
Sign Language

HSK 37
Handbücher zur
Sprach- und Kommunikations-
wissenschaft
Handbooks of Linguistics
and Communication Science

Manuels de linguistique et
des sciences de communication

Co-founded by Gerold Ungeheuer (†)

Co-edited 1985–2001 by Hugo Steger

Herausgegeben von / Edited by / Edités par


Herbert Ernst Wiegand

Band 37

De Gruyter Mouton
Sign Language
An International Handbook

Edited by
Roland Pfau
Markus Steinbach
Bencie Woll

De Gruyter Mouton
ISBN 978-3-11-020421-6
e-ISBN 978-3-11-026132-5
ISSN 1861-5090

Library of Congress Cataloging-in-Publication Data


A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek


The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie;
detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2012 Walter de Gruyter GmbH & Co. KG, 10785 Berlin/Boston


Typesetting: META Systems GmbH, Wustermark
Printing: Hubert & Co. GmbH & Co. KG, Göttingen
Cover design: Martin Zech, Bremen

∞ Printed on acid-free paper
Printed in Germany
www.degruyter.com
Preface
Five long years ago, we met to plan what looked like an impossibly ambitious project –
this Handbook. Since then, we have met in Berlin, London, Amsterdam, and Frankfurt;
we have exchanged hundreds of e-mails; we have read and commented on dozens of
chapters – and we have found the time to write our own. The work on this Handbook
has been challenging at times but it has also been inspiring and rewarding. We have
learned a lot.
Obviously, a project of this size would have been impossible without the help and
encouragement of others. We are therefore grateful to the people and organizations
that supported our work on this Handbook. First of all, we wish to express our grati-
tude to the section editors, who assisted us in providing feedback to authors and in
getting the chapters into shape: Onno Crasborn (section I), Josep Quer (section III),
Ronnie Wilbur (section IV), Trude Schermer (section VII), Adam Schembri (sec-
tion VIII), and Myriam Vermeerbergen (section IX).
As for the content and final shape of the chapters, we are indebted to all the pub-
lishers who granted us permission to reproduce figures, to Nancy Campbell, our metic-
ulous, reliable, and highly efficient editorial assistant, and to Sina Schade and Anna-
Christina Boell, who assisted us in the final check of consistency and formatting issues
as well as in putting together the index – a truly cumbersome task.
It was a true pleasure to cooperate with the professional and supportive people at
Mouton de Gruyter. We are indebted to Anke Beck for sharing our enthusiasm for
the project and for supporting us in getting the ball rolling. We are very grateful to
Barbara Karlson for guiding and encouraging us throughout the process. Her optimism
helped us to keep up our spirits whenever we felt that things were not going as
smoothly as we hoped. After talking to her, things always looked much brighter. Fi-
nally, we thank Wolfgang Konwitschny for his assistance during the production phase.
Bencie Woll’s work on the handbook has been supported by the Economic and
Social Research Council of Great Britain (Grants RES-620-28-6001 and 6002), Deaf-
ness, Cognition and Language Research Centre (DCAL). Roland Pfau’s editorial work
was facilitated thanks to a fellowship financed by the German Science Foundation
(DFG) in the framework of the Lichtenberg-Kolleg at the Georg-August-University,
Göttingen.
Last but definitely not least, we thank all the authors who contributed to the hand-
book for joining us in this adventure.
Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Notational conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Sign language acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

1. Introduction · Roland Pfau, Markus Steinbach & Bencie Woll . . . . . 1

I. Phonetics, phonology, and prosody


2. Phonetics · Onno Crasborn . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3. Phonology · Diane Brentari . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4. Visual prosody · Wendy Sandler . . . . . . . . . . . . . . . . . . . . . . . . 55

II. Morphology
5. Word classes and word formation · Irit Meir . . . . . . . . . . . . . . . . 77
6. Plurality · Markus Steinbach . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7. Verb agreement · Gaurav Mathur & Christian Rathmann . . . . . . . . 136
8. Classifiers · Inge Zwitserlood . . . . . . . . . . . . . . . . . . . . . . . . . 158
9. Tense, aspect, and modality · Roland Pfau, Markus Steinbach &
Bencie Woll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
10. Agreement auxiliaries · Galini Sapountzaki . . . . . . . . . . . . . . . . . 204
11. Pronouns · Kearsy Cormier . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

III. Syntax
12. Word order · Lorraine Leeson & John Saeed . . . . . . . . . . . . . . . 245
13. The noun phrase · Carol Neidle & Joan Nash . . . . . . . . . . . . . . . 265
14. Sentence types · Carlo Cecchetto . . . . . . . . . . . . . . . . . . . . . . . 292
15. Negation · Josep Quer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
16. Coordination and subordination · Gladys Tang & Prudence Lau . . . 340
17. Utterance reports and constructed action · Diane Lillo-Martin . . . . 365

IV. Semantics and pragmatics


18. Iconicity and metaphors · Sarah F. Taub . . . . . . . . . . . . . . . . . . . 388
19. Use of sign space · Pamela Perniss . . . . . . . . . . . . . . . . . . . . . . 412
20. Lexical semantics: Semantic fields and lexical aspect · Donovan Grose 432
21. Information structure · Ronnie B. Wilbur . . . . . . . . . . . . . . . . . . 462
22. Communicative interaction · Anne Baker & Beppie van den
Bogaerde . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489

V. Communication in the visual modality


23. Manual communication systems: evolution and variation · Roland
Pfau . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
24. Shared sign languages · Victoria Nyst . . . . . . . . . . . . . . . . . . . . 552
25. Language and modality · Richard P. Meier . . . . . . . . . . . . . . . . . 574
26. Homesign: gesture to language · Susan Goldin-Meadow . . . . . . . . . 601
27. Gesture · Aslı Özyürek . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626

VI. Psycholinguistics and neurolinguistics


28. Acquisition · Deborah Chen Pichler . . . . . . . . . . . . . . . . . . . . . 647
29. Processing · Matthew W. G. Dye . . . . . . . . . . . . . . . . . . . . . . . 687
30. Production · Annette Hohenberger & Helen Leuninger . . . . . . . . . 711
31. Neurolinguistics · David Corina & Nicole Spotswood . . . . . . . . . . 739
32. Atypical signing · Bencie Woll . . . . . . . . . . . . . . . . . . . . . . . . . 762

VII. Variation and change


33. Sociolinguistic aspects of variation and change · Adam Schembri &
Trevor Johnston . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 788
34. Lexicalization and grammaticalization · Terry Janzen . . . . . . . . . . . 816
35. Language contact and borrowing · Robert Adam . . . . . . . . . . . . . 841
36. Language emergence and creolisation · Dany Adone . . . . . . . . . . 862
37. Language planning · Trude Schermer . . . . . . . . . . . . . . . . . . . . 889

VIII. Applied issues


38. History of sign languages and sign language linguistics · Susan McBurney 909
39. Deaf education and bilingualism · Carolina Plaza Pust . . . . . . . . . . 949
40. Interpreting · Christopher Stone . . . . . . . . . . . . . . . . . . . . . . . 980
41. Poetry · Rachel Sutton-Spence . . . . . . . . . . . . . . . . . . . . . . . . . 998

IX. Handling sign language data


42. Data collection · Mieke Van Herreweghe & Myriam Vermeerbergen 1023
43. Transcription · Nancy Frishberg, Nini Hoiting & Dan I. Slobin . . . . 1045
44. Computer modelling · Eva Sáfár & John Glauert . . . . . . . . . . . . . 1075

Indexes
Index of subjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1103
Index of sign languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1120
Index of spoken languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1125
Notational conventions
As is common convention in the sign language literature, signs are glossed in small
caps (sign) in the examples as well as in the text. Glosses are usually in English,
irrespective of the sign language, except for examples quoted from other sources where
these are not in English (see chapter 43 for a detailed discussion of the challenges of
sign language transcription). The acronym for the respective sign language is always
given at the end of the gloss line (see next section for a list of the acronyms used in
this handbook). For illustration, consider the following examples from Sign Language
of the Netherlands (NGT) and German Sign Language (DGS).

y/n
(1) index2 h-a-n-s index3a book++ 2give:cl3a [NGT]
‘Will you give Hans the books?’
(2) two-days-ago monk^boss school index3a visit3a [DGS]
‘Two days ago, the abbot visited the school.’

With respect to manual signs, the following notation conventions are used.

index3/ix3 pointing sign used in pronominalization (e.g. index2 in (1)) and for localiz-
ing non-present referents and locations in the signing space (e.g. index3a
in (1) and (2)). The subscript numbers refer to points in the signing space
and are not necessarily meant to reflect person distinctions: 1 = towards
signer’s chest; 2 = towards addressee; 3a/3b = towards ipsi- or contralateral
side of the signing space.
1sign3a verb sign moving in space from one location to another; in (1), for example,
the verb sign give moves from the locus of the addressee to the locus intro-
duced for the non-present referent ‘h-a-n-s’.
s-i-g-n represents a fingerspelled sign.
sign^sign indicates either the combination of two signs in a compound, e.g.
monk^boss ‘abbot’ in (2), or a sign plus affix/clitic combination (e.g.
know^not); in both types of combinations, characteristic assimilation and/
or reduction processes may apply.
sign-sign indicates that two or more words are needed to gloss a single sign (e.g.
two-days-ago in (2)).
sign++ indicates reduplication of a sign to express grammatical features such as
plurality (e.g. book++ in (1)) or aspect (e.g. iterative or durative aspect).
cl indicates the use of a classifier handshape that may combine with verbs of
movement and location (e.g. give in (1)); throughout the handbook, differ-
ent conventions are used for classifiers: the cl may be further specified by
a letter of the manual alphabet (e.g. cl:c) or by a subscript specifying either
a shape characteristic or the entity that is classified (e.g. clround or clcar).

Lines above the glosses (as in (1)) indicate the scope, that is, the onset and offset of a
particular non-manual marker, be it a lexical, a morphological, a syntactic, or a pro-

sodic marker. Below we provide a list of the most common markers. Note that some
of the abbreviations used refer to the function of the non-manual marker (e.g. ‘top’
and ‘neg’) while others refer to its form (e.g. ‘re’ and ‘hs’). When necessary, additional
markers will be introduced in the respective chapters.

/xxx/ lexical marker: a mouthing (silent articulation of (part of) a spoken word)
associated with a sign;
xxx lexical or morphological marker: a mouth gesture associated with a sign;
top syntactic topic marker;
wh syntactic wh-question marker;
y/n syntactic yes/no-question marker (as in (1));
rel syntactic relative clause marker;
neg syntactic negation marker;
hs headshake;
hn headnod;
re raised eyebrows.
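
For readers who process glossed examples computationally, the conventions above can be made concrete as a small data structure. The following Python sketch is purely illustrative: the class and field names are our own invention, not part of the handbook's conventions. It encodes example (1), with the yes/no marker scoping over the whole gloss line.

from dataclasses import dataclass

@dataclass
class NonManual:
    marker: str    # e.g. 'y/n' for the syntactic yes/no-question marker
    onset: int     # index of the first gloss in the marker's scope
    offset: int    # index of the last gloss in the marker's scope (inclusive)

@dataclass
class GlossedExample:
    glosses: list          # manual signs, one gloss string per sign
    nonmanuals: list       # scoped non-manual markers
    language: str          # sign language acronym, e.g. 'NGT'
    translation: str

# Example (1): the y/n line extends over all five manual signs.
ex1 = GlossedExample(
    glosses=['index2', 'h-a-n-s', 'index3a', 'book++', '2give:cl3a'],
    nonmanuals=[NonManual(marker='y/n', onset=0, offset=4)],
    language='NGT',
    translation='Will you give Hans the books?',
)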

As for handshapes, whenever possible, the Tang handshape font is used (http://
www.cuhk.edu.hk/cslds), instead of labels relating to manual alphabet or counting sys-
tems, because the latter may differ from sign language to sign language (e.g. T-hand is
different in ASL, NGT, and DGS); that is, we use ‘:-hand’ instead of ‘C-hand’, etc.
The usual convention concerning the use of upper case D in Deaf vs. deaf is re-
spected. Deaf with an upper-case D refers to (members of) linguistic communities
characterized by the use of sign languages. Lower case deaf refers to an individual’s
audiological status.
Sign language acronyms
Below we provide a list of sign language acronyms that are used throughout the hand-
book. Within every chapter, acronyms will also be introduced when a particular sign
language is mentioned for the first time. For some sign languages, alternative acronyms
exist in the sign language literature (for instance, ISL is commonly used for both Israeli
Sign Language and Irish Sign Language, and Libras for Brazilian Sign Language). Note
that some of the acronyms listed below are based on the name of the sign language in
the respective country; these names are given in brackets in italics.

ABSL Al-Sayyid Bedouin Sign Language (Israel)


AdaSL Adamorobe Sign Language (Ghana)
ASL American Sign Language
Auslan Australian Sign Language
BSL British Sign Language
CisSL Cistercian Sign Language
CSL Chinese Sign Language
DGS German Sign Language (Deutsche Gebärdensprache)
DSL Danish Sign Language
FinSL Finnish Sign Language
GSL Greek Sign Language
HKSL Hong Kong Sign Language
HZJ Croatian Sign Language (Hrvatski Znakovni Jezik)
IPSL Indopakistani Sign Language
IS International Sign
Irish SL Irish Sign Language
Israeli SL Israeli Sign Language
ISN Nicaraguan Sign Language (Idioma de Señas Nicaragüense)
KK Sign Language of Desa Kolok, Bali (Kata Kolok)
KSL Korean Sign Language
LIL Lebanese Sign Language (Lughat il-Ishaarah il-Lubnaniah)
LIS Italian Sign Language (Lingua Italiana dei Segni)
LIU Jordanian Sign Language (Lughat il-Ishaara il-Urdunia)
LSA Argentine Sign Language (Lengua de Señas Argentina)
LSB Brazilian Sign Language (Língua de Sinais Brasileira)
LSC Catalan Sign Language (Llengua de Signes Catalana)
LSE Spanish Sign Language (Lengua de Señas Española)
LSF French Sign Language (Langue des Signes Française)
LSQ Quebec Sign Language (Langue des Signes Québécoise)
MSL Mauritian Sign Language
NCDSL North Central Desert Sign Language (Australia)
NGT Sign Language of the Netherlands (Nederlandse Gebarentaal)
NS Japanese Sign Language (Nihon Syuwa)
NSL Norwegian Sign Language
NZSL New Zealand Sign Language
ÖGS Austrian Sign Language (Österreichische Gebärdensprache)

PISL Plains Indian Sign Language (North America)
PISL Providence Island Sign Language
RSL Russian Sign Language
SASL South African Sign Language
SGSL Swiss-German Sign Language
SKSL South Korean Sign Language
SSL Swedish Sign Language
TİD Turkish Sign Language (Türk İşaret Dili)
TSL Taiwan Sign Language
VGT Flemish Sign Language (Vlaamse Gebarentaal)
WSL Warlpiri Sign Language (Australia)
YSL Yolngu Sign Language (Australia)
1. Introduction
1. The impact of sign language research on linguistics
2. Why a handbook on sign language linguistics is timely and important
3. Structure of the handbook
4. Literature

1. The impact of sign language research on linguistics

Before the beginning of sign language linguistics, sign languages were regarded as ex-
emplifying a primitive universal way of communicating through gestures. Early sign
linguistic research from the 1960s onward emphasized the equivalences between sign
languages and spoken languages and the recognition of sign languages as full, complex,
independent human languages. Contemporary sign linguistics now explores the similar-
ities and differences between different sign languages, and between sign languages and
spoken languages. This move has offered a new window on human language but has
also posed challenges to linguistics. While it is uncommon to find an introductory text
on linguistics which does not include some mention of sign language, and sign language
linguistics is increasingly offered as a subject within linguistics departments, instead of
being restricted to departments of speech and language pathology, there is still great
scope for linguists to recognize that sign language linguistics provides a unique means
of exploring the most fundamental questions about human language: the role of modal-
ity in shaping language, the nature of linguistic universals approached cross-modally,
the functions of iconicity and arbitrariness in language, and the relationship of language
and gesture. The answers to these questions are not only of importance within the field
of linguistics but also to neuroscience, psychology, the social sciences, and to the broa-
dest understanding of human communication. It is in this spirit that this Handbook
has been created.

2. Why a handbook on sign language linguistics is timely and important

The sign language linguistics scene has been very active in recent years. First of all, sign
language linguists have contributed (and continue to contribute) to various handbooks,
addressing topics from a sign language perspective and thus familiarizing a broader
audience with aspects of sign language research and structure; e.g. linguistics in general
(Sandler/Lillo-Martin 2001), cognitive linguistics (Wilcox 2007), linguistic analysis (Wil-
cox/Wilcox 2010), phonology (Brentari 2011), grammaticalization (Pfau/Steinbach
2011), and information structure (Kimmelman/Pfau forthcoming). A recent handbook
that focuses entirely on sign languages is Brentari (2010); this handbook covers
three broad areas: transmission, structure, and variation and change. There have also
been several comprehensive introductory textbooks on single sign languages – e.g.

British Sign Language (Sutton-Spence/Woll 1999), Australian Sign Language (John-
ston/Schembri 2007), and Israeli Sign Language (Meir/Sandler 2008) – which discuss
some of the issues also addressed in the present handbook. The focus of these books,
however, is clearly on structural, and to a lesser extent, historical and sociolinguistic,
aspects of the respective sign language. A textbook that focuses on structural and
theoretical aspects of sign language grammar, discussing examples from different sign
languages (mostly American Sign Language and Israeli Sign Language), is Sandler and
Lillo-Martin (2006). The central aim of that book is to scrutinize the existence of
alleged linguistic universals in the light of languages in the visual-gestural modality.
The time is thus ripe for a handbook on sign language linguistics that addresses a
wider range of topics from cross-linguistic, cross-modal, and theoretical perspectives.
It is these features which distinguish the present handbook from previous publications,
making it a unique source of information: First, it covers all areas of contemporary
linguistic research. Second, given that sign language typology is a fascinating and prom-
ising young research field, authors have been encouraged to address the topic of their
chapter from a broad typological perspective, including – wherever possible – data
from different sign languages, thus also illustrating the range of variation attested
among sign languages. Third, where appropriate, the contributions also sketch theoreti-
cal analyses for the phenomena under discussion, providing a neutral survey of existing,
sometimes conflicting, approaches. Therefore, this handbook is of relevance to general
linguistics, that is, it is designed not only for linguists researching sign language but
also for linguists researching spoken language. Examples are provided from a large
number of sign languages covering all regions of the world, illustrating the similarities
and differences among sign languages and between sign languages and spoken lan-
guages. The book is also of interest to those working in related fields such as psycholin-
guistics and sociolinguistics and to those in applied fields, such as language learning
and neuropsychology.

3. Structure of the handbook

The handbook consists of 44 chapters organized in nine sections, each of which has
been supervised by a responsible section editor. Although each chapter deals with a
specific topic, several topics make an appearance in more than one chapter. The first
four sections of the handbook (sections I–IV) are dedicated to the core modules of
grammar (phonetics, phonology, morphology, syntax, semantics, and pragmatics). The
fifth section deals with issues of sign language evolution and typology, including a
discussion of the similarities and differences between signing and gesturing. Psycho-
and neurolinguistic aspects of sign languages are discussed in section VI. Section VII
addresses sociolinguistic variation and language change. Section VIII discusses a num-
ber of applied issues in sign language linguistics such as education, interpreting, and
sign language poetry. Finally, section IX deals with questions of sign language docu-
mentation, transcription, and computer modelling.
Despite the broad coverage, a few topics do not receive a detailed discussion in the
handbook; among these are topics such as Deaf culture, literacy, educational practices,
mental health, sign language assessment, ethical issues, and cochlear implants. We refer

the reader to Marschark and Spencer (2003, 2010), two comprehensive handbooks that
address these and many other issues of an applied nature. We hope – whatever one's
background – the reader will be drawn along new paths of interest and discovery.

4. Literature
Brentari, Diane (ed.)
2010 Sign Languages (Cambridge Language Surveys). Cambridge: Cambridge University
Press.
Brentari, Diane
2011 Sign Language Phonology. In: Goldsmith, John A./Riggle, Jason/Yu, Alan C. L. (eds.), The
Handbook of Phonological Theory (2nd Revised Edition). Oxford: Blackwell, 691–721.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language. An Introduction to Sign Language Linguistics. Cambridge:
Cambridge University Press.
Kimmelman, Vadim/Pfau, Roland
forthcoming Information Structure in Sign Languages. In: Féry, Caroline/Ishihara, Shinichiro
(eds.), The Oxford Handbook of Information Structure. Oxford: Oxford University Press.
Marschark, Mark/Spencer, Patricia E. (eds.)
2003 Oxford Handbook of Deaf Studies, Language, and Education. Oxford: Oxford Univer-
sity Press.
Marschark, Mark/Spencer, Patricia E. (eds.)
2010 Oxford Handbook of Deaf Studies, Language, and Education, Volume 2. Oxford: Ox-
ford University Press.
Meir, Irit/Sandler, Wendy
2008 A Language in Space. The Story of Israeli Sign Language. New York: Lawrence Erl-
baum.
Pfau, Roland/Steinbach, Markus
2011 Grammaticalization in Sign Languages. In: Narrog, Heiko/Heine, Bernd (eds.), The
Oxford Handbook of Grammaticalization. Oxford: Oxford University Press, 683–695.
Sandler, Wendy/Lillo-Martin, Diane
2001 Natural Sign Languages. In: Aronoff, Mark/Rees-Miller, Janie (eds.), The Handbook of
Linguistics. Oxford: Blackwell, 533–562.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Languages and Linguistic Universals. Cambridge: Cambridge University Press.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Wilcox, Sherman
2007 Signed Languages. In: Geeraerts, Dirk/Cuyckens, Herbert (eds.), The Oxford Hand-
book of Cognitive Linguistics. Oxford: Oxford University Press, 1113–1136.
Wilcox, Sherman/Wilcox, Phyllis P.
2010 The Analysis of Signed Languages. In: Heine, Bernd/Narrog, Heiko (eds.), The Oxford
Handbook of Linguistic Analysis (Oxford Handbooks in Linguistics). Oxford: Oxford
University Press, 739–760.

Roland Pfau, Amsterdam (The Netherlands)


Markus Steinbach, Göttingen (Germany)
Bencie Woll, London (United Kingdom)
I. Phonetics, phonology, and prosody

2. Phonetics
1. Introduction
2. The modality difference
3. Phonetics vs. phonology
4. Articulation
5. Phonetic variation
6. Conclusion
7. Literature

Abstract
Sign and spoken languages differ primarily in their perceptual channel, vision vs. audi-
tion. This ‘modality difference’ has an effect on the structure of sign languages through-
out the grammar, as is discussed in other chapters in this volume. Phonetic studies of
sign languages typically focus on the articulation of signs. The arms, hands, and fingers
form very complex articulators that allow for many different articulations for any given
phonological specification for hand configuration, movement, and location. Indeed pho-
netic variation in sign language articulation is abundant, and in this respect, too, sign
languages resemble spoken languages.

1. Introduction
Sign languages are produced by body movements that are perceived visually, while
spoken languages are produced by vocal articulation and perceived by the ear. This
most striking difference between sign and spoken languages is termed the ‘modality
difference’. It refers to a difference in communication channel that is often considered
to be the ultimate cause for structural differences between spoken and sign languages.
Since auditory perception is better suited to processing fine temporal detail than
visual perception, and since the manual articulators in signing move more slowly than the
oral articulators in speech, one would, for example, predict the richness of simultaneous
information found in sign languages (Vermeerbergen/Leeson/Crasborn 2006).
In all, this chapter aims to characterise the area of sign language phonetics rather
than to provide an exhaustive overview of the studies that have been done. The focus
will be on the manual component in terms of articulation and phonetic variation. De-
spite the great importance that is often (intuitively) attributed to the phonetic differ-
ence between sign and speech, relatively little research within the field of sign language
studies has focused on the area of sign language phonetics, especially in comparison to
the phonological analysis of sign languages. This is illustrated by the fact that none of
the textbooks on sign language that have appeared in recent years includes ‘phonetics’

as a keyword (e.g., Boyes Braem 1995; Sutton-Spence/Woll 1999; Emmorey 2002; San-
dler/Lillo-Martin 2006; Johnston/Schembri 2007; Meir/Sandler 2008).
In section 2, the modality difference is discussed in further detail. Section 3 will
then discuss the relation between phonetics and phonology in sign languages, as it
may not be self-evident how a phonetic and a phonological level of analysis can be
distinguished in a visual language. Section 4 discusses articulation, and section 5 takes
a look at phonetic variation. (Note that perception studies are also discussed in sec-
tion F of the handbook, see especially chapter 29 on processing. The phonetic tran-
scription and notation of sign languages are covered in chapter 43.)

2. The modality difference

It is attractive to see modality as a black-and-white distinction in channel between
spoken language and sign language. One is auditory, the other visual. The deep embed-
ding of writing systems and written culture in many civilisations has perhaps contrib-
uted to our view of spoken language as a string of sounds, downplaying the presence
of non-verbal communication and visual communication more generally among hear-
ing people (Olson 1994). Yet there is growing evidence for the multimodality of spoken
language communication among hearing people. It is clear that visual aspects of com-
munication among hearing people can be complementary to auditory signals. For ex-
ample, emotional state is often visible in the facial expression while someone speaks
(Ekman 1993), and many interactional cues are expressed by a wide variety of head
movements (McClave 2000). Manual gestures are known to serve many functions that
complement the content of the spoken utterances (McNeill 1992; Kendon 2004).
Moreover, there is also evidence that speech itself is not only perceived auditorily
but also visually. McGurk and MacDonald (1976) showed that the visible state of the
face can influence the auditory perception of consonants. More recently, Swerts and
Krahmer (2008) demonstrated that manual beat gestures are perceived as lending
increased prominence to the simultaneously uttered spoken word. However,
while hearing people are very skilled at perceiving speech without looking at the
speaker (as when communicating by telephone), they are very bad at speech-reading
without any acoustic input (Woodward/Barber 1960). Only a small subset of the articu-
latory features of speech sounds can actually be seen (mainly lip rounding and opening,
labiodental contact, and jaw height), while others such as the state of the glottis, velum
lowering, and tongue dorsum height are invisible. Thus, for the segmental or syllabic
level in speech, it remains fair to say that speech primarily makes use of the acoustic-
auditory modality, while there is some visual input as well.
So as a starting point, it should be emphasised that the ‘modality difference’ appears
not to be a black-and-white contrast in phonetic channel. While sign languages are
exclusively perceived visually by their core users, deaf people, spoken languages are
perceived both auditorily and visually. Ongoing research on spoken language commu-
nication is exploring the role of visual communication among hearing people more and
more, including the role of gestures and facial expressions that are exclusively ex-
pressed visually. Hearing users of sign languages can in principle also hear some of the
sounds that are made, for instance by the lips or the hands contacting each other, yet

                          ACTION            SIGNAL    PERCEPTION
Hearing communication:    vocal actions  →  sound  →  auditory perception
                          bodily actions →  light  →  visual perception
Deaf communication:       bodily actions →  light  →  visual perception
Fig. 2.1: The modality difference

this is unlikely to have a substantial phonetic impact on the linguistic structure of sign
languages given the fact that the core users of sign languages only have little residual
hearing, if any. The modality difference is summarised in Figure 2.1.
Where researchers have made significant progress in the acoustic analysis of the
speech signal and in the study of auditory perception, we have very little knowledge
of the signal and perception components of the communication chain of sign languages.
Yet these are important to study, as general human perceptual abilities form the frame-
work within which linguistic perception takes place. The phonetic research that has
been done has focused almost exclusively on the articulation of sign languages (but
see Bosworth 2003 for a notable exception). Therefore this chapter will also be pri-
marily devoted to sign language articulation. The reason for this may be that visual
perception is extremely complex. While there are only a few parameters of a small
section of the electromagnetic spectrum that the human visual system can exploit (lu-
minance and wavelength), these parameters constitute the input to a large array of
light-sensitive tissue (the retina) of the two eyes, which themselves move with our head
and body movements and which can also move independently (together constituting
‘eye gaze’). The human brain processes this very complex input in highly intricate ways
to give us the conscious impression that we see three-dimensional coloured objects
moving through space over time (Zeki 1993; Palmer 1999).
At a high level of processing, there are abstract forms that the brain can recognise.
There have been very few if any sign language studies that have aimed to describe the
phonetic form of signs in such abstract visual categories (see Crasborn 2001, 2003 for
attempts in that direction). It is clearly an underexplored area in the study of sign
languages. This may be due to the lack of a specialised field of ‘body movement percep-
tion’ in perceptual psychology that linguists can readily borrow a descriptive toolkit
from, whereas anatomical and physiological terminology is gratefully borrowed from
the biological and medical sciences when talking about the articulation of finger move-
ments, for example.
Two generalisations about visual perception have made their way into the sign lan-
guage literature in attempts to directly link properties of visual perception to the struc-
ture of sign languages. First, Siple (1978) noted that the visual field can be divided into
a ‘centre’ and a ‘periphery’. The centre is a small area in which fine spatial detail is
best processed, while in the relatively large periphery it is motion rather than fine
details that are best perceived. Siple argued that native signers perceiving ASL focus
their eye gaze around the chin, and do not move their gaze around to follow the
movements of the hands, for example. Thus, someone looking at signing would see
more details of handshape, orientation, and location for signs near the face than for
signs made lower on the body or in front of the trunk. This distinction might then

provide an explanatory basis for finer phonological location distinctions near the face
area as compared to the upper body area. Irrespective of the data on phonological
location distinctions, this hypothesis is hard to evaluate since the face area also includes
many visual landmarks that might also help perceivers distinguish small phonetic dif-
ferences in place of articulation and categorise these as phonologically distinct loca-
tions. Since 1978, very few if any eye tracking studies have specifically evaluated to
what extent eye gaze is actually relatively immobile and focused on the chin in sign
language perception. Also, we do not know whether this differs for different sign lan-
guages, nor whether there are differences in the perceptual behaviour of early versus
late sign language learners. A related hypothesis that has not yet been tested is that
there are more and finer handshape distinctions in the lexicon of any sign language
for locations at the face than for lower locations.
The second generalisation concerns the temporal processing of sound versus light.
Auditory perception is much better suited to distinguishing fine temporal patterns than
visual perception. This general difference is sometimes correlated with the sequential
structure found in spoken language phonology, where a sequence of segments together
can constitute one syllable, and in turn sequences of syllables can be the form of single
morphemes. In sign language, morphemes typically do not show such temporal com-
plexity (van der Kooij/Crasborn 2008). The phonological structure of signs is discussed
in the next chapter in this section. While the perceptual functional explanation for the
difference in phonological structure may well be valid, there is an equally plausible
explanation in terms of articulatory differences: the large difference in size between
the arms, hands, and fingers that are mostly involved in the realisation of lexical items
and the oral articulators involved in the production of speech sounds leads to a differ-
ence in the speed of movement, assuming a constant energy expense. The mouth,
lips, and tongue are faster than the fingers and hands, and we thus correctly predict
more fine-grained temporal articulations in speech than in sign. As for the first general-
isation about the influence of language modality on structure, very few if any concrete
studies have been done in this area, for example allowing us to disentangle articulatory
and perceptual influences.

3. Phonetics vs. phonology

The phonetic study of sign languages includes the low-level production and perception
of manual and non-manual signals. It is much less evident how such phonetic analysis
of language relates to the phonological structure. As chapter 3 on phonology makes
clear, we have a good understanding of the phonological characteristics of several sign
languages and of sign languages in general. However, one cannot directly observe the
categorical properties and structures in sign language phonology: they have to be in-
ferred from the gradient phonetic form. Perhaps the impression that we can see the
articulators in sign languages has made it self-evident what the phonological form looks
like, and in that way reduced the need for an accurate phonetic description.
The first description of the manual form of signs that was introduced by Stokoe
(1960) in his groundbreaking work was clearly targeted at the lexical phonological
level. It used explicit articulatory terms in the description of the orientation of the

hand, even though it aimed to characterise the distinctions within this ‘minor’ param-
eter at a phonological level. Orientation was characterised in terms of ‘prone’ and
‘supine’, referring to the rotation of the forearm around its length axis. There has never
been a phonetic variant of Stokoe’s system that has been commonly used as a phonetic
notation system. Phonetic notation systems such as HamNoSys (http://www.sign-
lang.uni-hamburg.de/projects/hamnosys.html) are sometimes used in lexicography.
HamNoSys itself is based on the linguistic analyses initiated by Stokoe, describing the
handshape, location, and movement for a manual sign, but it allows for the transcrip-
tion of finer phonetic detail than a phonological characterisation would require, and
like the International Phonetic Alphabet (IPA) for spoken languages it is not designed
for one specific language (see chapter 43 for details). Another ongoing effort to de-
scribe phonetic events in sign languages targets American Sign Language
(ASL) at a fine articulatory level of detail, yet still incorporates categories (similar to
‘movements’ and ‘holds’) that cannot be directly observed in a video recording of sign
but that derive from a specific phonological analysis (Johnson/Liddell, 2010, 2011a,b,
to appear).
What we consider to be ‘phonetic’ and ‘phonological’ descriptions and how these
two interact depends on our model of these different components of language form.
Different types of spoken language models have been applied to sign languages, from
rule-based formalisms of the SPE (Chomsky/Halle 1957) type to modern constraint-
based models (e.g., Sandler 1989; Corina/Sandler 1993; van der Hulst 1993; Brentari
1998). Irrespective of the specific model that is used, such models can help us to get a
better grip on what we talk about when we describe a phonetic form in sign language.
As an example, Figure 2.2 presents an overview of the Functional Phonology model
developed by Boersma (1998, 2007) for spoken languages that was adopted by Cras-
born (2001) for the description of a sign language.

Fig. 2.2: The Functional Phonology model

For example, take the sign proof from Sign Language of the Netherlands (NGT) as
illustrated in Figure 2.3. The underlying form of this sign specifies that the dominant
hand touches the non-dominant hand repeatedly, and that the shape of the two hands
is flat with all fingers selected. By default, signs that are specified for a location on the

non-dominant hand are realised with both hands in the centre of neutral space. This
predictable aspect of the phonological form is added to form the phonological surface
representation in the phonetic implementation, and it may be impacted by the phonetic
context, showing coarticulation effects (Ormel/Crasborn/van der Kooij 2012). Likewise,
the phonological characterisation of the form of signs does not contain any details of
how the movement is executed: whether it is the elbow, wrist, or even the fingers that
extend to realise the contact with the other hand, or a combination of these, is left to the phonetic
implementation. It is not fully predictable by phonological rules alone as the phonetic
form of a word or sign is also determined by all kinds of sociolinguistic and practical
factors (see Crasborn 2001 for extensive discussion). In the instance of the sign proof
in Figure 2.3, all three joint types appear to participate in the downward movement.
This specific type of phonetic variation will be further discussed in section 5.5.

Fig. 2.3: proof (NGT)

In the Functional Phonology model, the form of signs that is stored in the lexicon is a
perceptual target, whereas the concrete phonetic realisation at a given point in time
needs to be characterised at both an articulatory and a perceptual level in order to be
properly understood. Most phonological models of sign languages aim for the charac-
terisation of the underlying form of signs, yet this can be viewed as clearly distinct
from the phonetic form that is generated by the phonetic implementation in the model
above. Section 5 of this chapter will discuss studies on phonetic variation, and we will
see how these different articulations (phonetic forms) relate to a single underlying
representation. First, section 4 will discuss in some detail how the articulation of signs
can be described.
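
To make the division of labour between the stored form, the phonological surface form, and the phonetic implementation concrete, here is a schematic Python sketch. It is a simplification under our own assumptions: the feature names and the single default rule are invented for illustration and do not implement Boersma's or Crasborn's actual model.

# Illustrative sketch of the pipeline in Figure 2.2 for the NGT sign PROOF.

LEXICON = {
    'PROOF': {                       # underlying (perceptual) target
        'location': 'non-dominant hand',
        'selected_fingers': 'all',
        'finger_config': 'flat',
        'movement': 'repeated contact',
    }
}

def surface_form(underlying):
    """Phonology: add predictable information to the underlying form."""
    sf = dict(underlying)
    if sf['location'] == 'non-dominant hand':
        # Default: both hands are realised in the centre of neutral space.
        sf['place_in_space'] = 'centre of neutral space'
    return sf

def phonetic_implementation(sf, joints=('elbow', 'wrist', 'fingers')):
    """Phonetic implementation: choose one concrete articulation. Which
    joints execute the movement is left open by the phonology and is
    co-determined by context, rate, and sociolinguistic factors."""
    pf = dict(sf)
    pf['articulating_joints'] = list(joints)
    return pf

pf = phonetic_implementation(surface_form(LEXICON['PROOF']))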

4. Articulation

4.1. Levels of description

The articulation of manual signs can be characterised in different ways. Figure 2.4a
presents an overview of the parts of the upper limb. We can describe the location and

orientation of the various body parts (fingers, whole hand, forearm, upper arm) in
space or relative to the upper body or head, for example. In the sign language litera-
ture, we mostly find descriptions of the whole hand or of one or more of the fingers
with respect to a body location or in the ‘neutral space’ in front of the body. Such
descriptions rarely describe in detail the location and rotation of the upper arm, for
example. It is the ‘distal end’ of the articulator that realises the phonologically specified
values for location and movement in almost all lexical items in sign languages studied
to date. The anatomical terms ‘distal’ and ‘proximal’ refer to the relative location with
respect to the torso, following the line of the arm and hand (see Figure 2.4b). An
additional pair of terms displayed in Figure 2.4b is 'ipsilateral – contralateral'. These
are similar to 'left – right', yet take the side of the active articulator as a basis: ipsilateral
refers to the side of the articulator in question, whereas contralateral refers to the
opposite side. As such, these terms are better suited to describe the bilaterally symmetric
human body than the terms 'left – right' are.

Fig. 2.4: Terminology used for the description of manual signs. a. Body parts and joints;
b. Location terms; c. Sides of the hand; d. Rotation states of the forearm
Alternatively, one can also look at manual articulations by focusing on the state of
the different joints, from the shoulder to the most distal finger joints. For joints like
the elbow that have only one degree of freedom, this is very straightforward, while
other joints are more complex. The wrist has two degrees of freedom in its movement
(flexion-extension and lateral flexion-extension), while the shoulder not only allows
movement of the upper arm at the upper body (three degrees of freedom: flexion in
two dimensions plus rotation about the upper arm axis), but also shows restricted
movement of the shoulder blade and clavicle with respect to the torso, affecting the
whole arm plus the hand.
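
Collected as data, the degrees of freedom just listed look as follows; this is a deliberate simplification in Python (the shoulder girdle is noted only as a comment, and anatomical reality is richer than any such table).

# Degrees of freedom of the arm joints as described above (simplified).
ARM_JOINT_DOF = {
    'shoulder': ['flexion in two dimensions',           # 2 DOF
                 'rotation about the upper arm axis'],  # +1 DOF
    'elbow':    ['flexion/extension'],                  # 1 DOF
    'wrist':    ['flexion/extension',                   # 2 DOF
                 'lateral flexion/extension'],
}
# Not included: the restricted movement of the shoulder blade and clavicle
# relative to the torso, which displaces the whole arm plus the hand.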
In addition to describing articulation in terms of body part states or joint states,
one can look at the muscles involved in movements of the arms and hands. There are
a large number of muscles involved in the articulation of each sign, and as they are
not directly visible, knowledge about the anatomy and physiology of the hand is needed
to create such descriptions. Several sign language studies have focused on this level of
description in an attempt to phonetically distinguish easy from hard articulations; these
will be discussed in section 4.2.
The phonological description of signs typically centres on the hand: its shape, rota-
tion in space, location, and movement are represented in the lexicon. Such a specifica-
tion does not contain a concrete articulatory specification, irrespective of the level of
description. In terms of the model outlined in Figure 2.2, a phonetic implementation
is needed to generate a phonetic form from a phonological surface form. Take for
example the NGT sign india. Its phonological specification includes the location fore-
head, the extended thumb as the selected finger, and a rotation movement of the thumb
at the forehead. As the state of more proximal joints will influence the location of the

end of the extremity, the state of the upper body will also influence the location of the
fingertips. Thus, bringing the tip of the thumb to the forehead (in other words, articulat-
ing the phonological location) does not only involve a specific state of the shoulder,
elbow, wrist, and thumb joints, but needs to take into account the current state of the
upper body and head. When the head is turned rightwards, the hand will also need to
be moved rightwards, for example by rotating the upper arm outwards. Thus, while
the phonological specification of a sign contains global phonetic information on the
realisation of that sign, it is quite different from its actual articulation in a given in-
stance.
Although this section aimed to characterise the articulation of manual parts of signs,
a short note on non-manual articulations is in order. The articulations of the jaw, head,
and upper body can be described in ways similar to those of the arms and hands. Facial
articulations are different in that other than the lower jaw there are no bones underly-
ing the skin of the face that can move. Rather, what we see when we describe facial
expressions are the impact that the muscles have on the skin of the face. Psychologist
Paul Ekman and colleagues have developed a notation system to analyse these articula-
tions. The system emphasises that there is no one-to-one mapping between muscle
actions and visible changes in the skin. In other words, we cannot directly see the
muscles, but only their effect on the facial skin. The FACS coding system uses the term
‘action unit’ for each type of articulation; each action unit can be the result of the
action of one or more muscles (Ekman/Friesen/Hager 2002).
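
As a concrete illustration of the mapping between action units and muscles, here is a small Python excerpt of well-documented FACS action units; the selection and phrasing are ours, and the full system distinguishes several dozen action units.

# A few well-known FACS action units and the muscles standardly
# associated with them (illustrative excerpt, not the full system).
FACS_ACTION_UNITS = {
    'AU1':  ('inner brow raiser', ['frontalis (medial part)']),
    'AU2':  ('outer brow raiser', ['frontalis (lateral part)']),
    'AU4':  ('brow lowerer',      ['corrugator supercilii',
                                   'depressor supercilii', 'procerus']),
    'AU12': ('lip corner puller', ['zygomaticus major']),
}
# Note that a single action unit may be produced by several muscles:
# what is coded is the visible change in the skin, not the muscle itself.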

4.2. Ease of articulation

In an effort to explain the relative frequency of some forms over others in the lexicon
of sign languages, among other things, several studies have looked at the anatomy
and physiology of the upper extremity. In particular, the muscles that are used in the
articulation of aspects of signs have been discussed in a number of studies. Mandel
(1979) looked at the extensor muscles of the fingers, showing that these are not long
enough to fully flex the fingers at all joints when the wrist is also maximally flexed.
This physiological fact has an impact on the possible movements of the wrist and
fingers. One can easily test this by holding the forearm horizontal and pronated, and
relaxing both wrist and finger muscles. When one then quickly forms a fist, the wrist
automatically extends. Similarly, when the wrist quickly flexes from a neutral or ex-
tended state, the fingers automatically extend to accommodate the new position of the
wrist. The slower these movements are performed, the better they can be controlled,
although in the end the anatomy restricts the possible range of movement and the
resulting states of the different joints in combination. At normal signing speed, we do
expect to find a certain influence of this ‘knuckle-wrist connection’, as Mandel called
it: closing movements of all fingers are likely to be combined with wrist extension,
which in turn leads to a dorsal movement of the hand. Mandel argues that these dorsal
movements are typically enhanced as path movements of the whole hand through
space in ASL; conversely, opening movements of the fingers tend to be combined
with path movements in the direction of the palmar surface of the hand. Thus, while
phonologically, path movement direction and handshape change are independent,
there is a phonetic effect that relates the two. This is illustrated by the two configurations
in Figure 2.5: when all fingers are closed (2.5a), the wrist is hyperextended; by
consequence, the hand appears more 'backwards' than when all fingers are open and
the wrist can flex (2.5b).

Fig. 2.5: The relation between finger extension and hand position in two articulatory
configurations. (a) Fingers flexed, wrist hyperextended; (b) Fingers extended, wrist flexed
The literature on ASL contains several studies on handshape that make reference to
the articulation of the fingers, arguing that some handshapes are easier to articulate
than others (Mandel 1981; Woodward 1982, 1985, 1987; Ann 1993). Patterns of fre-
quency of occurrence – both within the ASL lexicon and in comparison to the lexicons
of other sign languages – were taken as evidence for the 'unmarked' status of
handshapes with only the index, thumb, or little finger extended, or with all fingers
extended. Supporting evidence came from the order of acquisition of such handshapes.
Such distributional (phonological) patterns were related to articulatory (phonetic)
properties. Ann (1993, 2008) was the first to perform a detailed physiological study of
the articulation of all handshapes. She argued that many of the patterns that were
found could be explained by reference to the anatomy and physiology of the hand. For
instance, both the index finger and the little finger have a separate extensor muscle
and tendon allowing them to extend independently (viz. the extensor indicis proprius
and the extensor digiti minimi). The middle and ring fingers do not: they can only be
extended on their own by employing a shared extensor muscle for all four fingers (the
extensor digitorum communis) while other muscles simultaneously flex the other fin-
gers.
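
The anatomical asymmetry just described can be restated as a simple lookup; this Python fragment is our own arrangement of the facts cited from Ann, not part of her analysis.

# Extensor muscles available to each finger (simplified, after Ann 1993, 2008).
FINGER_EXTENSORS = {
    'index':  ['extensor digitorum communis', 'extensor indicis proprius'],
    'middle': ['extensor digitorum communis'],   # no dedicated extensor
    'ring':   ['extensor digitorum communis'],   # no dedicated extensor
    'little': ['extensor digitorum communis', 'extensor digiti minimi'],
}

def independently_extendable(finger):
    # A finger extends on its own easily only if it has its own extensor.
    return len(FINGER_EXTENSORS[finger]) > 1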
A different articulatory constraint appears to play a role in the formation of some
morphological forms. Mathur and Rathmann (2001) argued that the range of motion
of the arm joints restricts the inflection of some verbs in sign languages. Inflections for
first person plural objects (as in ‘send us’) do not occur if their articulation requires
extreme flexion or rotation at multiple joints. These articulations are required in com-
bining an arc movement (part of the first person plural morpheme) with the lexical
orientation and location specifications of verbs such as invite in ASL and German
Sign Language (DGS) or pay in Australian Sign Language (Auslan).
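
The logic of such a range-of-motion constraint can be sketched as a simple check. This is a toy model under our own assumptions: the joint ranges and the required angle below are invented placeholders, not measurements from Mathur and Rathmann's study.

# Toy check: an inflected form is ruled out if any joint would have to
# move beyond its comfortable range (placeholder values in degrees).
COMFORT_RANGE = {'forearm_rotation': (-80, 80), 'wrist_flexion': (-60, 60)}

def articulable(required_angles):
    return all(lo <= required_angles.get(joint, 0) <= hi
               for joint, (lo, hi) in COMFORT_RANGE.items())

# E.g. a first person plural object form ('send us') whose arc movement
# would demand extreme forearm rotation simply does not occur:
print(articulable({'forearm_rotation': 110}))   # False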

5. Phonetic variation

5.1. Introduction

Studies on the articulation of signs as described above form an important contribution
to our phonetic understanding of signs. In most of the studies that were done until now,
this articulatory knowledge was related directly to patterns observed in the lexicon. As
the model of the relation between phonetics and phonology in Figure 2.2 makes clear,
this is a rather large step to make. As the lexicon contains abstract phonological repre-
sentations that are more likely to be perceptual than articulatory, it is not always self-
evident how a sign (or even the handshape of a sign) can be articulated and whether
there is a prototypical articulation of a sign that can be taken as a reference point for
studies on markedness.

5.2. Handedness

The phonetic realisation of signs, just as for words in spoken language, is in fact highly
variable. In other words, there are many different phonetic forms corresponding to a
single phonological underlying form. One obvious aspect that leads to variation is
handedness: whether a signer is left-dominant or right-dominant for non-sign tasks is
the primary factor in determining whether one-handed signs are typically realised with
the left or right hand (Bonvillian/Orlansky/Garland 1982; Sáfár/Crasborn/Ormel 2010).
There is anecdotal evidence that L2 learners may find left-handed signers more diffi-
cult to perceive.

5.3. Hand height

The height of the hand in signs that are lexically specified for a neutral space location
has been shown to vary. Coulter (1993) found that in the realisation of lists of number
signs one to five in ASL, the location is realised higher for stressed items and lower
for the initial and final items. In an experimental study of ASL, Mauk, Lindblom, and
Meier (2008) found that the height of the hand in the realisation of neutral space
locations in ASL is raised under the influence of a high location of the hand in the
preceding and following sign. The same has been shown for NGT (Ormel/Crasborn/
van der Kooij 2012). For signs located on the body, Tyrone and Mauk (2008) found
the reverse effect as well: under the influence of a lower location in the preceding or
following sign, a target sign assumes a lower location. These raising and lowering ef-
fects in the last two studies are argued to be an instance of coarticulation in sign
languages. Similar to coarticulation in spoken language, the strength of the effect is
gradual and sensitive to the rate of speaking or signing. It is thus not categorical phono-
logical assimilation that leads to the visible difference in phonetic location, but a case
of phonetic variation. This analysis is supported by the fact that the degree of experi-
mentally elicited differences in hand height varies across signers (Tyrone/Mauk 2008).
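
The gradient, rate-sensitive character of this coarticulation can be illustrated with a toy interpolation model. The weighting scheme and all numbers below are our own invention for illustration; they are not fitted to the data of Mauk, Lindblom, and Meier (2008) or Tyrone and Mauk (2008).

# Toy model: realised hand height drifts toward the heights of the
# neighbouring signs, and does so more strongly at higher signing rates.
def realised_height(target, prev, nxt, rate, k=0.1):
    """target, prev, nxt: sign heights in cm; rate: signs per second."""
    w = min(k * rate, 0.5)              # context weight grows with rate
    context = (prev + nxt) / 2.0
    return (1 - w) * target + w * context

# A neutral-space sign between two face-level signs is raised:
print(realised_height(target=20.0, prev=40.0, nxt=40.0, rate=3.0))  # 26.0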

5.4. Handshape

Similar coarticulation effects for the realisation of handshapes have been described by
Jerde, Soechting, and Flanders (2003) for the articulation of fingerspelling (see also
Wilcox 1992). They found both progressive and anticipatory influences of fingerspelled
letters on each other in ASL; both dissimilation and assimilation were found. Cheek
(2001) found that similar assimilation processes also occur in the articulation of hand-
shapes in regular lexical items in ASL. For example, the extension of the little finger
needed for the articulation of the <-handshape following a @-handshape was demon-
strated to start before the end of the preceding sign. Again, their gradient nature and
dependence on signing rate argue for the interpretation of these findings as instances
of phonetic coarticulation rather than phonological assimilation.

5.5. Movement

In addition to these effects of the sequential linguistic context on the appearance of
signs, different articulations are found depending on the distance between the signers.
Larger and smaller forms of a sign can be compared to shouting and whispering in
speech. Crasborn (2001) elicited such forms by changing the distance between pairs of
NGT signers, and found that different articulations of the same sign can involve move-
ment at different joints. For example, phonologically specified changes in location that
in their neutral form are articulated by extension of both the elbow and wrist joint
were found to be enhanced by a large movement of the elbow joint alone, and reduced
by movement at the wrist and metacarpophalangeal joints (which link the fingers to
the rest of the hand). In Figure 2.6 below, this is illustrated for the NGT signs warm
and say.

a. Small and large articulations of warm (NGT)



b. Small and large articulations of say (NGT)


Fig. 2.6: Smaller and larger realisations of path movements can involve articulation by different
joints in the NGT signs warm and say.

The contribution of various joints to a change in location was also illustrated in Fig-
ure 2.3 for the NGT sign proof. Rather than the whole hand moving downward as a
unit, the movement to contact was articulated by simultaneous extension at finger,
wrist, and elbow joints in the instance in the image.

5.6. The nature of the phonological abstraction of phonetically variable forms

On the basis of these movement variation data, it can be argued that even though
phonological specifications are by definition a large step of abstraction away from
concrete articulatory detail, one articulatory category that may remain too concrete
for accurate phonological specification is the hand itself: phonological specifications
typically identify the selected fingers and their state, but in many cases this is done
in such a way that there is no longer any distinction between ‘finger state’ and ‘hand-
shape’ (Crasborn 2003). Finger configurations such as ‘extended’ or ‘straight’ imply
not only that the two interphalangeal joints of a finger are extended, but also the
metacarpophalangeal joint. Thus, most phonological ‘handshape’ specifications are just
that: a specification of the form of the whole hand, albeit at a certain level of abstrac-
tion, not aiming to include the exact angles of all joints in the lexicon. For example, in
the characterisation of different types of movement, Brentari (1998) distinguishes path
movements from local movements by referring directly to possible articulators: by de-
fault, the former are realised by the shoulder or elbow joints, the latter are realised by
the wrist or finger joints (Brentari 1998, 130⫺131). Thus, movement of the hand

through space is distinguished from movement that changes the form or orientation of
the hand. While it may be the case that the underlying form of some signs does indeed
include the activity of the whole hand, it may be more accurate for yet other signs to
consider a fingertip or a finger to be the articulator (Crasborn 2003). Such a representa-
tion would better account for some of the variations in the data that are found in
several sign languages, because it abstracts away further from the concrete articulation
and aims for a more perceptual representation. However, Emmorey, Bosworth and
Kraljic (2009) found that signers only use visual feedback of their own signing to a
limited extent, suggesting that visual representations may not play an important role
in language production. This is clearly an area in need of further research.

5.7. Summary

In conclusion, the few studies that have explicitly targeted phonetic variation have
looked at articulatory variability in the realisation of categorical phonological distinc-
tions. These studies open up a whole field of investigation for linguists and movement
scientists. The studies that do exist show that processes similar to those in speech
variation are at work. Although for convenience’s sake these studies have targeted an
articulatory level rather than the level of the visual signal, basic factors, like the aim
to reduce articulatory effort whenever the perceptual demands of the addressee do
not prohibit it, are no different from those in the spoken modality.

6. Conclusion
The phonetic variation studies discussed above make clear that indeed there is a pho-
netic level of description in sign languages that is different from the phonological level,
even though it has received relatively little attention in the sign language literature. At
the same time, these studies make clear that there is a whole field of study to be
further explored: the articulation and perception of sign languages are likely to be just
as complex as the phonetics of the vocal-auditory modality. While we primarily expect
to find differences between sign and speech due to the unique importance of the ges-
tural-visual modality used in Deaf communication, there are also likely to be similar-
ities between the two modalities at some phonetic level. Both sign and speech are
instances of human perception and performance; both take place over time and cost
energy to perform. These similarities and their impact on the phonology of human
language form an important area for future investigations, just as a deeper understand-
ing of the differences merits much further research.

7. Literature
Ann, Jean
1993 A Linguistic Investigation Into the Relation Between Physiology and Handshape. PhD
Dissertation, University of Arizona, Tucson.

Ann, Jean
2008 Frequency of Occurrence and Ease of Articulation of Sign Language Handshapes. The
Taiwanese Example. Washington, DC: Gallaudet University Press.
Boersma, Paul
1998 Functional Phonology. Formalizing the Interactions Between Articulatory and Perceptual
Drives. The Hague: Holland Academic Graphics.
Boersma, Paul
2007 Cue Constraints and Their Interactions in Phonological Perception and Production. Un-
published Manuscript. Rutgers Optimality Archives #944.
Bonvillian, John D./Orlansky, Michael D./Garland, Jane B.
1982 Handedness Patterns in Deaf Persons. In: Brain and Cognition 1, 141⫺157.
Bosworth, Rain G./Wright, Charles E./Bartlett, Marian S./Corina, David/Dobkins, Karen R.
2003 Characterization of the Visual Properties of Signs in ASL. In: Baker, Anne E./Bogaerde,
Beppie van den/Crasborn, Onno (eds.), Cross-Linguistic Perspectives in Sign Language
Research. Selected Papers from TISLR 2000. Hamburg: Signum, 265⫺282.
Boyes Braem, Penny
1995 Einführung in die Gebärdensprache und ihre Erforschung. Hamburg: Signum.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Cheek, Adrienne
2001 The Phonetics and Phonology of Handshape in American Sign Language. PhD Disserta-
tion, University of Texas at Austin, Texas.
Chomsky, Noam/Halle, Morris
1968 The Sound Pattern of English. Cambridge, MA: MIT Press.
Corina, David/Sandler, Wendy
1993 On the Nature of Phonological Structure in Sign Language. In: Phonology 10, 165⫺207.
Coulter, Geoffrey R.
1993 Phrase-level Prosody in ASL: Final Lengthening and Phrasal Contours. In: Coulter,
Geoffrey R. (ed.), Phonetics and Phonology: Current Issues in ASL Phonology (Vol. 3).
San Diego, CA: Academic Press, 263⫺272.
Crasborn, Onno
2001 Phonetic Implementation of Phonological Categories in Sign Language of the Nether-
lands. Utrecht: Landelijke Onderzoeksschool Taalwetenschap.
Crasborn, Onno
2003 Cylinders, Planes, Lines and Points. Suggestions for a New Conception of the Hand-
shape Parameter. In: Cornips, Leonie/Fikkert, Paula (eds.), Linguistics in the Nether-
lands 2003. Amsterdam: Benjamins, 25⫺32.
Ekman, Paul
1993 Facial Expression and Emotion. In: American Psychologist 48(4), 384⫺392.
Ekman, Paul/Friesen, Wallace V./Hager, Joseph C.
2002 Facial Action Coding System. Salt Lake City, Utah: Research Nexus.
Emmorey, Karen
2002 Language, Cognition and the Brain. Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum.
Emmorey, Karen/Bosworth, Rain/Kraljic, Tanya
2009 Visual Feedback and Self-monitoring of Sign Language. In: Journal of Memory and
Language 61, 398⫺411.
Hulst, Harry van der
1993 Units in the Analysis of Signs. In: Phonology 10, 209⫺241.
Jerde, Thomas E./Soechting, John F./Flanders, Martha
2003 Coarticulation in Fluent Fingerspelling. In: The Journal of Neuroscience 23(6), 2383⫺
2393.

Johnson, Robert E./Liddell, Scott K.
2010 Toward a Phonetic Representation of Signs: Sequentiality and Contrast. In: Sign Lan-
guage Studies 11(2), 241⫺274.
Johnson, Robert E./Liddell, Scott K.
2011a A Segmental Framework for Representing Signs Phonetically. In: Sign Language Studies
11(3), 408⫺463.
Johnson, Robert E./Liddell, Scott K.
2011b Towards a Phonetic Representation of Hand Configuration: The Fingers. In: Sign Lan-
guage Studies 12(1), 5⫺45.
Johnson, Robert E./Liddell, Scott K.
to appear Toward a Phonetic Representation of Hand Configuration: The Thumb. In: Sign
Language Studies 12(2).
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language. An Introduction to Sign Language Linguistics. Cambridge:
Cambridge University Press.
Kendon, Adam
2004 Gesture. Visible Action as Utterance. Cambridge: Cambridge University Press.
Kooij, Els van der/Crasborn, Onno
2008 Syllables and the Word Prosodic System in Sign Language of the Netherlands. In: Lin-
gua 118, 1307⫺1327.
Mandel, Mark A.
1979 Natural Constraints in Sign Language Phonology: Data from Anatomy. In: Sign Lan-
guage Studies 24, 215⫺229.
Mandel, Mark A.
1981 Phonotactics and Morphophonology in American Sign Language. PhD Dissertation,
UC Berkeley.
Mathur, Gaurav/Rathmann, Christian
2001 Why not give-us: An Articulatory Constraint in Signed Languages. In: Dively, Valerie
L./Metzger, Melanie/Taub, Sarah F./Baer, Anne Marie (eds.), Signed Languages. Dis-
coveries from International Research. Washington, DC: Gallaudet University Press,
1⫺25.
Mauk, Claude/Lindblom, Björn/Meier, Richard
2008 Undershoot of ASL Locations in Fast Signing. In: Quer, Josep (ed.), Signs of the Time:
Selected Papers from TISLR 2004. Hamburg: Signum, 3⫺23.
McClave, Evelyn Z.
2000 Linguistic Functions of Head Movements in the Context of Speech. In: Journal of Prag-
matics 32, 855⫺878.
McGurk, Harry/MacDonald, John
1976 Hearing Lips and Seeing Voices. In: Nature 264, 746⫺748.
Meir, Irit/Sandler, Wendy
2008 A Language in Space. The Story of Israeli Sign Language. New York: Lawrence Erl-
baum.
Olson, David R.
1994 The World on Paper. The Conceptual and Cognitive Implications of Writing and Read-
ing. Cambridge: Cambridge University Press.
Ormel, Ellen/Crasborn, Onno/Kooij, Els van der
2012 Coarticulation of Hand Height in Sign Language of the Netherlands is Affected by Con-
tact Type. Manuscript submitted for publication, Radboud University Nijmegen.
Palmer, Stephen E.
1998 Vision Science. From Photons to Phenomenology. Cambridge, MA: MIT Press.
Sáfár, Anna/Crasborn, Onno/Ormel, Ellen
2010 Handedness in Corpus NGT. Poster Presented at the 10th Conference on Theoretical
Issues in Sign Language Research (TISLR10), West Lafayette.

Sandler, Wendy
1989 Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign
Language. Dordrecht: Foris.
Siple, Patricia
1978 Visual Constraints for Sign Language Communication. In: Sign Language Studies 19,
95⫺110.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language. An Introduction. Cambridge: Cambridge Uni-
versity Press.
Swerts, Marc/Krahmer, Emiel
2008 Facial Expression and Prosodic Prominence: Effects of Modality and Facial Area. In:
Journal of Phonetics 36(2), 219⫺238.
Tyrone, Martha E./Mauk, Claude E.
2008 Sign Lowering in ASL: The Phonetics of wonder. Paper Presented at The Phonetics
and Phonology of Sign Languages. The First SignTyp Conference, University of Connec-
ticut.
Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno
2006 Simultaneity in Signed Languages. Form and Function. Amsterdam: Benjamins.
Wilcox, Sherman
1992 The Phonetics of Fingerspelling. Amsterdam: Benjamins.
Woodward, Mary F./Barber, Carroll G.
1960 Phoneme Perception in Lipreading. In: Journal of Speech and Hearing Research 3,
212⫺222.
Woodward, James
1982 Single Finger Extension: For a Theory of Naturalness in Sign Language Phonology. In:
Sign Language Studies 37, 289⫺304.
Woodward, James
1985 Universal Constraints on Two-finger Extension Across Sign Languages. In: Sign Lan-
guage Studies 46, 53⫺72.
Woodward, James
1987 Universal Constraints Across Sign Languages: Single Finger Contact Handshapes. In:
Sign Language Studies 57, 375⫺385.
Zeki, Semir
1993 A Vision of the Brain. Oxford: Blackwell.

Onno Crasborn, Nijmegen (The Netherlands)



3. Phonology
1. Introduction
2. Structure
3. Modality effects
4. Iconicity effects
5. Conclusion
6. Literature

Abstract
This chapter is concerned with the sub-lexical structure of sign language phonology:
features and their organization into phonological units, such as the segment, syllable and
word. It is organized around three themes – structure, modality, and iconicity – because
these themes have been well-studied since the inception of the field and they touch on
the reasons why the consideration of sign languages is essential if one wishes to under-
stand the full range of possibilities of the phonology of natural languages. The cumula-
tive work described here makes two main arguments. First, modality affects the phono-
logical representation in sign and spoken languages; that is, the phonological structure
represents the strengths of the phonetic and physiological systems employed. Without a
comparison between sign and spoken languages, it is easy to lose sight of this point.
Second, iconicity works with phonology, not against it. It is one of the pressures – like
ease of perception and ease of production – that shape a phonological system. This
interaction is more readily seen in sign languages because of the availability of visual
iconicity and the ease with which it is assumed by phonological structures.

1. Introduction
Why should phonologists, who above all else are fascinated with the way things sound,
care about systems without sound? The short answer is that the organization of phono-
logical material is as interesting as the phonological material itself ⫺ whether it is of
spoken or sign languages. Moreover, certain aspects of work on spoken languages can
be seen in a surprising new light, because sign languages offer a new range of possibili-
ties both articulatorily and perceptually.
In this chapter the body of work on the single sign will be described under the
umbrella terms structure, modality, and iconicity. Under the term structure is included
all the work that showed that sign languages were natural languages with demonstrable
structure at all levels of the grammar including, of course, phonology. Much progress
has been achieved toward the aim of delineating the structures, distribution, and opera-
tions in sign language phonology, even though this work is by no means over and
debates about the segment, feature hierarchies, contrast, and phonological operations
continue. For now, it will suffice to say that it is well-established crosslinguistically that
sign languages have hierarchical organization of structures analogous to those of spo-

ken languages. Phonologists are in a privileged place to see differences between sign
and spoken languages, because, unlike semantics or syntax, the language medium af-
fects the organization of the phonological system. This chapter deals with the word-
sized unit (the sign) and phonological elements relevant to it; phonetic structure and
prosodic structure above the level of the word are dealt with in chapter 2 and chapter 4
of the handbook, respectively.
Taken together, the five sign language parameters of Handshape, Place of Articula-
tion (where the sign is made), Movement (how the articulators move), Orientation
(the hands’ relation towards the Place of Articulation), and Non-manual behaviors
(what the body and face are doing) function similarly to the cavities, articulators and
features of spoken languages. Despite their different content, these parameters (i.e.,
phonemic groups of features) in sign languages are subject to operations that are simi-
lar to their counterparts in spoken languages. These broad-based similarities must be
seen, however, in light of important differences due to modality and iconicity effects
on the system. Modality addresses the effect of peripheral systems (i.e., visual/gestural
vs. auditory/vocal) on the very nature of the phonological system that is generated (see
also chapter 25). Iconicity refers to the non-arbitrary relationships between form and
meaning, either visual/spatial iconicity in the case of sign languages (Brennan 1990,
2005), or sound symbolism in the case of spoken languages (Hinton/Nichols/Ohala
1995; Bodomo 2006; see also chapter 18).
This chapter will be structured around the three themes of structure, modality, and
iconicity because these issues have been studied in sign language phonology (indeed,
in sign language linguistics) from the very beginning. Section 2 will outline the phono-
logical structures of sign languages, focusing on important differences from and similar-
ities to their spoken language counterparts. Section 3 will discuss modality effects by
using a key example of word-level phonotactics. I will argue that modality effects allow
sign languages to occupy a specific typological niche based on signal processing and
experimental evidence. Section 4 will focus on iconicity. Here I will argue that this
concept is not in opposition to arbitrariness; instead iconicity co-exists along with other
factors ⫺ such as ease of perception and ease of production ⫺ that contribute to sign
language phonological form.

2. Structure

2.1. The word and sublexical structure

The structure in Figure 3.1 shows the three basic manual parameters ⫺ Handshape
(HS), Place of Articulation (POA), and Movement (MOV) ⫺ in a hierarchical struc-
ture from the Prosodic Model (Brentari 1998), which will be used throughout the chap-
ter to make generalizations across sets of data. This structure presents a fundamental
difference between sign and spoken languages. Besides the different featural content,
the most striking difference between sign and spoken languages is the hierarchical
structure itself ⫺ i.e., the root node at the top of the structure is an entire lexeme, a
stem, not a consonant- or vowel-like unit. This is a fact that is ⫺ if not explicitly

Fig. 3.1: The hierarchical organization of a sign’s Handshape, Place of Articulation, and Move-
ment in the Prosodic Model (Brentari 1998).

stated ⫺ inferred in many models of sign language phonology (Sandler 1989; Brentari
1990a, 1998; Channon 2002; van der Hulst 1993, 1995, 2000; Sandler/Lillo-Martin 2006).
Both sign and spoken languages have simultaneous structure, but the representation
in Figure 3.1 encodes the fact that a high number of features are specified only once
per lexeme in sign languages. This idea will be described in detail below. Since the
beginning of the field there has been debate about how much to allow the simultaneous
aspects of sublexical sign structure to dominate the representation: whether sign lan-
guages have the same structures and structural relationships as spoken languages, but
with lots of exceptional behavior, or a different structure entirely. A proposal such as
the one in Figure 3.1 is proposing a different structure, a bold move not to be taken
lightly. Based on a wide range of available evidence, it appears that the simultaneous
structure of words is indeed more prevalent in sign than in spoken languages. The
point here is that the root node refers to a lexical unit, rather than a C- or V-unit or
a syllabic unit.
The general concept of ‘root-as-lexeme’ in sign language phonology accurately re-
flects the fact that sign languages typically specify many distinctive features just once
per lexeme, not once per segment or once per syllable, but once per word. Tone in
tonal languages, and features that harmonize across a lexeme (e.g., vowel features and
nasality) behave this way in spoken languages, but fewer features seem to have this
type of domain in spoken than in sign languages. And when features do operate this
way in spoken languages, it is not universal across spoken languages. In sign languages
a larger number of features operate this way, and they do so across all of the sign
languages that have been well studied to date.

2.2. The Prosodic Model


In the space available, I can provide neither a complete discussion of all of the phono-
logical models nor of the internal debates about particular elements of structure. Please
see Brentari (1998) and Sandler and Lillo-Martin (2006) for a more comprehensive
treatment of these matters. Where possible, I will remain theory-neutral, but
given that many points made in the chapter refer to the Prosodic Model, I will provide
a brief overview here of the major structures of a sign in the Prosodic Model for
Handshape, Place of Articulation, Movement, and Orientation. Non-manual properties
of signs will be touched on only as necessary, since their sublexical structure is not well
worked out in any phonological model of sign language, and, in fact, it plays a larger
role in prosodic structure above the level of the word (sign); see chapter 4. The struc-
ture follows Dependency Theory (Anderson/Ewen 1987; van der Hulst 1993) in that
each node is maximally binary branching, and each branching structure has a head,
which is more elaborate, and a dependent, which is less elaborate. The specific features
will be introduced only as they become relevant; the discussion below will focus on
the class nodes of the feature hierarchy.
The inherent feature structure (Figure 3.2a) includes both Handshape and Place of
Articulation. The Handshape (HS) structure (Figure 3.2ai) specifies the active articula-
tor. Moving down the tree in Figure 3.2ai, the head and body (non-manual articulators) can
be active articulators in some signs, but in most cases the arm and hands are the active
articulators. The manual node branches into the dominant (H1) and non-dominant
(H2) hands. If the sign is two-handed as in sit and happen (Figures 3.3aiii and 3.3aiv)
it will have both H1 and H2 features. There are a number of issues about two-handed
signs that are extremely interesting, since nothing like this exists in spoken languages
(i.e., two articulators potentially active at the same time). Unfortunately these issues
will not be covered in this chapter in the interest of space (Battison 1978; Crasborn
1995, submitted; Brentari 1998). If the sign is one-handed, as in we, sorry, and throw
(Figures 3.3ai, 3.3aii, and 3.3av), it will have only H1 features. The H1 features enable
each contrastive handshape in a sign language to be distinguished from every other.
These features indicate, for instance, which fingers are ‘active’ (selected), and of these
selected fingers, exactly how many of them there are (quantity) and whether they are
straight, bent, flat, or curved (joints). The Place of Articulation (POA) structure (Figure
3.2aii) specifies the passive articulator, divided into the three dimensional planes ⫺
horizontal (y-plane), vertical (x-plane), and midsagittal (z-plane). If the sign occurs in
the vertical plane, then it might also require further specifications for the major place
on the body where the sign is articulated (head, torso, arm, H2) and also even a particu-
lar location within that major body area; each major body area has eight possibilities.
The POA specifications allow all of the contrastive places of articulation to be distin-
guished from one another in a given sign language. The inherent features have only
one specification per lexeme; that is, no changes in values.
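To make the split between inherent and prosodic features concrete, a minimal data-structure sketch is given below. This is my own simplification of the Prosodic Model for exposition only: the attribute names are placeholders rather than the model’s actual feature set, and the two example lexemes follow the descriptions of we and throw given in this section.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class InherentFeatures:       # specified once per lexeme; values never change
        selected_fingers: str     # e.g. "index" for the 1-handshape
        joints: str               # e.g. "extended"
        place: str                # a plane or a major body region

    @dataclass
    class Lexeme:                 # the root node is a whole lexeme, not a C/V unit
        gloss: str
        inherent: InherentFeatures
        prosodic: List[str] = field(default_factory=list)  # movement components

    WE = Lexeme("we", InherentFeatures("index", "extended", "torso"), ["setting"])
    THROW = Lexeme("throw",
                   InherentFeatures("index+middle", "extended", "x-plane"),
                   ["path", "aperture"])  # two simultaneous movement components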
Returning to our point of root-as-lexeme, we can see this concept at work in the
signs illustrated in Figure 3.3a. There is just one Handshape in the first three signs: we
(3.3ai), sorry (3.3aii), and sit (3.3aiii). The Handshape does not change at all through-
out articulation of the sign. In each case, the letters ‘1’, ‘S’, and ‘V’ stand for entire
feature sets that specify the given handshape. In the last sign, throw (3.3av), the two
fingers change from closed [⫺open] to open [+open], but the selected fingers used in
the handshape do not change. The opening is itself a type of movement, which is
described below in more detail. Regarding Place of Articulation, even though it looks
like the hand starts and stops in a different place in each sign, the major region where
the sign is articulated is the same ⫺ the torso in we and sorry, the horizontal plane

Fig. 3.2: The feature geometry for Handshape, Place of Articulation, and Movement in the Pro-
sodic Model.

(y-plane) in front of the signer in sit and happen, and the vertical plane (x-plane) in
front of the signer in throw. These are examples of contrastive places of articulation
within the system, and the labels given in Figure 3.3b stand for the entire Place of
Articulation structure.
The prosodic feature structure in Figure 3.1 (shown in detail in Figure 3.2b) specifies
movements within the sign, such as the aperture change just mentioned for the sign
throw (3.3av). These features allow for changes in their values within a single root

Fig. 3.3: Examples of ASL signs that demonstrate how the phonological representation organizes
sublexical information in the Prosodic Model (3a). Inherent features (HS and POA) are
specified once per lexeme in (3b), and prosodic features (PF) may have different values
within a lexeme; PF features also generate the timing units (x-slots) (3c).

node (lexeme) while the inherent features do not, and this phonological behavior is
part of the justification for isolating the movement features on a separate autosegmen-
tal tier. Note that Figures 3.3ai, 3.3aiv, and 3.3av (we, happen, and throw) all have
changes in their movement feature values; i.e., only one contrastive feature, but
changes in values. Each specification indicates which anatomical structures are respon-
sible for articulating the movement. Going from top to bottom, the more proximal
joints of the shoulder and arm are at the top and the more distal joints of the wrist
and hand are at the bottom. In other words, the shoulder articulating the setting move-
ment in we is located closer to the center of the body than the elbow that articulates
a path movement in sorry and sit. A sign having an orientation change (e.g., happen)
is articulated by the forearm or wrist, a joint that is even further away from the body’s
center, and an aperture change (e.g., throw), is articulated by joints of the hand, fur-
thest away from the center of the body. Notice that it is possible to have two simultane-
ous types of movement articulated together; the sign throw has a path movement
and an aperture change. Despite their blatantly articulatory labels, these may have an
articulatory or a perceptual basis (see Crasborn 2001). The trees in Figure 3.3c demon-
strate the different types of movement features for the signs in Figure 3.3a.
Orientation was proposed as a major manual parameter like Handshape, Place of
Articulation and Movement by Battison (1978), but there are only a few minimal pairs
based on Orientation alone. In the Prosodic Model, Orientation is derivable from a
relation between the handpart specified in the Handshape structure and the Place of
Articulation, following a convincing proposal by Crasborn and van der Kooij (1997).
The mini-representations of the signs in Figure 3.3 show their orientation as well. The
position of the fingertip of the 1-handshape towards the POA determines the hand’s
orientation of we and throw; the position of the back of the fingers towards the torso
determines the hand’s orientation in sorry, and the front of the fingers towards the
POA determines the hand’s orientation in sit and happen.
The timing slots (segments) are projected from the prosodic structure, shown as
x-slots in Figure 3.2b. Path features generate two timing slots; all other features gener-
ate one timing slot. The inherent features do not generate timing slots at all, only
movement features can do this in the Prosodic Model. When two movement compo-
nents are articulated simultaneously as in throw, they align with one another and only
two timing slots are projected onto the timing tier. The movement features play an
important role in the sign language syllable, discussed in the next section.
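The projection of timing slots can be summarized in a few lines of code. The sketch below simply restates the rule given above; the max-based treatment of simultaneous components is my rendering of the statement that aligned components project only two slots, not a quotation of the model.

    def timing_slots(movement_components):
        """Path features project two x-slots, all other movement features one;
        simultaneously articulated components align rather than add."""
        if not movement_components:
            return 0
        return max(2 if m == "path" else 1 for m in movement_components)

    assert timing_slots(["path"]) == 2               # a plain path movement
    assert timing_slots(["aperture"]) == 1           # an aperture change alone
    assert timing_slots(["path", "aperture"]) == 2   # throw: aligned, not added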

2.3. The syllable

The syllable is as fundamental a unit in sign as it is in spoken languages. One point of
nearly complete consensus across models of sign language phonology is that the move-
ments are the nuclei of the syllable. This idea has its origin in the correlation between
the function of movements and the function of vowels in spoken languages (Liddell
1984; Brentari 2002): movements are the ‘medium’ by which signs are visible from a
considerable distance, just as vowels are the ‘medium’ by which spoken words are
audible from a considerable distance. This physical fact
was determined to have theoretical consequences and was developed into a theory of
syllable structure by Brentari (1990a) and Perlmutter (1992). The arguments for the
syllable are based on its importance to the system (see also Jantunen/Takkinen 2010).
They are as follows:

2.3.1. The babbling argument

Petitto and Marentette (1991) have observed that a sequential dynamic unit formed
around a phonological movement appears in young Deaf children at the same time
as hearing children start to produce syllabic babbling. Because the distributional and
phonological properties of such units are analogous to the properties usually associated
with syllabic babbling, this activity has been referred to as manual babbling. Like syl-
labic babbling, manual babbling includes a lot of repetition of the same movement,
and also like syllabic babbling, manual babbling makes use of only a part of the pho-
nemic units available in a given sign language. The period of manual babbling develops
without interruption into the first signs (just as syllabic babbling continues without

interruption into the first words in spoken languages). Moreover, manual babbling can
be distinguished from excitatory motor hand activity and other communicative gestures
by its rhythmic timing, velocity, and spectral frequencies (Petitto 2000).

2.3.2. The minimal word argument

This argument is based on the generalization that all well-formed (prosodic) words
must contain at least one syllable. In spoken languages, a vowel is inserted to ensure
well-formedness, and in the case of sign languages a movement is inserted for the same
reason. Brentari (1990b) observed that American Sign Language (ASL) signs without
a movement in their input, such as the numeral signs ‘one’ to ‘nine’, add a small,
epenthetic path movement when used as independent words, signed one at a time.
Jantunen (2007) observed that the same is true in Finnish Sign Language (FinSL), and
Geraci (2009) has observed a similar phenomenon in Italian Sign Language (LIS).
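The repair itself can be stated as a one-line rule. The sketch below is a schematic restatement of the observation above, with a sign idealized as a list of movement components (a representational assumption carried over from the sketches earlier in this chapter).

    def repair_minimal_word(movements):
        """Every well-formed prosodic word needs at least one movement (nucleus)."""
        return movements if movements else ["epenthetic-path"]

    assert repair_minimal_word([]) == ["epenthetic-path"]  # numerals 'one'-'nine'
    assert repair_minimal_word(["path"]) == ["path"]       # already well-formed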

2.3.3. Evidence of a sonority hierarchy

Many researchers have proposed sonority hierarchies based on ‘movement visibility’ (Co-
rina 1990; Perlmutter 1992; Sandler 1993; Brentari 1993). Such a sonority hierarchy is
built into the prosodic features’ structure in Figure 3.2b since movements represented
by the more proximal joints higher in the structure are more visible than are those
articulated by the distal joints represented lower in the structure. For example, move-
ments executed by the elbow are typically more easily seen from further away than
those articulated by opening and closing of the hand. See Crasborn (2001) for experi-
mental evidence demonstrating this point. Because of this finding some researchers
have observed that movements articulated by more proximal joints are a manifestation
of visual ‘loudness’ (Crasborn 2001; Sandler/Lillo-Martin 2006). In both spoken and
sign languages more sonorous elements of the phonology are louder than less sonorous
ones (/a/ is louder than /i/; /l/ is louder than /b/, etc.). The evidence from the nativization
of fingerspelled words, below, demonstrates that sonority has also infiltrated the word-
level phonotactics of sign languages.
In a study of fingerspelled words used in a series of published ASL lectures on
linguistics (Valli/Lucas 1992), Brentari (1994) found that fingerspelled forms containing

Fig. 3.4: An example of nativization of the fingerspelled word p-h-o-n-o-l-o-g-y, demonstrating


evidence of the sonority hierarchy by organizing the reduced form around the two more
sonorous wrist movements.

strings of eight or more handshapes representing the English letters were reduced in
a systematic way to forms that contain fewer handshapes. The remaining handshapes
are organized around just two movements. This is a type of nativization process; native
signs conform to a word-level phonotactic of having no more than two syllables. By
native signs I am referring to those that appear in the core vocabulary, including mono-
morphemic forms and lexicalized compounds (Brentari/Padden 2001). Crucially, the
movements retained were the most visible ones, argued to be most sonorous ones, e.g.,
movements made by the wrist were retained while aperture changes produced by the
hand were deleted. Figure 3.4 contains an example of this process: the carefully finger-
spelled form p-h-o-n-o-l-o-g-y is reduced to the letters underlined, which are the let-
ters responsible for the two wrist movements.
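The reduction pattern can be sketched as a small procedure. This is my reconstruction of the pattern described above, not Brentari’s (1994) procedure, and the numeric sonority values are arbitrary placeholders that only encode the proximal-over-distal ranking.

    SONORITY = {"shoulder": 4, "elbow": 3, "wrist": 2, "aperture": 1}

    def nativize(movements, max_syllables=2):
        """Keep the most sonorous movements, preserving their original order."""
        keep = sorted(range(len(movements)),
                      key=lambda i: SONORITY[movements[i]],
                      reverse=True)[:max_syllables]
        return [movements[i] for i in sorted(keep)]

    # In a long fingerspelled string, the two wrist movements survive and the
    # less sonorous aperture changes are deleted, as in p-h-o-n-o-l-o-g-y.
    assert nativize(["aperture", "wrist", "aperture", "wrist", "aperture"]) \
        == ["wrist", "wrist"]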

2.3.4. Evidence for light vs. heavy syllables

Further evidence for the syllable comes from a division between those movements that
contain just one movement element (features on only one tier of Figure 3.2b are speci-
fied), which behave as light syllables (e.g., we, sorry, and sit in Figure 3.3 are light),
vs. those that contain more than one simultaneous movement element, which behave
as heavy syllables (e.g., throw in Figure 3.3). It has been observed in ASL that a
process of nominalization by movement reduplication can apply only to forms that
consist of a light syllable (Brentari 1998). In other words, holding other semantic fac-
tors constant, there are signs, such as sit, that have two possible forms: a verbal form
with the whole sequential movement articulated once and a nominal form with the
whole movement articulated twice in a restrained manner (Supalla/Newport 1978). The
curious fact is that the verb sit has such a corresponding reduplicated nominal form
(chair), while throw does not. Reduplication is not the only type of nominalization
process in sign languages, so when reduplication is not possible, other forms of
nominalization are available (see Shay 2002). These facts can be explained by the
following generalization: ceteris paribus, the set of forms that allow reduplication have
just one simultaneous movement component, and are light syllables, while those that
disallow reduplication, such as throw, have two or more simultaneous movement el-
ements and are therefore heavy. A process in FinSL requiring the distinction between
heavy and light syllables has also been observed by Jantunen (2007) and Jantunen and
Takkinen (2010). Both analyses call syllables with one movement component light, and
those with more than one heavy.
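Stated as code, the weight distinction and its consequence for reduplication amount to the following sketch (the function names are mine; the representation of a syllable as its list of simultaneous movement components follows the discussion above).

    def syllable_weight(movement_components):
        return "light" if len(movement_components) == 1 else "heavy"

    def allows_nominalizing_reduplication(movement_components):
        """Ceteris paribus, only light syllables feed verb-to-noun reduplication."""
        return syllable_weight(movement_components) == "light"

    assert allows_nominalizing_reduplication(["path"])                  # sit/chair
    assert not allows_nominalizing_reduplication(["path", "aperture"])  # throw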

2.4. The segment and feature organization

This is an area of sign language phonology where there is still lively debate. Abstracting
away from the lowest level of representation, the features themselves (e.g., [one], [all],
[flexed], etc.), I will try to summarize one trend ⫺ namely, features and their relation
to segmental (timing) structure. Figure 3.5 shows schematic structures capturing the
changes in perspective on how timing units, or segments, are organized with respect to
the feature material throughout the 50 years of work in this area. All models in Fig-
ure 3.5 are compatible with the idea of ‘root-as-lexeme’ described in section 2.1; the

Fig. 3.5: Schematic structures showing the relationship of segments to features in different models of
sign language phonology (left to right): the Cheremic Model (Stokoe 1960; Stokoe et al.
1965), the Hold-Movement Model (Liddell/Johnson 1989), the Hand-Tier Model (Sandler
1989), the Dependency Model (van der Hulst 1995), and the Prosodic Model (Brentari
1998).

root node at the top of the structure represents the lexeme. Figure 3.5a represents
Stokoe’s Cheremic Model described in Sign Language Structures (1960). The sub-lexical
parameters of Handshape, Place of Articulation, and Movement had no hierarchical
organization, and like the spoken models of the 1950s (e.g., Bloomfield 1933), were
based entirely on phonemic structure (i.e., minimal pairs). It was the first linguistic
work of any type on a sign language, and the debt owed to Stokoe is enormous
for bringing the sub-lexical parameters of signs to light.
Thirty years later, Liddell and Johnson (1989) looked primarily to sequential struc-
ture (timing units) to organize phonological material (see Figure 3.5b). Their Hold-
Movement Model was also a product of spoken language models of the period, which
were largely segmental (Chomsky/Halle 1968), but were moving in the direction of
non-linear phonology, starting with autosegmental phonology (Goldsmith 1976). Seg-
mental models depended heavily on slicing up the signal into units of time as a way of
organizing the phonological material. In such a model, consonant and vowel units take
center stage in spoken language, which can be identified sequentially. Liddell and John-
son called the static Holds ‘consonants’, and Movements the ‘vowels’ of sign languages.
While this type of division is certainly possible phonetically, several problems related
to phonological distribution of Holds make this model implausible. First of all, the
presence and duration of most instances of holds are predictable (Perlmutter 1992;
Brentari 1998). Secondly, length is not contrastive in movements or holds; a few mor-
phologically related forms are realized by lengthening ⫺ e.g., [intensive] forms have a
geminated first segment, such as good vs. good [intensive] ‘very good’, late vs. late
[intensive] ‘very late’, etc. ⫺ but no lexical contrast is achieved by segment length.
Thirdly, the feature matrix of all of the holds in a given lexeme contains a great deal
of redundant material (Sandler 1989; Brentari 1990a, 1998).
As spoken language theories became increasingly non-linear (Clements 1985; Sagey
1986) sign language phonology re-discovered and re-acknowledged the non-linear si-
multaneous structure of these languages. The Hand-Tier Model (Sandler 1989 and
Figure 3.5c) and all subsequent models use feature geometry to organize the properties
of the sign language parameters according to phonological behavior and articulatory
properties. The Hand-Tier Model might be considered balanced in terms of sequential

and simultaneous structure. Linear segmental timing units still hold a prominent place
in the representation, but Handshape was identified as having non-linear (autosegmen-
tal) properties. The Moraic Model (Perlmutter 1992) is similar to the Hand-Tier Model
in hierarchical organization, but this approach uses morae, a different type of timing
unit (Hyman 1985; Hayes 1989).
Two more recent models have placed the simultaneous structure back in central
position, and they have made further use of feature geometry. In these models, timing
units play a role, but this role is not as important as that which they play in spoken
languages. The Dependency Model (van der Hulst 1993, 1995, see Figure 3.5d) derives
timing slots from the dependent features of Handshape and Place of Articulation. In
fact, this model calls the root node a segment/lexeme and refers to the timing units as
timing (X)-slots, shown at the bottom of this representation. The Movement parameter
is demoted in this model, and van der Hulst argues that most of movement can be
derived from Handshape and Place of Articulation features, despite its role in the
syllable (discussed in section 2.3) and in morphology (see sections 4.3 and 4.4). The
proposals by Uyechi (1995) and Channon (2002) are similar in this regard. This differs
from the Hand-Tier and Prosodic Models, which award movement a much more central
role in the structure.
Like the Dependency Model, the Prosodic Model (already discussed in section 2.2,
Brentari 1990a, 1998) derives segmental structure. It recognizes that Handshape, Place
of Articulation, and Movement all have autosegmental properties. The role of the sign
language syllable is acknowledged by incorporating it into the representation (see Fig-
ure 3.5e). Because of their role in the syllable and in generating segments prosodic
(Movement) features are set apart from Handshape and Place of Articulation on their
own autosegmental tier, and the skeletal structure is derived from them, as in the
Dependency Model described above. In Figure 3.5e timing slots are at the bottom of
the representation.
To summarize these sections on sign language structure, it is clear that sign lan-
guages have all of the elements one might expect to see in a spoken language phono-
logical system, yet their organization and content are somewhat different: features are
organized around the lexeme and segmental structure assumes a more minor role.
What motivates this difference? One might hypothesize that this is in part due to the
visual/gestural nature of sign languages, and this topic of modality effects will be taken
up in section 3.

3. Modality effects

The modality effects described here refer to the influence that the phonetics (or com-
munication mode) used in a signed or spoken medium has on the very nature of the
phonological system that is generated. How is communication modality expressed in
the phonological representation? Brentari (2002) describes several ways in which signal
processing differs in sign and spoken languages (see also chapters 2 and 25). ‘Simulta-
neous processing’ is a cover term for our ability to process various input types pre-
sented roughly at the same time (e.g., pattern recognition, paradigmatic processing in
phonological terms) for which the visual system is better equipped relative to audition.

‘Sequential processing’ is our ability to process temporally discrete inputs into tempo-
rally discrete events (e.g., ordering and sequencing of objects in time, syntagmatic proc-
essing in phonological terms), for which the auditory system is better equipped relative
to vision. I am claiming that these differences have consequences for the organization
of units in the phonology at the most fundamental level. Word shape will be used as
an example of how modality effects ultimately become reflected in phonological and
morphological representations.

3.1. Word shape

In this section, the differences in the shape of the canonical word in sign and spoken
languages, first outlined in Brentari (1995), will be described, first in terms of typolog-
ical characteristics alone, and then in terms of factors due to communication modality.
Canonical word shape refers to the preferred phonological shape of words in a given
language. For an example of such canonical word properties, many languages, including
the Bantu language Shona (Myers 1987) and the Austronesian language Yidin (Dixon
1977), require that all words be composed of binary branching feet. With regard to
statistical tendencies at the word level, there is also a preferred canonical word shape
exhibited by the relationship between the number of syllables and morphemes in a
word, and it is here that sign languages differ from spoken languages. Signed words
tend to be monosyllabic (Coulter 1982) and, unlike spoken languages, sign languages
have an abundance of monosyllabic, polymorphemic words because most affixes in
sign languages are feature-sized and are layered simultaneously onto the stem rather
than concatenated (see also Aronoff/Meir/Sandler (2005) for a discussion of this point).
This relationship between syllables and morphemes is a hybrid measurement, which
is both phonological and morphological in nature, due in part to the shape of stems
and in part to the type of affixal morphology in a given language. A spoken language
such as Hmong contains words that tend to be monosyllabic and monomorphemic
with just two syllable positions (CV), but a rather large segmental inventory of 39
consonants and 13 vowels. The distinctive inventory of consonants includes voiced and
voiceless nasals, as well as several types of secondary articulations (e.g., pre- and post-
nasalized obstruents, lateralized obstruents). The inventory of vowels includes mon-
ophthongs, diphthongs, and seven contrastive tones, both simple and contour tones
(Golston/Yang 2001; Andruski/Ratliff 2000). Affixal morphology is linear, but there
isn’t a great deal of it. In contrast, a language such as West Greenlandic contains stems
of a variety of shapes and a rich system of affixal morphology that lengthens words
considerably (Fortescue 1984). In English, stems tend to be polysyllabic, and there is
relatively little affixal morphology. In sign languages, words tend to be monosyllabic,
even when they are polymorphemic. An example of such a form ⫺ re-presented from
Brentari (1995, 633) ⫺ is given in Figure 3.6; this form means ‘two bent-over upright-
beings advance-forward carefully side-by-side’ and contains at least six morphemes in
a single syllable. All of the classifier constructions in Figure 3.10 (discussed later in
section 4) are monosyllabic, as are the agreement forms in Figure 3.11. There is also a
large amount of affixal morphology, but most of these affixes are smaller than a seg-
ment in size; hence, both polymorphemic and monomorphemic words are typically just

Fig. 3.6: An example of a monosyllabic, polymorphemic form in ASL: ‘two bent-over upright-
beings advance-forward carefully side-by-side’.

one syllable in length. In Table 3.1, a chart schematizes the canonical word shape in
terms of the number of morphemes and syllables per word.

Tab. 3.1: Canonical word shape according to the number of syllables and morphemes per word

                     monosyllabic       polysyllabic
monomorphemic        Hmong              English, German, Hawaiian
polymorphemic        sign languages     West Greenlandic, Turkish, Navajo

This typological fact about sign languages has been attributed to communication mo-
dality, as a consequence of their visual/gestural nature. Without a doubt, spoken lan-
guages have simultaneous phenomena in phonology and morphophonology, such as
tone, vowel harmony, nasal harmony, ablaut marking (e.g., the English preterite:
‘sing’ [pres.]/‘sang’ [pret.]; ‘ring’ [pres.]/‘rang’ [pret.]), and even person marking
in Hua, indicated by the [±back] feature on the vowel (Haiman 1979). There is also
nonconcatenative morphology found in Semitic languages, which is another type of
simultaneous phenomenon, where lexical roots and grammatical vocalisms alternate
with one another in time. Even collectively, however, this doesn’t approach the degree
of simultaneity in sign languages, because many features are specified once per stem
to begin with: one Handshape, one Place of Articulation, one Movement. In addition,
the morphology is feature-sized and layered onto the same monosyllabic stem, adding
additional features but no more linear complexity, and the result is that sign languages
have two sources of simultaneity ⫺ one phonological and another morphological. I
would argue that it is this combination of these two types of simultaneity that causes
sign languages to occupy this typological niche (see also Aronoff et al. (2004) for a
similar argument). Many researchers since the 1960s have observed a preference for
simultaneity of structure in sign languages, but for this particular typological compari-
son it was important to have understood the nature of the syllable in sign languages
and its relationship to the Movement component (Brentari 1998).

Consider this typological fact about canonical word shape just described from the
perspective of the peripheral systems involved and their particular strengths in signal
processing, described in detail in chapters 2 and 25. What I have argued here is that
signal processing differences in the visual and auditory system have typological conse-
quences for the shape of words, which is a notion that goes to the heart of what a
language looks like. In the next section we explore this claim experimentally using
a word segmentation task.

3.2. Word segmentation is grounded in communication modality

If this typological difference between words in sign and spoken language is deeply
grounded in communication modality it should be evident in populations with different
types of language experience. From a psycholinguistic perspective, this phenomenon
of word shape can be fruitfully explored using word segmentation tasks, because it can
address how language users with different experience handle the same types of items.
We discuss such studies in this section. In other words, if the typological niche in Table 3.1 is
due to the visual nature of sign languages, rather than historical similarity or language-
particular constraints, then signers of different sign languages and non-signers should
segment nonsense strings of signed material into word-sized units in the same way.
The cues that people use to make word segmentation decisions are typically put
into conflict with each other in experiments to determine their relative salience to
perceivers. Word segmentation judgments in spoken languages are based on (i) the
rhythmic properties of metrical feet (syllabic or moraic in nature), (ii) segmental cues,
such as the distribution of allophones, and (iii) domain cues, such as the spreading of
tone or nasality. Within the word, the first two of these are ‘linear’ or ‘sequential’ in
nature, while domain cues are simultaneous in nature ⫺ they are co-extensive with the
whole word. These cues have been put into conflict in word segmentation experiments
in a number of spoken languages, and it has been determined crosslinguistically that
rhythmic cues are more salient when put into conflict with domain cues or segmental
cues (Vroomen/Tuomainen/Gelder 1998; Jusczyk/Cutler/Redanz 1993; Jusczyk/Hohne/
Bauman 1999; Houston et al. 2000). By way of background, while both segmental and
rhythmic cues in spoken languages are realized sequentially, segmental alternations,
such as knowing the allophonic form that appears in coda vs. onset position, require
language-particular knowledge at a rather sophisticated level. Several potential allo-
phonic variants can be associated with different positions in the syllable or word,
though infants master this sometime between 9 and 12 months of age (Jusczyk/Hohne/
Bauman 1999). Rhythm cues unfold more slowly than segmental alternations and re-
quire less specialized knowledge about the grammar. For instance, there are fewer
degrees of freedom (e.g., strong vs. weak syllables in ‘chil.dren’, ‘break.fast’) and there
are only a few logically possible alternatives in a given word: if we assume that there
is at least one prominent syllable in every word, each syllable is either prominent or
not and only the all-weak pattern is excluded, leaving 2ⁿ ⫺ 1 options, so two-syllable
words have three possibilities and three-syllable words have seven. Incorporating modality into the
phonological architecture of spoken languages would help explain why certain struc-
tures, such as the trochaic foot, may be so powerful a cue to word learning in infants
(Jusczyk/Hohne/Bauman 1999).

Word-level phonotactic cues are available for sign languages as well, and these have
also been used in word segmentation experiments. Rhythmic cues are not used at the
word level in ASL; they begin to be in evidence at the phrasal level (see Miller 1996;
see also chapter 4, Visual Prosody). The word-level phonotactics described in (1) hold
above all for lexical stems; they are violated in ASL compounds to different degrees.

(1) Word-level phonotactics


a. Handshape: within a word selected finger features do not change in their
value; aperture features may change. (Mandel 1981)
b. Place of Articulation: within a word major Place of Articulation features
may not change in their value; setting features (minor place features) within
the same major body region may change. (Brentari 1998)
c. Movement: within a word repetition of movement is possible, or
‘circle+straight’ sequences (*‘straight+circle’ sequences). (Uyechi 1996)
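A sketch implementation of the three phonotactics in (1) is given below. Signs are idealized as dictionaries whose values record a parameter’s successive specifications within the word; the attribute names are my own, the check applies to stems only (compounds can violate these constraints), and aperture changes, which (1a) permits, are not modeled.

    def obeys_word_level_phonotactics(sign):
        ok_fingers = len(set(sign["selected_fingers"])) == 1        # (1a)
        ok_place = len(set(sign["major_place"])) == 1               # (1b)
        moves = sign["movements"]                                   # (1c)
        ok_movement = (len(set(moves)) == 1                  # repetition
                       or moves == ["circle", "straight"])   # circle+straight
        return ok_fingers and ok_place and ok_movement

    stem = {"selected_fingers": ["index"], "major_place": ["torso"],
            "movements": ["circle", "straight"]}
    assert obeys_word_level_phonotactics(stem)
    assert not obeys_word_level_phonotactics(
        {"selected_fingers": ["index"], "major_place": ["torso"],
         "movements": ["straight", "circle"]})  # *straight+circle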

ai. One sign, based on ASL        aii. Two signs, based on ASL

bi. One sign, based on ASL        bii. Two signs, based on ASL
Fig. 3.7: Examples of one- and two-movement nonsense forms in the word segmentation experiments. The forms in (a) with one movement were judged to be one sign by our participants; the forms in (b) with two movements were judged to be two signs. Based on ASL phonotactics, however, the forms in (ai) and (bi) should have been judged to be one sign and those in (aii) and (bii) to be two signs.

Within a word, which properties play more of a role in sign language word segmentation:
those that span the whole word (the domain cues) or those that change within
the word (i.e., the linear ones)? These cues were put into conflict with one another in
a set of balanced nonsense stimuli that were presented to signers and non-signers. The
use of a linear cue might be, for example, noticing that the open and closed aperture
variants of handshapes are related, and thereby judging a form containing such a
change to be one sign. The use of a domain strategy might be, for example, to ignore
sequential alternations entirely, and to judge every handshape or movement as a new
word. The nonsense forms in Figure 3.7 demonstrate this. If an ASL participant relied
on a linear strategy, Figure 3.7ai would be judged as one sign because it has an open
and closed variant of the same handshape, and Figure 3.7aii would be judged as two
signs because it contains two distinctively contrastive handshapes (two different se-
lected finger groups). Figure 3.7bi should be judged as one sign because it has a repeti-
tion of the movement and only one handshape and 3.7bii as two signs because it has
two contrastive handshapes and two contrastive movements.
Across the two studies, six groups of subjects participated. In the first study, groups
of native users of ASL and English participated (Brentari 2006); in the second, four
more groups were added, for a total of six: native users of ASL, Croatian Sign
Language (HZJ), and Austrian Sign Language (ÖGS), and speakers of English,
Austrian German, and Croatian (Brentari et al. 2011). The method was
the same in both studies. All were administered the same word segmentation task using
signed stimuli only. Participants were asked to judge whether controlled strings of
nonsense stimuli based on ASL words were one sign or two signs. It was hypothesized
that (1) signers and non-signers would differ in their strategies for segmentation, and
(2) signers would use their language-particular knowledge to segment sign strings.
Overall, the results were mixed. A ‘1 value = 1 word’ strategy was employed across groups,
primarily based on how many movements were in the string, despite language-particu-
lar grammatical knowledge. As stated earlier, for ASL participants Figures 3.7ai and
3.7bi should be judged one sign, because in 3.7ai the two handshapes are allowable in
one sign, as are the two movements in 3.7bi. Those in Figures 3.7aii and 3.7bii should
be judged as two signs. This did not happen in general; however, if each parameter is
analyzed separately, the way that the Handshape parameter was employed was signifi-
cantly different both between signing and non-signing groups, and among sign lan-
guage groups.
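The dominant strategy can be stated very simply. The sketch below is my schematic rendering of the reported behavior, not the experimental coding scheme: a nonsense string is idealized as a list of movement tokens, and each movement is treated as one word.

    def judged_sign_count(movements):
        """'1 value = 1 word': each movement in the string counts as one sign."""
        return len(movements)

    assert judged_sign_count(["aperture"]) == 1      # Figure 3.7a: one movement
    assert judged_sign_count(["path", "path"]) == 2  # Figure 3.7b: two movements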
The conclusion drawn from the word segmentation experiments is that modality
(the visual nature of the signal) plays a powerful role in word segmentation; this drives
the strong similarity in performance between groups using the Movement parameter.
It suggests that, when faced with a new type of linguistic string, the modality will play
a role in segmenting it. Incorporating this factor into the logic of phonological architec-
ture might help to explain why certain structures, such as the trochaic foot, may be so
powerful a cue to word learning in infants (Jusczyk/Cutler/Redanz 1993) and why pro-
sodic cues are so resilient crosslinguistically in spoken languages.

3.3. The reversal of segment to melody

A final modality effect is the organization of melody features relative to skeletal
segments in the hierarchical structure in Figure 3.1, which will be described here in detail. The
reason that timing units are located at the top of the hierarchical structure of spoken

languages is because they can be contrastive. In spoken languages, affricates, geminates,
long vowels, and diphthongs demonstrate that the number of timing slots must be
represented independently from the melody, even if the default case is one timing slot
per root node. Examples of an affricate and a geminate in Italian are given in (2).

(2) Spoken language phonology ⫺ root:segment ratios [Italian]
    a. affricate (e.g., [tʃ]): two melodic units associated with one timing slot
    b. geminate (e.g., [tː]): one melodic unit associated with two timing slots

The Dependency and Prosodic Models of sign language phonology build in the
fact that length is not contrastive in any known sign language, and the number of
timing slots is predictable from the content of the features. As a consequence, the
melody (i.e., the feature material) has a higher position in the structure and timing
slots a lower position; in other words, the reverse of what occurs in spoken languages
where timing units are the highest node in the structure (see also van der Hulst (2000)
for this same point). As shown in Figure 3.2b and 3.3c, the composition of the prosodic
features can generate the number of timing slots. In the Prosodic Model path features
generate two timing slots, all other features generate one timing slot.
What would motivate this structural difference between the two types of languages?
One reason has already been mentioned: audition has the advantage over vision in
making temporal judgments, so it makes sense that the temporal elements of speech
have a powerful and independent role in phonological structure with respect to the
melody. One logical consequence of this is that the timing tier, containing either seg-
ments or moras, is more heavily exploited to produce contrast within the system and
must assume a more prominent role in spoken than in sign languages. A schema for
the relationship between timing slots, root node, and melody in sign and spoken lan-
guages is given in (3).

(3) Organization of phonological material in sign vs. spoken languages


a. Spoken languages            b. Sign languages
   x (timing slot)                root
         |                          |
       root                      melody
         |                          |
     melody                   x (timing slot)

To conclude this section on modality, we see that it affects other levels of representa-
tion. An effect of modality on the phonetic representation can be seen in the similar
use of movement in signers and non-signers in making word segmentation judgments.
An effect on the phonological representation can be seen when the single movement
(now assuming the role of syllable in a sign language phonological system) is used to
express a particular phonological rule or constraint, such as the phonotactic constraint
on handshape change: that is, one handshape change per syllable. An effect of modality
on the morphophonological representation can be seen in the typological niche that
sign languages occupy, whereby words are monosyllabic and polymorphemic.
Modality effects are readily observable in sign languages because of their contrast with structures in spoken languages, and they also invite a second look at spoken language systems for effects of modality on speech that are typically taken for granted.

4. Iconicity effects

The topic of iconicity in sign languages is vast, covering all linguistic areas ⫺ e.g., prag-
matics, lexical organization, phonetics, morphology, the evolution of language ⫺ but in
this chapter only aspects of iconicity that are specifically relevant for the phonological
and morphophonemic representation will be discussed in depth (see also chapter 18 on
iconicity and metaphor). The idea of analyzing iconicity and phonology together is fasci-
nating and relatively recent. See, for instance, van der Kooij (2002), who examined the phonology-iconicity connection in native signs and proposed a level of phonetic implementation rules where iconicity plays a role. Even more recently, Eccarius (2008) provides a way to rank the effects of iconicity throughout the whole lexicon of a sign language. Until recently, research on phonology and research on iconicity were taken up by sub-fields completely independent from one another, one side sometimes even going so far as to deny the importance of the other. Iconicity has been a serious topic of study in cognitive, semiotic, and functionalist linguistic perspectives, most particularly dealing with productive, metaphoric, and metonymic phenomena (Brennan 1990; Cuxac 2000; Taub 2001; Brennan 2005; Wilcox 2001; Russo 2005; Cuxac/Sallandre 2007). In con-
trast, with the notable exceptions just mentioned, phonology has been studied within a
generative approach, using tools that make as little reference to meaning or iconicity as
possible. For example, the five models in Figure 3.5 (the Cheremic, Hold-Movement,
Hand-Tier, Dependency, and Prosodic Models) make reference to iconicity in neither the
inventory nor the system of rules.
‘Iconicity’ refers to a mapping between a source domain and the linguistic form (Taub 2001); it is one of the three Peircean notions of iconicity, indexicality, and symbolicity (Peirce 1932 [1902]); see chapter 18 for a general introduction to iconicity in sign languages. From the
very beginning, iconicity has been a major topic of study in sign language research. It is
always the ‘800-lb. gorilla in the room’, despite the fact that the phonology can be con-
structed without it. Stokoe (1960), Battison (1978), Friedman (1976), Klima and Bellugi
(1979), Boyes Braem (1981), Sandler (1989), Brentari (1998), and hosts of references
cited therein have all established that ASL has a phonological level of representation
using exclusively linguistic evidence based on the distribution of forms ⫺ examples come
from slips of the hand, minimal pairs, phonological operations, and processes of word-
formation (see Hohenberger/Happ/Leuninger 2002 and chapter 30). In native signers,
iconicity has been shown experimentally to play little role in first-language acquisition
(Bonvillian/Orlansky/Folven 1990; Conlin et al. 2000 and also chapter 28) or in language
processing; Poizner, Bellugi, and Tweney (1981) demonstrated that iconicity has no relia-
ble effect on short-term recall of signs; Emmorey et al. (2004) showed specifically that
motor-iconicity of sign languages (involving movement) does not alter the neural systems
underlying tool and action naming. Thompson, Emmorey, and Gollan (2005) have used
‘tip of the finger’ phenomena (i.e., almost ⫺ but not quite ⫺ being able to recall a sign)
to show that the meaning and form of signs are accessed independently, just as they are
in spoken languages (see also chapter 29 for further discussion). Yet iconicity is present throughout the lexicon, and every one of the authors mentioned above acknowledges that it is pervasive.
There is, however, no way to measure quantitatively and absolutely just how much iconicity there is in a sign language lexicon. The question, ‘Iconic to whom, and under what conditions?’ is always relevant, so we need to acknowledge that iconicity is age-specific (signs for telephone have changed over time, yet both older and newer forms are iconic, cf. Supalla 1982, 2004) and language-specific (signs for tree are different in Danish, Hong Kong, and American Sign Languages, yet all are iconic). Except for a restricted set of cases where gestures from the surrounding (hearing) community are incorporated in their entirety into a specific sign language, the iconicity resides in the sub-
lexical units, either in classes of features that reside at a class node or in individual
features themselves. Iconicity is thought to be one of the factors that makes sign lan-
guages look so similar (Guerra 1999; Guerra/Meier/Walters 2002; Wilcox/Rossini/Piz-
zuto 2010; Wilbur 2010), and sensitivity to and productive use of iconicity may be one
of the reasons why signers from different language families can communicate with each
other so readily after so little time, despite crosslinguistic differences in lexicon, and,
in many instances, also in the grammar (Russo 2005). Learning how to use iconicity
productively within the grammar is undoubtedly a part of acquiring a sign language.
I will argue that iconicity and phonology are not incompatible, and this view is gaining
more support within the field (van der Kooij 2002; Meir 2002; Brentari 2007; Eccarius
2008; Brentari/Eccarius 2010; Wilbur 2010). Now, after all of the work over recent dec-
ades showing indisputably that sign languages have phonology and duality of patterning,
one can only conclude that it is the distribution that must be arbitrary and systematic in order
for phonology to exist. In other words, even if a property is iconic, it can also be phono-
logical because of its distribution. Iconicity should not be thought of as either a hindrance
or opposition to a phonological grammar, but rather another mechanism, on a par with
ease of production or ease of perception, that contributes to inventories. Saussure wasn't wrong, but because he drew his generalizations from spoken languages, his conclusions are based on tendencies in a communication modality that can use iconicity only on a more limited basis than sign languages can. Iconicity does exist in spoken languages in reduplication (e.g., Haiman 1980) as well as in expressives/ideophones. See, for example, Bodomo
(2006) for a discussion of these in Dagaare, a Gur language of West Africa. See also
Okrent (2002), Shintel, Nussbaum, and Okrent (2006), and Shintel and Nussbaum (2007)
for the use of vocal quality, such as length and pitch, in an iconic manner.
Iconicity contributes to the phonological shape of forms more in sign than in spoken
languages, so much so that we cannot afford to ignore it. I will show that iconicity is a
strong factor in building signed words, but it is also restricted and can ultimately give
rise to arbitrary distribution in the morphology and phonology. What problems can be
confronted or insights gained from considering iconicity? In the next sections we will
see some examples of iconicity and arbitrariness working in parallel to build words
and expressions in sign languages, using the feature classes of handshape, orientation, and movement. See also chapter 20 for a discussion of the Event Visibility Hypoth-
esis (Wilbur 2008, 2010), which also pertains to iconicity and movement. The mor-
phophonology of word formation exploits and restricts iconicity at the same time; it is
used to build signed words, yet outputs are still very much restricted by the phonologi-
cal grammar. Section 4.1 can be seen as contributing to the historical development of
a particular aspect of sign language phonology; the other sections concern synchronic
phenomena.

4.1. The historical emergence of phonology

Historically speaking, Frishberg (1975) and Klima and Bellugi (1979) have established
that sign languages become ‘less iconic’ over time, but iconicity never reduces to zero
and continues to be productive in contemporary sign languages. Let us consider the
two contexts in which sign languages arise. In most Deaf communities, sign languages
are passed down from generation to generation not through families, but through com-
munities ⫺ i.e., schools, athletic associations, social clubs, etc. But initially, before there
is a community per se, signs begin to be used through interactions among individuals ⫺
either among deaf and hearing individuals (‘homesign systems’), or in stable communi-
ties in which there is a high incidence of deafness. Homesign systems arise when deaf individuals, isolated within a hearing family or community, devise a method for communicating through gestures that become systematic (Goldin-Meadow 2003).
Something similar happens on a larger scale in systems that develop in communities
with a high incidence of deafness due to genetic factors, such as the island of Martha’s
Vineyard in the seventeenth century (Groce 1985) and Al-Sayyid Bedouin Sign Lan-
guage (ABSL; Sandler et al. 2005; Meir et al. 2007; Padden et al. 2010). In both cases,
these systems develop at first within a context where being transparent through the
use of iconicity is important in making oneself understood.
Mapping this path from homesign to sign language has become an important re-
search topic since it allows linguists the opportunity to follow the diachronic path of a
sign language al vivo in a way that is no longer possible for spoken languages. In the
case of a pidgin, a group of isolated deaf individuals are brought together at a school
for the deaf. Each individual brings to the school a homesign system that, along with
other homesign systems, undergoes pidginization and ultimately creolization. This has
happened in the development of Nicaraguan Sign Language (NSL; Kegl/Senghas/Cop-
pola 1999; Senghas/Coppola 2001). This work to date has largely focused on morphol-
ogy and syntax, but when and how does phonology arise in these systems? Aronoff et
al. (2008) and Sandler et al. (2011) have claimed that ABSL, while highly iconic, still
has no duality of patterning even though it is ~75 years old. It is well known, however,
that in first-language acquisition of spoken languages, infants are statistical learners
and phonology is one of the first components to appear (Locke 1995; Aslin/Saffran/
Newport 1998; Creel/Newport/Aslin 2004; Jusczyk et al. 1993, 1999).
Phonology emerges in a sign language when properties ⫺ even those with iconic
origins ⫺ take on conventionalized distributions, which are not predictable from their
iconic forms. Over the last several years, a project has been studying how these sub-
types of features adhere to similar patterns of distribution in sign languages, gesture,
and homesign (Brentari et al. 2012). It is an example of the intertwined nature of
iconicity and phonology that addresses how a phonological distribution might emerge
in sign languages over time.
Productive handshapes, particularly their selected finger features, were studied in adult native signers, hearing gesturers (gesturing without using their voices), and homesigners. The results show that the distribution of selected finger
properties is re-organized over time. Handshapes were divided into three levels of
selected finger complexity. Low complexity handshapes have the simplest phonological
representation (Brentari 1998), are the most frequent handshapes crosslinguistically
(Hara 2003; Eccarius/Brentari 2007), and are the earliest handshapes acquired by na-
tive signers (Boyes Braem 1981). Medium complexity and High complexity handshapes
are defined in structural terms ⫺ i.e., the simpler the structure, the less complexity
it contains. Medium complexity handshapes have one additional elaboration of the
representation of a [one]-finger handshape, either by adding a branching structure or
an extra association line. High complexity handshapes are all other handshapes. Exam-
ples of low and medium complexity handshapes are shown in Figure 3.8.

Fig. 3.8: The three handshapes with low finger complexity and examples of handshapes with me-
dium finger complexity. The parentheses around the B-handshape indicate that it is the
default handshape in the system.

The selected finger complexity of two types of productive handshapes was analyzed:
those representing objects and those representing the handling of objects (correspond-
ing to whole entity and handling classifier handshapes in a sign language, respectively;
see section 4.2 and chapter 8). The pattern that appeared in signers and homesigners
showed no significant differences along the dimension analyzed: relatively higher finger
complexity in object handshapes and lower for handling handshapes (Figure 3.9). The
opposite pattern appeared in gesturers, which differed significantly from the other two
groups: higher finger complexity in handling handshapes and lower in object hand-
shapes. These results indicate that as handshape moves from gesture to homesign and
ultimately to a sign language, object handshapes gain finger complexity and handling
handshapes lose it relative to their distribution in gesture. In other words, even though
all of these handshapes are iconic in all three groups, the features involved in selected
fingers are heavily re-organized in sign languages, and the homesigners already display
signs of this re-organization.
Fig. 3.9: Mean finger complexity, using a Mixed Linear statistical model for Object handshapes and
Handling handshapes in signers, homesigners, and gesturers (Brentari et al. 2012).

4.2. Orientation in classifier constructions is arbitrarily distributed

Another phonological structure that has iconic roots but is ultimately distributed arbi-
trarily in ASL is the orientation of the handshape of classifier constructions (see chap-
ter 8 for further discussion). For our purposes here, classifier constructions can be de-
fined as complex predicates in which movement, handshape, and location are meaningful
elements; we focus here on handshape, which includes the orientation relation discussion
in section 2. We will use Engberg-Pedersen’s (1993) system, given in (4), which divides
the classifier handshapes into four groups. Examples of each are given in Figure 3.10.

(4) Categories of handshape in classifier constructions (Engberg-Pedersen 1993)


a. Whole Entity: These handshapes refer to whole objects (e.g., 1-handshape:
‘person’ (Figure 3.10a)).
b. Surface: These handshapes refer to the physical properties of an object (e.g.,
B-B-handshape: ‘flat_surface’ (Figure 3.10b)).
c. Limb/Body Part: These handshapes refer to the limbs/body parts of an agent
(e.g., V-handshape: ‘by_legs’ (Figure 3.10c)). In ASL we have found that the
V-handshape ‘by-legs’ can function as either a body or whole entity classifier.
d. Handling: These handshapes refer to how an object is handled or manipu-
lated (e.g., S-handshape: ‘grasp_gear_shift’ (Figure 3.10d)).

Benedicto and Brentari (2004) and Brentari (2005) argued that, while all types of classifier constructions use handshape morphologically because at least part of the handshape is used in this way, only classifier handshapes of the handling and limb/body part type can use orientation in a morphological way. Whole entity and surface classifier handshapes cannot. This is shown in Figure 3.10, which illustrates the variation of the forms using orientation phonologically and morphologically. The forms using the whole entity classifier in Figure 3.10ai (‘person’) and the surface classifier in Figure 3.10bi (‘flat surface’) are not grammatical if the orientation is changed to the hypothetical forms in 3.10aii (‘person upside down’) and 3.10bii (‘flat surface upside down’), indicated by an ‘x’ through the ungrammatical forms. Orientation differences in whole entity classifiers are shown by signing the basic form and then sequentially adding a movement to that form to indicate a change in orientation. In contrast, forms using the body part classifier and the handling classifier in Figure 3.10ci (‘by-legs’) and 3.10di (‘grasp gear shift’) are grammatical when articulated with different orientations, as shown in 3.10cii (‘by-legs be located upside down’) and 3.10dii (‘grasp gear shift from below’).

Fig. 3.10: Examples of the distribution of phonological (top) and morphological (bottom) use of orientation in classifier predicates (ASL). Phonological use of orientation: ai. upright, aii. upside down (whole entity, 1-HS ‘person’); bi. surface of table, bii. surface upside down (surface/extension, B-HS ‘flat surface’). Morphological use of orientation: ci. upright, cii. upside down (body part, V-HS ‘person’); di. grasp from above, dii. grasp from below (handling, S-HS ‘grasp-gear shift’). Whole Entity and Surface/Extension classifier handshapes (10a) and (10b) allow only phonological use of orientation (so a change in orientation is not permissible), while Body Part and Handling classifier handshapes (10c) and (10d) allow both phonological and morphological use of orientation, so a change in orientation is possible.
This analysis requires phonology because the representation of handshape must
allow for subclasses of features to function differently, according to the type of classifier
handshape being used. In all four types of classifiers, part of the phonological orienta-
tion specification expresses a relevant handpart’s orientation (palm, fingertips, back of
hand, etc.) toward a place of articulation, but only in body part and handling classifiers
is it allowed to function morphologically as well. It has been shown that these four
types of classifiers have different syntactic properties as well (Benedicto/Brentari 2004;
Grose et al. 2007).
It would certainly be more iconic to have the orientation expressed uniformly across
the different classifier types, but the grammar does not allow this. We therefore have
evidence that iconicity is present but constrained in the use of orientation in classifier
predicates in ASL.

4.3. Directional path movement and verb agreement

Another area in sign language grammars where iconicity plays an important role is
verb agreement (see also chapters 7 and 10). Agreement verbs manifest the transfer
of entities, either abstract or concrete. Salience and stability among arguments may be
encoded not only in syntactic terms, but also by visual-spatial means. Moreover, path
movements, which are an integral part of these expressions, are phonological properties
in the feature tree, as are the spatial loci of sign language verb agreement. There is
some debate about whether the locational loci are, in fact, part of the phonological
representation because they have an infinite number of phonetic realizations. See
Brentari (1998) and Mathur (2000) for two possible solutions to this problem.
There are three types of verbs attested in sign languages (Padden 1983): those that
do not manifest agreement (‘plain’ verbs), and those that do, which divide further
into those known as ‘spatial’ verbs, which take only source-goal agreement, and
‘agreement’ verbs, which take source-goal agreement, as well as object and potentially
subject agreement (Brentari 1988; Meir 1998, 2002; Meir et al. 2007). While Padden’s
1983 analysis was based on syntactic criteria alone, these more recent studies include
both semantics (including iconicity) and syntax in their analysis. The combination of
syntactic and semantic motivations for agreement in sign languages was formalized as
the ‘direction of transfer principle’ (Brentari 1988), but the analysis of verb agreement
as having an iconic source was first proposed in Meir (2002). Meir (2002) argues that
the main difference between verb agreement in spoken languages and sign languages
is that verb agreement in sign languages seems to be thematically (semantically), rather
than syntactically, determined (Kegl (1985) was the first to note this). Agreement typi-
cally involves the representation of phi features of the NP arguments, and functionally
it is a part of the referential system of a language. Meir observes that typically in
spoken languages there is a closer relationship between agreement markers and struc-
tural positions in the syntax than between agreement markers and semantic roles, but
sign language verbs can agree not only with themes and agents but also with their source and goal arguments.
Crucially, Meir argues that ‘DIR’, which is an abstract construct used in a transfer
(or directional) verb, is the iconic representation of the semantic notion ‘path’ used in
theoretical frameworks, such as Jackendoff (1996, 320); DIR denotes spatial relations.
It can appear as an independent verb or as an affix to other verbs. This type of iconicity
is rooted in the fact that referents in a signed discourse are tracked both syntactically
and visuo-spatially; however, this iconicity is constrained by the phonology. Independ-
ently a [direction] feature has been argued for in the phonology, indicating a path
moving to or from a particular plane of articulation, as described in section 2 (Bren-
tari 1998).
Fig. 3.11: Examples of verb agreement in ASL and how it is expressed in the phonology: be-sorry
expresses no manual agreement; say-yes expresses the direction feature of agreement
in orientation; help in path; and pay in both orientation and path.

The abstract morpheme DIR and the phonological feature [direction] are distrib-
uted in a non-predictable (arbitrary) fashion both across sign languages (Mathur/Rath-
mann 2006, 2010) and language internally. In ASL it can surface in the path of the verb
or in the orientation; that is, on one or both of these parameters. It is the phonology of
the stem that accounts for the distribution of orientation and path as agreement mark-
ers, predicting whether it will surface and, if so, where. Figure 3.11 provides
ASL examples of how this works. In Figure 3.11a we see an example of the agreement
verb be-sorry, which takes neither orientation nor source-goal marking. Signs in this
set have been argued to have eye gaze substituted for the manual agreement marker
(Bahan 1996; Neidle et al. 2000), but there is debate about exactly what role eye gaze
plays in the agreement system (Thompson et al. 2006). The phonological factor rele-
vant here is that many signs in this set have a distinct place of articulation that is on
or near the body. In Figure 3.11b we see an example of an agreement verb that takes
only the orientation marker of agreement, say-yes; this verb has no path movement in
the stem that can be modified in its beginning and ending points (Askins/Perlmutter
1995), but the affixal DIR morpheme is realized on the orientation, with the palm of
the hand facing the vertical plane of articulation associated with the indirect object. In
Figure 3.11c there is an example of an agreement verb that has a path movement in
the stem ⫺ help ⫺ whose beginning and endpoints can be modified according to the
subject and object locus. Because of the angle of wrist and forearm, it would be very
difficult (if not impossible) to modify the orientation of this sign (Mathur/Rathmann
2006). In Figure 3.11d we see an example of the agreement verb pay that expresses the
DIR verb agreement on both path movement and orientation; the path moves from
the payer to the payee, and the orientation of the fingertip is towards the payee at the
end of the sign. The analysis of this variation depends in part on the lexical specifica-
tion of the stem ⫺ whether orientation or path is specified in the stem of the verb or
supplied by the verb-agreement morphology (Askins/Perlmutter 1995) ⫺ and in part
on the phonetic-motoric constraints on the articulators involved in articulating the
stem ⫺ i.e., the joints of the arms and hands (Mathur/Rathmann 2006).
In summary, iconicity is a factor that contributes to the phonological inventories of
sign languages. Based on the work presented in this section, I would maintain that the
distribution of the material is more important for establishing the phonology of sign
languages than the material used ⫺ iconic or otherwise. One can generalize across
sections 4.1⫺4.3 and say that, taken alone, each of the elements discussed has iconic
roots, yet even so, this iconicity is distributed in unpredictable ways (that is, unpredicta-
ble if iconicity were the only motivation). This is true for which features ⫺ joints or
fingers ⫺ will be the first indications of an emerging phonology (section 4.1), for the
orientation of the hand representing the orientation of the object in space (section
4.2), and for the realization of verb agreement (section 4.3).

5. Conclusion

As stated in the introduction to this chapter, this piece was written in part to answer
the following questions: ‘Why should phonologists, who above all else are fascinated
with the way things sound, care about systems without sound? How does it relate to
their interests?’ I hope that I have shown that by using work on sign languages, phonologists can broaden the scope of the discipline to one that includes not only analyses of phonological structures but also analyses of how modality and iconicity infiltrate and interact with phonetic, phonological, and morphophonological structure. This is true in both
sign and spoken languages, but we see these effects more vividly in sign languages. In
the case of modality this is because, chronologically speaking, analyses of sign lan-
guages set up comparisons with what has come before (e.g., analyses of spoken lan-
guages grounded in a different communication modality) and we now see that some
of the differences between the two languages result from modality differences. An
important point of this chapter was that general phonological theory can be better
understood by considering its uses in sign language phonology. For example, non-linear
phonological frameworks allowed for breakthroughs in understanding spoken and sign
languages that would not have been possible otherwise, but also allowed the architec-
tural building blocks of a phonological system to be isolated and examined in such a
way as to see how both the visual and auditory systems (the communication modalities)
affect the ultimate shape of words and organization of units, such as features, segments,
and syllables.
The effects of iconicity on phonological structure are seen more strongly in sign
languages because of the stronger role that visual iconicity can play in these languages
compared with auditory iconicity in spoken languages. Another important point for
general phonological theory that I have tried to communicate in this chapter has to do
with the ways in which sign languages manage iconicity. Just because a property is
iconic, doesn’t mean it can’t also be phonological. Unfortunately some phonologists
studying sign languages called attention away from iconicity for a long time, but iconic-
ity is a pervasive pressure on the output of phonological form in sign languages (on a
par with ease of perception and ease of articulation), and we can certainly benefit from
studying its differential effects both synchronically and diachronically.
Finally, the more phonologists focus on the physical manifestations of the system ⫺ the vocal tract, the hands, the ear, the eyes ⫺ the more different sign and spoken language phonologies will look, though in interesting ways. The more focus there is on the mind, the more sign language and spoken language phonologies will look the same, in ways that can lead to a better understanding of a general (cross-modal) phonological competence.

Acknowledgements: This work is being carried out thanks to NSF grants BCS 0112391
and BCS 0547554 to Brentari. Portions of this chapter have appeared in Brentari (2011)
and are reprinted with permission of Blackwell Publishing.

6. Literature
Alpher, Barry
1994 Yir-Yoront Ideophones. In: Hinton, Leanne/Nichols, Johanna/Ohala, John J. (eds.),
Sound Symbolism. Cambridge: Cambridge University Press, 161⫺177.
Anderson, John/Ewen, Colin J.
1987 Principles of Dependency Phonology. Cambridge: Cambridge University Press.
Andruski, Jean E./Ratliff, Martha
2000 Phonation Types in Production of Phonological Tone: The Case of Green Mong. In:
Journal of the International Phonetic Association 30(1/2), 63⫺82.
Aronoff, Mark/Padden, Carol/Meir, Irit/Sandler, Wendy
2004 Morphological Universals and the Sign Language Type. In: Booij, Geert/Marle, Jaap
van (eds.), Yearbook of Morphology 2004. Dordrecht: Kluwer Academic Publishers,
19⫺40.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81, 301⫺344.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy
2008 The Roots of Linguistic Organization in a New Language. In: Bickerton, Derek/Arbib,
Michael (eds.), Holophrasis vs. Compositionality in the Emergence of Protolanguage
(Special Issue of Interaction Studies 9(1)), 131⫺150.
Askins, David/Perlmutter, David
1995 Allomorphy Explained through Phonological Representation: Person and Number In-
flection of American Sign Language. Paper Presented at the Annual Meeting of the
German Linguistic Society, Göttingen.
Aslin, Richard N./Saffran, Jenny R./Newport, Elissa L.
1998 Computation of Conditional Probability Statistics by 8-Month-Old Infants. In: Psycho-
logical Science 9, 321⫺324.
Bahan, Benjamin
1996 Nonmanual Realization of Agreement in American Sign Language. PhD Dissertation,
Boston University.
Battison, Robbin
1978 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Benedicto, Elena/Brentari, Diane
2004 Where Did All the Arguments Go? Argument Changing Properties of Classifiers in
ASL. In: Natural Language and Linguistic Theory 22, 743⫺810.
Bloomfield, Leonard
1933 Language. New York: Henry Holt and Co.
Bodomo, Adama
2006 The Structure of Ideophones in African and Asian Languages: The Case of Dagaare
and Cantonese. In: Mugane, John (ed.), Selected Proceedings of the 35th Annual Confer-
ence on African Languages. Somerville, MA: Cascadilla Proceedings Project, 203⫺213.
Bonvillian, John D./Orlansky, Michael D./Folven, Raymond J.
1990 Early Sign Language Acquisition: Implications for Theories of Language Acquisition.
In: Volterra, Virginia/Erting, Carol J. (eds.), From Gesture to Language in Hearing and
Deaf Children. Berlin: Springer, 219⫺232.
Boyes Braem, Penny
1981 Distinctive Features of the Handshapes of American Sign Language. PhD Dissertation,
University of California.
Brennan, Mary
1990 Word Formation in British Sign Language. PhD Dissertation, University of Stockholm.
Brennan, Mary
2005 Conjoining Word and Image in British Sign Language (BSL): An Exploration of Meta-
phorical Signs in BSL. In: Sign Language Studies 5, 360⫺382.
Brentari, Diane
1988 Backwards Verbs in ASL: Agreement Re-Opened. In: Proceedings from the Chicago
Linguistic Society Issue 24, Volume 2: Parasession on Agreement in Grammatical Theory.
Chicago: University of Chicago, 16⫺27.
Brentari, Diane
1990a Theoretical Foundations of American Sign Language Phonology. PhD Dissertation, Lin-
guistics Department, University of Chicago.
Brentari, Diane
1990b Licensing in ASL Handshape Change. In: Lucas, Ceil (ed.), Sign Language Research:
Theoretical Issues. Washington, DC: Gallaudet University Press, 57⫺68.
Brentari, Diane
1993 Establishing a Sonority Hierarchy in American Sign Language: The Use of Simultane-
ous Structure in Phonology. In: Phonology 10, 281⫺306.
Brentari, Diane
1994 Prosodic Constraints in American Sign Language. Proceedings from the 20th Annual
Meeting of the Berkeley Linguistics Society, Berkeley: Berkeley Linguistic Society.
Brentari, Diane
1995 Sign Language Phonology: ASL. In: Goldsmith, John (ed.), Handbook of Phonological
Theory. Oxford: Blackwell, 615⫺639.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brentari, Diane
2002 Modality Differences in Sign Language Phonology and Morphophonemics. In: Meier,
Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed
and Spoken Languages. Cambridge: Cambridge University Press, 35⫺64.
Brentari, Diane
2005 The Use of Morphological Templates to Specify Handshapes in Sign Languages. In:
Linguistische Berichte 13, 145⫺177.
Brentari, Diane
2006 Effects of Language Modality on Word Segmentation: An Experimental Study of Pho-
nological Factors in a Sign Language. In: Goldstein, Louis/Whalen, Douglas H./Best,
Catherine (eds.), Papers in Laboratory Phonology VIII. The Hague: Mouton de Gruy-
ter, 155⫺164.
Brentari, Diane
2007 Sign Language Phonology: Issues of Iconicity and Universality. In: Pizzuto, Elena/Pie-
trandrea, Paola/Simone, Raffaele (eds.), Verbal and Signed Languages, Comparing
Structures, Constructs and Methodologies. Berlin: Mouton de Gruyter, 59⫺80.
Brentari, Diane
2011 Sign Language Phonology. In: Goldsmith, John/Riggle, Jason/Yu, Alan (eds.), Hand-
book of Phonological Theory. Oxford: Blackwell, 691⫺721.
Brentari, Diane/Coppola, Marie/Mazzoni, Laura/Goldin-Meadow, Susan
2012 When Does a System Become Phonological? Handshape Production in Gesturers, Sign-
ers, and Homesigners. In: Natural Language and Linguistic Theory 30, 1⫺31.
Brentari, Diane/Eccarius, Petra
2010 Handshape Contrasts in Sign Language Phonology. In: Brentari, Diane (ed.) Sign Lan-
guages: A Cambridge Language Survey. Cambridge: Cambridge University Press,
284⫺311.
Brentari, Diane/González, Carolina/Seidl, Amanda/Wilbur, Ronnie
2011 Sensitivity to Visual Prosodic Cues in Signers and Nonsigners. In: Language and Speech
54, 49⫺72.
Brentari, Diane/Padden, Carol
2001 Native and Foreign Vocabulary in American Sign Language: A Lexicon with Multiple
Origins. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages. Mahwah, NJ:
Lawrence Erlbaum, 87⫺119.
Channon, Rachel
2002 Beads on a String? Representations of Repetition in Spoken and Signed Languages. In:
Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure
in Signed and Spoken Languages. Cambridge: Cambridge University Press, 65⫺87.
Chomsky, Noam/Halle, Morris
1968 The Sound Pattern of English. New York: Harper and Row.
Clements, G. Nick
1985 The Geometry of Phonological Features. In: Phonology Yearbook 2, 225⫺252.
Conlin, Kimberly E./Mirus, Gene/Mauk, Claude/Meier, Richard P.
2000 Acquisition of First Signs: Place, Handshape, and Movement. In: Chamberlain, Char-
lene/Morford, Jill P./Mayberry, Rachel (eds.), Language Acquisition by Eye. Mahwah,
NJ: Lawrence Erlbaum, 51⫺69.
Corina, David
1990 Reassessing the Role of Sonority in Syllable Structure: Evidence from a Visual-gestural
Language. In: Proceedings from the 26th Annual Meeting of the Chicago Linguistic Soci-
ety: Vol. 2: Parasession on the Syllable in Phonetics and Phonology. Chicago: Chicago
Linguistic Society, 33⫺44.
Coulter, Geoffrey
1982 On the Nature of ASL as a Monosyllabic Language. Paper Presented at the Annual
Meeting of the Linguistic Society of America, San Diego, California.
Crasborn, Onno
1995 Articulatory Symmetry in Two-handed Signs. MA Thesis, Linguistics Department, Rad-
boud University Nijmegen.
Crasborn, Onno
2001 Phonetic Implementation of Phonological Categories in Sign Language of the Nether-
lands. Utrecht: LOT (Netherlands Graduate School of Linguistics).
Crasborn, Onno
submitted Inactive Manual Articulators in Signed Languages.
Crasborn, Onno/Kooij, Els van der
1997 Relative Orientation in Sign Language Phonology. In: Coerts, Jane/Hoop, Helen de
(eds.), Linguistics in the Netherlands 1997. Amsterdam: Benjamins, 37⫺48.
Creel, Sarah C./Newport, Elissa L./Aslin, Richard N.
2004 Distant Melodies: Statistical Learning of Nonadjacent Dependencies in Tone Sequen-
ces. In: Journal of Experimental Psychology: Learning, Memory, and Cognition 30,
1119⫺1130.
Cuxac, Christian
2000 La LSF, les Voies de l’Iconicité. Paris: Ophrys.
Cuxac, Christian/Sallandre, Marie-Anne
2007 Iconicity and Arbitrariness in French Sign Language: Highly Iconic Structures, Degen-
erated Iconicity and Diagrammatic Iconicity. In: Pizzuto, Elena/Pietrandrea, Paolo/Sim-
one, Raffaele (eds.), Verbal and Signed Languages, Comparing Structures, Constructs,
and Methodologies. Berlin: Mouton de Gruyter, 13⫺34.
Dixon, Robert Malcolm Ward
1977 A Grammar of Yidiny. Cambridge: Cambridge University Press.
Eccarius, Petra
2008 A Constraint-based Account of Handshape Contrast in Sign Languages. PhD Disserta-
tion, Linguistics Program, Purdue University.
Emmorey, Karen/Grabowski, Thomas/McCullough, Steven/Damasio, Hannah/Ponto, Laurie/
Hichwa, Richard/Bellugi, Ursula
2004 Motor-iconicity of Sign Language Does not Alter the Neural Systems Underlying Tool
and Action Naming. In: Brain and Language 89, 27⫺38.
Engberg-Pedersen, Elisabeth
1993 Space in Danish Sign Language. Hamburg: Signum.
Fortescue, Michael
1984 West Greenlandic. London: Croom Helm.
Friedman, Lynn
1976 Phonology of a Soundless Language: Phonological Structure of American Sign Lan-
guage. PhD Dissertation, University of California.
Frishberg, Nancy
1975 Arbitrariness and Iconicity. In: Language 51, 696⫺719.
Geraci, Carlo
2009 Epenthesis in Italian Sign Language. In: Sign Language & Linguistics 12, 3⫺51.
Goldin-Meadow, Susan
2003 The Resilience of Language. New York: Psychology Press.
Goldin-Meadow, Susan/McNeill, David/Singleton, Jenny
1996 Silence is Liberating: Removing the Handcuffs on Grammatical Expression in the Man-
ual Modality. In: Psychological Review 103(1), 34⫺55.
Goldin-Meadow, Susan/Mylander, Carolyn/Butcher, Cynthia
1995 The Resilience of Combinatorial Structure at the Word Level: Morphology in Self-
styled Gesture Systems. In: Cognition 56, 195⫺262.
Goldsmith, John
1976 Autosegmental Phonology. PhD Dissertation, Linguistics Department, MIT [published
1979, New York: Garland Press].
Golston, Chris/Yang, Phong
2001 White Hmong Loanword Phonology. Paper Presented at the Holland Institute for Gen-
erative Linguistics Phonology Conference V (HILP 5). Potsdam, Germany.
Groce, Nora Ellen
1985 Everyone Here Spoke Sign Language. Cambridge, MA: Harvard University Press.
Grose, Donovan/Schalber, Katharina/Wilbur, Ronnie
2007 Events and Telicity in Classifier Predicates: A Reanalysis of Body Part Classifier Predi-
cates in ASL. In: Lingua 117(7), 1258⫺1284.
Guerra, Anne-Marie Currie
1999 A Mexican Sign Language Lexicon: Internal and Cross-linguistic Similarities and Varia-
tions. PhD Dissertation, Linguistics Department, University of Texas at Austin.
Guerra, Anne-Marie Currie/Meier, Richard/Walters, Keith
2002 A Crosslinguistic Examination of the Lexicons of Four Signed Languages. In: Meier,
Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed
and Spoken Languages. Cambridge: Cambridge University Press, 224⫺236.
Haiman, John
1979 Hua: A Papuan Language of New Guinea. In: Shopen, Timothy (ed.), Languages and
Their Status. Cambridge, MA: Winthrop Publishers, 35⫺90.
Haiman, John
1980 The Iconicity of Grammar: Isomorphism and Motivation. In: Language 56(3), 515⫺540.
Hara, Daisuke
2003 A Complexity-based Approach to the Syllable Formation in Sign Language. PhD Disser-
tation, Linguistics Department, University of Chicago.
Hayes, Bruce
1995 Metrical Stress Theory: Principles and Case Studies. Chicago: University of Chicago
Press.
Hinton, Leanne/Nichols, Johanna/Ohala, John J.
1994 Sound Symbolism. Cambridge: Cambridge University Press.
Hohenberger, Annette/Happ, Daniela/Leuninger, Helen
2002 Modality-dependent Aspects of Sign Language Production: Evidence of Slips of the
Hands and Their Repairs in German Sign Language. In: Meier, Richard/Cormier,
Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Lan-
guages. Cambridge: Cambridge University Press, 112⫺142.
Houston, Derek M./Jusczyk, Peter W./Kuijpers, Cecile/Coolen, Riet/Cutler, Anne
2000 Cross-language Word Segmentation by 9-month Olds. In: Psychonomic Bulletin and
Review 7, 504⫺509.
Hulst, Harry van der
1993 Units in the Analysis of Signs. In: Phonology 10(2), 209⫺241.
Hulst, Harry van der
1995 The Composition of Handshapes. In: University of Trondheim Working Papers in Lin-
guistics 23, 1⫺18.
Hulst, Harry van der
2000 Modularity and Modality in Phonology. In: Burton-Roberts, Noel/Carr, Philip/Do-
cherty, Gerard (eds.), Phonological Knowledge: Its Nature and Status. Oxford: Oxford
University Press, 207⫺244.
Hyman, Larry
1985 A Theory of Phonological Weight. Dordrecht: Foris.
Jackendoff, Ray
1996 Foundations of Language. Oxford: Oxford University Press.
Jantunen, Tommi
2007 Tavu Suomalaisessa Viittomakielessä. [The Syllable in Finnish Sign Language; with
English abstract]. In: Puhe ja kieli 27, 109⫺126.
Jantunen, Tommi/Takkinen, Ritva
2010 Syllable Structure in Sign Language Phonology. In: Brentari, Diane (ed.), Sign Lan-
guages: A Cambridge Language Survey. Cambridge: Cambridge University Press,
312⫺331.
Jusczyk, Peter W./Cutler, Anne/Redanz, Nancy J.
1993 Preference for the Predominant Stress Patterns of English Words. In: Child Develop-
ment 64, 675⫺687.
Jusczyk, Peter W./Hohne, Elizabeth/Bauman, Angela
1999 Infants’ Sensitivity to Allophonic Cues for Word Segmentation. In: Perception and Psy-
chophysics 61, 1465⫺1473.
Kegl, Judy
1985 Locative Relations in ASL Word Formation, Syntax and Discourse. PhD Dissertation,
MIT.
Kegl, Judy/Senghas, Anne/Coppola, Marie
1999 Creation through Contact: Sign Language Emergence and Sign Language Change in
Nicaragua. In: DeGraff, Michael (ed.), Language Creation and Language Change. Cam-
bridge, MA: MIT Press, 179⫺238.
Klima, Edward/Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kooij, Els van der
2002 Phonological Categories in Sign Language of the Netherlands: The Role of Phonetic
Implementation and Iconicity. Utrecht: LOT (Netherlands Graduate School of Linguis-
tics).
Liddell, Scott
1984 think and believe: Sequentiality in American Sign Language. In: Language 60, 372⫺392.
Liddell, Scott/Johnson, Robert
1989 American Sign Language: The Phonological Base. In: Sign Language Studies 64, 197⫺
277.
Locke, John L.
1995 The Child’s Path to Spoken Language. Cambridge, MA: Harvard University Press.
Mandel, Mark A.
1981 Phonotactics and Morphophonology in American Sign Language. PhD Dissertation,
University of California, Berkeley.
Mathur, Gaurav
2000 Verb Agreement as Alignment in Signed Languages. PhD Dissertation, Department of
Linguistics and Philosophy, MIT.
Mathur, Gaurav/Rathmann, Christian
2006 Variability in Verbal Agreement Forms Across Four Sign Languages. In: Whalen,
Douglas H./Goldstein, Louis/Best, Catherine (eds.), Papers from Laboratory Phonology
VIII: Varieties of Phonological Competence. The Hague: Mouton, 285⫺314.
Mathur, Gaurav/Rathmann, Christian
2010 Verb Agreement in Sign Language Morphology. In: Brentari, Diane (ed.), Sign Language:
A Cambridge Language Survey. Cambridge: Cambridge University Press, 173⫺196.
McNeill, David
2005 Language and Gesture. Cambridge: Cambridge University Press.
Meir, Irit
1998 Thematic Structure and Verb Agreement in Israeli Sign Language. PhD Dissertation,
The Hebrew University of Jerusalem.
Meir, Irit
2002 A Cross-modality Perspective on Verb Agreement. In: Natural Language and Linguistic
Theory 20(2), 413⫺450.
Meir, Irit/Padden, Carol/Aronoff, Mark/Sandler, Wendy
2007 Body as Subject. In: Journal of Linguistics 43, 531⫺563.
Miller, Christopher
1996 Phonologie de la Langue des Signes Québecoise: Structure Simultanée et Axe Temporel.
PhD Dissertation, Linguistics Department, Université du Québec à Montréal.
Myers, Scott
1987 Tone and the Structure of Words in Shona. PhD Dissertation, Linguistics Department,
University of Massachusetts.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert
2000 The Syntax of American Sign Language. Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Okrent, Arika
2002 A Modality-free Notion of Gesture and How It Can Help Us with the Morpheme vs.
Gesture Question in Sign Language Linguistics. In: Meier, Richard/Cormier, Kearsy/
Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages.
Cambridge: Cambridge University Press, 175⫺198.
Padden, Carol
1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation,
University of California, San Diego [published 1988, New York: Garland Press].
Padden, Carol/Meir, Irit/Aronoff, Mark/Sandler, Wendy
2010 The Grammar of Space in Two New Sign Languages. In: Brentari, Diane (ed.), Sign Lan-
guage: A Cambridge Language Survey. Cambridge: Cambridge University Press, 570⫺592.
Peirce, Charles Sanders
1932 [1902] The Icon, Index, and Symbol. In: Hartshorne, Charles/Weiss, Paul (eds.), Collected Papers of Charles Sanders Peirce (vol. 2). Cambridge, MA: Harvard University Press, 156⫺173.
Perlmutter, David
1992 Sonority and Syllable Structure in American Sign Language. In: Linguistic Inquiry 23,
407⫺442.
Petitto, Laura A./Marentette, Paula
1991 Babbling in the Manual Mode: Evidence for the Ontogeny of Language. In: Science
251, 1493⫺1496.
Petitto, Laura A.
2000 On the Biological Foundations of Human Language. In: Emmorey, Karen/Lane, Harlan
(eds.), The Signs of Language Revisited: An Anthology in Honor of Ursula Bellugi and
Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 447⫺471.
Poizner, Howard/Bellugi, Ursula/Tweney, Ryan D.
1981 Processing of Formational, Semantic, and Iconic Information in American Sign Language.
In: Journal of Experimental Psychology: Human Perception and Performance 7, 1146⫺1159.
Russo, Tommaso
2005 A Cross-cultural, Cross-linguistic Analysis of Metaphors in Two Italian Sign Language
Registers. In: Sign Language Studies 5, 333⫺359.
Sagey, Elizabeth
1986 The Representation of Features and Relations in Nonlinear Phonology. PhD Disserta-
tion, MIT [published 1990, New York: Garland Press].
Sandler, Wendy
1989 Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign
Language. Dordrecht: Foris Publications.
Sandler, Wendy
1993 A Sonority Cycle in American Sign Language. In: Phonology 10(2), 243⫺279.
Sandler, Wendy/Aronoff, Mark/Meir, Irit/Padden, Carol
2011 The Gradual Emergence of Phonological Form in a New Language. In: Natural Lan-
guage and Linguistic Theory 29(2), 503⫺543.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Sandler, Wendy/Meir, Irit/Padden, Carol/Aronoff, Mark
2005 The Emergence of Grammar in a New Sign Language. In: Proceedings of the National
Academy of Sciences 102(7), 2661⫺2665.
Senghas, Anne
1995 Children’s Contribution to the Birth of Nicaraguan Sign Language. PhD Dissertation,
MIT.
Senghas, Anne/Coppola, Marie
2001 Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial
Grammar. In: Psychological Science 12(4), 323⫺328.
Shay, Robin
2002 Grammaticalization and Lexicalization: Analysis of Fingerspelling. MA Thesis, Purdue
University, West Lafayette, IN.
Shintel, Hadas/Nussbaum, Howard/Okrent, Arika
2006 Analog Acoustic Expression in Speech Communication. In: Journal of Memory and
Language 55(2), 167⫺177.
Shintel, Hadas/Nussbaum, Howard
2007 The Sound of Motion in Spoken Language: Visual Information Conveyed in Acoustic
Properties of Speech. In: Cognition 105(3), 681⫺690.
Singleton, Jenny/Morford, Jill/Goldin-Meadow, Susan
1993 Once is not Enough: Standards of Well-formedness in Manual Communication Created
over Three Different Timespans. In: Language 69, 683⫺715.
Stokoe, William
1960 Sign Language Structure: An Outline of the Visual Communication Systems of the
American Deaf. In: Studies in Linguistics, Occasional Papers 8. Silver Spring, MD: Lin-
stok Press.
Stokoe, William/Casterline, Dorothy/Croneberg, Carl
1965 A Dictionary of American Sign Language on Linguistic Principles. Silver Spring, MD:
Linstok Press.
Supalla, Ted/Newport, Elissa
1978 How Many Seats in a Chair? The Derivation of Nouns and Verbs in American Sign
Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language
Research. New York: Academic Press, 91⫺132.
Supalla, Ted
1982 Structure and Acquisition of Verbs of Motion and Location in American Sign Language.
PhD Dissertation, University of California, San Diego, CA.
Supalla, Ted
2004 The Validity of the Gallaudet Lecture Films. In: Sign Language Studies 4, 261⫺292.
Taub, Sarah
2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cam-
bridge: Cambridge University Press.
Thompson, Robin/Emmorey, Karen/Gollan, Tamar H.
2005 “Tip of the Fingers” Experiences by Deaf Signers: Insights into the Organization of a
Sign-based Lexicon. In: Psychological Science 16(11), 856⫺860.
Thompson, Robin/Emmorey, Karen/Kluender, Robert
2006 The Relationship Between Eye Gaze and Verb Agreement in American Sign Language:
An Eye-tracking Study. In: Natural Language and Linguistic Theory 24, 571⫺604.
Uyechi, Linda
1995 The Geometry of Visual Phonology. PhD Dissertation, Stanford University.
Valli, Clayton/Lucas, Ceil
1992 The Linguistic Structure of American Sign Language. Washington, DC: Gallaudet Uni-
versity Press.
Vroomen, Jean/Tuomainen, Jurki/de Gelder, Beatrice
1998 The Roles of Word Stress and Vowel Harmony in Speech Segmentation. In: Journal of
Memory and Language 38, 133⫺149.
Wilbur, Ronnie
2008 Complex Predicates Involving Events, Time and Aspect: Is this Why Sign Languages
Look so Similar? In: Quer, Josep (ed.), Signs of the Time: Selected Papers from TISLR
2004. Hamburg: Signum, 219⫺250.
Wilbur, Ronnie
2010 The Event Visibility Hypothesis. In: Brentari, Diane (ed.), Sign Languages: A Cam-
bridge Language Survey. Cambridge, MA: Cambridge University Press, 355⫺380.
Wilcox, Sherman/Rossini, Paolo/Pizzuto, Elena
2010 Grammaticalization in Sign Languages. In: Brentari, Diane (ed.), Sign Languages: A
Cambridge Language Survey. Cambridge: Cambridge University Press, 332⫺354.
Wilcox, Phyllis
2001 Metaphor in American Sign Language. Washington, DC: Gallaudet University Press.

Diane Brentari, Chicago, Illinois (USA)


4. Visual prosody
1. Introduction
2. Prosodic constituents
3. Intonation
4. Prominence
5. Residual issues
6. Conclusion
7. Literature

Abstract
Prosody is the part of language that determines how we say what we say. By manipulat-
ing timing, prominence, and intonation, we separate constituents from one another and
indicate ways in which constituents are related to one another. Prosody enables us to
emphasize certain parts of an utterance, and to signal whether the utterance is an asser-
tion or a question, whether it relies on shared knowledge, and other pragmatic informa-
tion. This chapter demonstrates how sign languages organize a range of available articu-
lators ⫺ the two hands, parts of the face, the head and body ⫺ into a linguistic system
of prosody. It takes the position that prosody and syntax are separate, interacting compo-
nents of the grammar in sign languages as in spoken languages. The article also shows
that prosody is encoded by both manual and non-manual articulators, and that all of
these articulators also subserve other, non-prosodic functions in sign language grammar.
This state of affairs contradicts the common assumption that ‘non-manuals’ constitute a
natural class in the grammar.

1. Introduction
It’s not just what you say, it’s how you say it. Well understood by actors, this simple
truth is less obvious to most people, though it is equally important to all of us in
everyday communication through language. The ‘how’ of what we say ⫺ the division
of our utterances into rhythmic chunks, the relative emphasis placed on parts of them,
and the meaningful modulation of the signal through intonation ⫺ all make our linguis-
tic interactions interpretable. Without this richly structured delivery system, it is quite
likely that we would have difficulty communicating at all. The importance of this com-
ponent of the linguistic system, called prosody, is often overlooked, even by linguists,
because of our reliance on the written word in our analysis of language data. But the
written word is only a shorthand code for representing language. Apart from a few
punctuation conventions, it is missing the crucial contribution to language made by
prosody.
Writing systems for sign languages are not widely used, and this may explain why
researchers began paying attention to the prosodic system quite early in the history of
the field ⫺ though most of them didn’t call it that (cf. also chapter 43, Transcription).
Specifically, they noticed that different types of utterances in American Sign Language
(ASL), such as questions, topics, and relative clauses, were consistently characterized
by particular configurations of the face, head, and body, accompanying the signs made
by the hands (Liddell 1978, 1980; Baker/Padden 1978). From that time on, descriptions
of non-manual markers have made their way into nearly all linguistic studies of many
sign languages, so important are they considered to be for the interpretation of the
data at all levels of structure. For space reasons, this overview refers only to research
contributing to a theory of sign language prosody, and must omit reference to many
studies on non-manuals in ASL and other sign languages that do not deal with prosody.
The early work on signals of the kind to be discussed here did not in fact attribute
them to prosody. Liddell’s groundbreaking observations led him to argue that certain
non-manual articulations correspond to syntactic markers of the sentence and clause
types with which they co-occur, and, in the case of relative clauses, that the markers
cue embedded sentences in ASL. This was welcome news, providing the first evidence
for the existence of embedded sentences in the language, a claim that was later sup-
ported by rigorous syntactic tests (starting with Padden (1988), cf. also chapter 14 on
sentence types and chapter 16 on coordination and subordination).
The pragmatic function of some non-manual markers in ASL was pointed out early
on by Coulter (1979); see also Janzen/Shaffer (2002) for ASL, Engberg-Pedersen
(1990) for Danish Sign Language (DSL) and chapter 21 on information structure.
Eventually, linguists began to compare some of these functions, particularly articula-
tions of the eyes and brows, to those of intonation (see Johnston (1989) for Australian
Sign Language (Auslan); Woll (1981) and Deuchar (1984) for British Sign Language
(BSL); Reilly/McIntire/Bellugi (1990) and Wilbur (1994) for ASL; Nespor/Sandler
(1999) and Sandler (1999a,b) for Israeli Sign Language (Israeli SL)), and this compari-
son is echoed here.
If facial articulations correspond to intonation, then they cannot be adequately un-
derstood independently. Instead, they participate in a broader prosodic system, one
that also includes chunking the words into units denoted by timing, conveyed by the
hands. This means that the prosodic system is not solely non-manual, and indeed ‘non-
manuals’ do not constitute a coherent linguistic category. Instead, prosody involves
both manual and non-manual signals, and, conversely, both manual and non-manual
signals serve other components of the grammar apart from prosody.
As in any new field, the study of prosody in sign language is characterized by differ-
ences among investigators in assumptions, methods, and analyses, differences which
may confuse even the savviest of scholars. Finding the common ground, identifying the
differences, and, where possible, choosing between different analyses and approaches
are all equally important, and it is that difficult endeavor that this chapter seeks to
elucidate. Relying mainly on data from ASL and Israeli SL, these pages focus on stud-
ies that tie particular non-manual and manual articulations specifically to prosody. Sec-
tion 2 deals with the division of utterances into rhythmic constituents in a prosodic
hierarchy. Intonation gives added meaning to these constituents, and its temporal align-
ment marks constituent boundaries, as section 3 explains. The third ingredient of pros-
ody is phrase level prominence or stress, described in section 4. Each of these sections
begins with a brief description of the basic characteristics of that level of structure in
spoken language, as context.
No description is without a theory behind it, and the present overview is no excep-
tion. A model of sign language prosody emerges which carves up the territory into the
three subcomponents, rhythm (or timing), intonation, and stress, and describes particu-
lar phonetic cues attributed to each of them. The general picture is this: rhythmic
and temporal structure are conveyed primarily by the hands, while the equivalent of
intonation is articulated primarily by the face. Prominence is one of the indicators of
rhythm and, as such, also relies on features associated with the manual articulators
that convey the words of the text, but it is also enhanced by leans of the body.
Not all non-manual markers are prosodic, as explained above, and several examples
of non-manual articulations with different or unresolved status are noted in section 5.1.
The bulk of the chapter shows ways in which sign language prosody is similar to that
of spoken language. But clearly that is not the whole story, as the physical transmission
systems are so different in the two modalities. The issue of the relation between the
linguistic and physical systems is broached in section 5.2, where the use of visual signals
with spoken language prosody is also touched upon. Section 6 is a summary and conclu-
sion, highlighting areas for future research.

2. Prosodic constituents

The utterances of language are divided into constituents denoted by timing. Prosodic
constituents are hierarchically organized, each with its own phonetic and phonological
properties, and interacting with other components of the grammar in different ways.
First, prosodic constituents in spoken language are described, followed by a characteri-
zation of their sign language counterparts.

2.1. Prosodic constituents in spoken language

Much of the literature on prosody in spoken language adopts a hierarchy of prosodic
constituents, shown in part in example (1) (adapted from Nespor/Vogel 1986).

(1) mora > syllable > prosodic word > phonological phrase > intonational phrase
> phonological utterance

There is clearly a correspondence between prosodic and syntactic constituents:
syntactic phrases such as noun phrases (corresponding roughly to phonological
phrases) and clauses (corresponding roughly to intonational phrases), and some theo-
ries propose that phonological and intonational phrases are projected from syntactic
constituents (Selkirk 1984, 1995; Nespor/Vogel 1986). In one possible prosodic render-
ing of the sentence shown in example (2), adapted from a sentence in Nespor and
Vogel (1986), the parenthetical sentence it is said forms its own intonational phrase
constituent (labeled with an ‘I’ subscript), resulting in a sentence with two major breaks
separating three intonational phrases, (a) The giant panda, (b) it is said, and (c) eats
only bamboo in its natural habitat. The last intonational phrase is divided into two less
salient but still discrete constituents called phonological phrases (labeled with a ‘P’
subscript), eats only bamboo (a verb phrase), and in its natural habitat (a prepositional
phrase). In this example, each prosodic constituent corresponds to a syntactic
constituent.

(2) [[The giant panda]P]I [[it is said]P]I [[eats only bamboo]P [in its natural
habitat]P]I
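The nested bracketing in (2) can be pictured as a small data structure. The following
schematic sketch (not part of the original analysis; the representation and all names
are invented for exposition) encodes the prosodic parse in (2) and reads off the
phonological-phrase-final words, the locus of phrase-final boundary cues:

utterance = [                                   # each pair: (label, daughters)
    ("I", [("P", ["The", "giant", "panda"])]),  # I = intonational phrase
    ("I", [("P", ["it", "is", "said"])]),       # P = phonological phrase
    ("I", [("P", ["eats", "only", "bamboo"]),
           ("P", ["in", "its", "natural", "habitat"])]),
]

def phrase_final_words(parse):
    # The last word of each phonological phrase is where phrase-final
    # cues such as added duration are expected.
    return [words[-1] for _, phrases in parse for _, words in phrases]

print(phrase_final_words(utterance))  # ['panda', 'said', 'bamboo', 'habitat']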

Prosodic phrasing can vary and undergo restructuring, depending on such factors as
rate of speech, size of constituent (Nespor/Vogel 1986), and semantic reasons related to
interpretation (Gussenhoven 2004). For these and other reasons, prosodic and syntactic
constituents are not always isomorphic ⫺ they don’t always match up (Bolinger 1989).
Example (3) shows the syntactic constituency of part of the children’s story The House
that Jack Built, and (4) shows that the prosodic constituent structure is different.

(3) syntactic constituents: This is [the cat that ate [the rat that ate [the cheese…
(4) prosodic constituents: This is the cat [that ate the rat [that ate the cheese…

We see such mismatches at the level of the word as well. In the sentence, Robbie’s
been getting on my nerves, Robbie’s is one prosodic word (also called a phonological
word) organized around a single main word stress, but two morphosyntactic words,
Robbie and has. The fact that prosody and (morpho)syntax are not isomorphic motivates
the claim that prosody is a separate component of the grammar. Specific arguments
against subsuming particular intonational markers within the syntactic component are
offered in Sandler and Lillo-Martin (2006) and further developed in Sandler (2011b).
There is evidence in the literature for the integrity of each of the constituents in
the hierarchy. Apart from phonetic cues associated with them, certain phonological
rules require particular constituents as their domain. We will look at only one example
of a rule of this sort in spoken language, at the level of the phonological phrase con-
stituent (also called the intermediate phrase). The boundary of this constituent may
be marked phonetically by timing cues such as added duration, sometimes a brief
pause, and a boundary tone. The example, French liaison, occurs within phonological
phrases but not across phonological phrase boundaries (Selkirk 1984; Nespor/Vogel
1986). The [s] in les and the [t] in sont are pronounced when followed by a word
beginning with a vowel in the same phonological phrase (indicated by carets in (5)),
but the [s] in allés is not pronounced, though followed by a word consisting of a vowel,
because it is blocked by a phonological phrase boundary (indicated by a double slash).

(5) [Les^enfants]P [sont^allés]P // à l’école. [French]
‘The children went to school.’

By respecting the phonological phrase boundary, such processes contribute to the tem-
poral patterns of speech, and provide evidence for the existence of the prosodic cat-
egory ‘phonological phrase’ within the prosodic hierarchy. Other rules respect prosodic
constituent boundaries at different levels of the hierarchy, such as the intonational
phrase or the phonological utterance (Nespor/Vogel 1986).
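The domain condition on such rules can be stated schematically: a sandhi process links
adjacent words only when no phonological phrase boundary intervenes. The sketch below
is a simplified illustration of the liaison pattern in (5); the representation (each
word paired with its latent final consonant, if any) is invented for exposition and is
not a serious model of French phonology:

VOWELS = set("aeéiouà")

def mark_liaison(phrases):
    # Each inner list is one phonological phrase; '//' marks the phrase
    # boundaries, which block liaison, and '^' marks a liaison link.
    chunks = []
    for phrase in phrases:
        linked = [phrase[0][0]]
        for i in range(1, len(phrase)):
            word = phrase[i][0]
            latent = phrase[i - 1][1]  # latent consonant of preceding word
            linked.append(("^" if latent and word[0] in VOWELS else " ") + word)
        chunks.append("".join(linked))
    return " // ".join(chunks)

sentence = [[("les", "s"), ("enfants", None)],
            [("sont", "t"), ("allés", "s")],   # the [s] of 'allés' cannot link:
            [("à", None), ("l’école", None)]]  # 'à' is in the next phrase
print(mark_liaison(sentence))  # les^enfants // sont^allés // à l’école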
Some clarification of the role that such processes take in our understanding of
prosody is called for. The prosodic constituents are determined on the basis of their
syntactic and/or semantic coherence together with the phonetic marking typically
found at the relevant level of structure. Certain postlexical phonological processes,
such as liaison and assimilations across word boundaries, may apply within a domain
so determined. That is, their application is restricted by the domain boundary ⫺ they
do not cross the boundary. Such processes, which may be optional, are not treated as
markers of the boundary ⫺ it is phonetic cues such as phrase-final lengthening and
unmarked prominence patterns that have that role. Rather, the spreading/assimilation
rules are seen as providing further evidence for the existence of the boundaries, which
themselves are determined on independent grounds.
In sum, prosodic constituents are related to syntactic ones but are not always coex-
tensive with them; they are marked by particular phonetic cues; and their boundaries
may form the domain of phonological rules, such as assimilation (external sandhi).

2.2. Prosodic constituents in sign language

Much has been written about the sign language syllable; suffice it to say that there is
such a thing, and that it is characterized by a single movement or more than one type
of movement occurring simultaneously (Coulter 1978; Liddell/Johnson 1989; Sandler
1989, 2012; Brentari 1990, 1998; Perlmutter 1992; Wilbur 1993, 2011 and chapter 3,
Phonology). This movement can be a movement of the hand from one place to another,
movement of the fingers, movement at the wrist, or some simultaneous combination
of these. The words of sign language are typically monosyllabic (see Sandler/Lillo-
Martin 2006, chapter 14 and references cited there). However, the word and the sylla-
ble are distinguishable; novel compounds are disyllabic words, for example. But when
two words are joined, through lexicalization of compounds or cliticization, they may
reduce to the optimal monosyllabic form (Sandler 1993, 1999a). In other words, signs
prefer to be monosyllabic.
Figure 4.1 shows how Israeli SL pronouns may cliticize to preceding hosts at the
ends of phrases, merging two morphosyntactic words, each a separate syllable in cita-
tion form, to a single syllable. Under this type of cliticization, called coalescence (San-
dler 1999a), the non-dominant hand articulates only the monosyllabic host sign, shop,
while the dominant hand simultaneously articulates the host and clitic in reduced form
(shop-there), superimposed on the same syllable. This is a type of non-isomorphism
between morphosyntactic and prosodic structure: two lexical words form one prosodic
word. It is comparable to Robbie has / Robbie’s in English.
A study of the prosodic phonology of Israeli Sign Language found evidence for
phonological and intonational phrases in that language (Nespor/Sandler 1999; Sandler
1999b, 2006). Phonological phrases are identified in this treatment on syntactic and
phonetic grounds. Phonetically, the final boundary of a phonological phrase is charac-
terized by hold or reiteration of the last sign in the phrase, or by a pause after it. An optional
phonological process affecting the non-dominant hand provides evidence for the pho-
nological phrase constituent. The process is a spreading rule, called Non-dominant
Hand Spread (NHS), which may be triggered by two-handed signs. In this process, the
non-dominant hand, configured and oriented as in the triggering sign, is present
(though static) in the signing signal while the dominant hand signs the rest of the signs
in the phrase.
Fig. 4.1: Citation forms of shop, there, and the cliticized form shop-there

The domain of the rule is the phonological phrase: if the process occurs, the spread
stops at the phonological phrase boundary, like liaison in French. Spreading of the
non-dominant hand was first noted by Liddell and Johnson (1986) in their treatment
of ASL compounds, and this spreading occurs in Israeli SL compounds uttered in
isolation as well. However, since compounds in isolation always comprise their own
phonological phrases, a simpler analysis (if ASL is like Israeli SL in this regard) is that
NHS is a post-lexical phonological process whose domain is the phonological phrase.
Unlike French liaison, this rule does not involve sequential segments. Rather, the
spread of the non-dominant hand from the triggering two-handed sign is simultaneous
with the signing of other words by the dominant hand. Figure 4.2 illustrates NHS in a
sentence meaning, ‘I told him to bake a tasty cake, one for me and one for my sister’.
Its division into phonological and intonational phrases is as follows: [[index1 tell-
him]P [bake cake]P [tasty]P]I [[one for-me]P [one for-sister]P]I. In this sentence, the
configuration and location of the non-dominant hand from the sign bake spreads to
the end of the phonological phrase by remaining in the same configuration as in the
source sign, bake, throughout the next sign, cake, which is a one-handed sign. The end
of the phonological phrase is marked by a hold ⫺ holding the hand in position at the
end of the last sign. The signs on either side of this boundary, him and tasty (not
shown here) are not affected by NHS.

[bake cake]P
Fig. 4.2: Non-dominant Hand Spread from bake to cake in the same phonological phrase
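The logic of NHS can also be sketched schematically, parallel to the liaison sketch
above. In the following illustrative fragment (the representation and function names
are invented, and only forward spread within the phrase is modeled), the non-dominant
hand (H2) configuration of a two-handed sign persists over subsequent signs and resets
at the phonological phrase boundary:

def apply_nhs(phrase):
    # phrase: list of (sign, is_two_handed). Returns, for each sign, the
    # H2 configuration visible while that sign is articulated.
    h2, surfaced = None, []
    for sign, two_handed in phrase:
        if two_handed:
            h2 = sign                  # H2 adopts this sign's configuration
        surfaced.append((sign, h2))
    return surfaced                    # h2 starts fresh with each phrase

for phrase in [[("index1", False), ("tell-him", False)],
               [("bake", True), ("cake", False)],
               [("tasty", False)]]:
    print(apply_nhs(phrase))
# 'cake' surfaces with the H2 of 'bake'; 'him' and 'tasty' are unaffected.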
Similar but not identical spreading behavior is described in ASL (Brentari/Crossley
2002). As that study was based on somewhat different definitions and assumptions,
and used different methodology from the one described here, the results are difficult
to compare at this point. However, like the Nespor and Sandler study, the ASL study
does show spreading of the non-dominant hand beyond the domain of compound
words. Explorations of the role of the non-dominant hand in prosody are found in
Sandler (2006, 2012) and in Crasborn (2011).
The next constituent in the hierarchy is the intonational phrase, marked by a salient
break that delineates certain syntactically coherent elements, such as (fronted) topics,
extraposed elements, non-restrictive relative clauses, the two clauses of conditional
sentences, and parentheticals (Nespor/Vogel 1986). This prosodic constituent achieves
its salience from a number of phonetic cues (on ASL, see Wilbur 2000 and references
cited there). In Israeli SL, in addition to the phonetic cues contributed by the boundary
of the nested phonological phrase, intonational phrase boundaries are marked by
change in the position of the head or body, and a change across the board in all el-
ements of facial expression. An example is provided in section 3, Figures 4.4 and 4.5.
The juncture between intonational phrases in both ASL (Baker/Padden 1978; Wilbur
1994) and Israeli SL (Nespor/Sandler 1999) is often punctuated by an eyeblink.
The whole body participates in the phonetic realization of sign language prosody.
In her study of early and late learners of Swiss German Sign Language (SGSL), Boyes-
Braem (1999) found that both groups tend to evenly match the temporal duration of
two related constituents, while only the early learners produce rhythmic body sways
which mark larger chunks of discourse of particular types. The particular characteristics
and appearance of this body sway may be specific to SGSL. A recent study of prosody
in BSL also documents a higher level of prosodic structure above the intonational
phrase, and tests its perception experimentally (Fenlon 2010).
The validity of hierarchically organized prosodic constituents in a sign language is
lent credence by an early study of pauses in ASL (Grosjean/Lane 1977). The research-
ers found highly significant differences in the length of pauses (for them, pauses are
holds in final position), depending on the level of the constituents separated by them:
between sentences > between conjoined clauses > between NPs and VPs > within
NPs or VPs.

3. Intonation
The intonational phrase is so named because it is the domain of the most salient pitch
excursions of spoken language intonation. Let us see what this means in spoken lan-
guage, and examine more closely its sign language equivalent: the intonation of the
face.

3.1. Intonation in spoken language

Intonation can express a rich and subtle mélange of meanings in our utterances. Nuan-
ces of meaning such as additive, selective, routine, vocative, scathing, and many others
have been attributed to particular pitch contours or tunes, and the timing of these
contours can also influence the interpretation (Gussenhoven 1984, 2004). Example (6)
demonstrates how different intonational patterns can yield different interpretations of
a sentence (from Pierrehumbert/Hirschberg 1990). In the notation used in this exam-
ple, H and L stand for high and low tones, the asterisk means that the tone is accented,
and the percent symbol indicates the end of an Intonational Phrase. These examples
are distinguished by two tonal contrasts: the low tone on apple in example (a) versus
the high tone on apple in (b); and the intermediate high tone before the disjunction
or in (b), where there is no intermediate tone in (a).

(6) a. Do you want an apple or banana cake (an apple cake or a banana cake)
L* H* L L%
b. Do you want an apple or banana cake (fruit or cake)
H* H H* L L%

The examples illustrate a number of characteristics of prosody in spoken language.
First, we see here that part of the meaning of the utterance is conveyed by the intona-
tion and the way it aligns with the text. Second, at the phonological level, the atoms
of the intonational system are just two tones, L (low) and H (high), which, accented
or not accented, combine in different ways to form all the tunes of any language (Pier-
rehumbert 1980). Third, we see that pitch accents (asterisked) are assigned to promi-
nent elements within a prosodic constituent, and that intonational contours, made up
of several tones including a pitch accent and constituent boundary tones, like H* L L%
shown here, tend to cluster at the edges of prosodic constituents. Since prosodic struc-
ture is hierarchical and constituents are nested in larger constituents, the combined
tone patterns of phonological and intonational phrases produce the most salient excur-
sions at the Intonational Phrase boundary.
Pierrehumbert and Hirschberg argue that intonation is componentially structured ⫺
that particular meanings or pragmatic functions are associated with individual tones,
and putting them together produces a combined meaning (see also Hayes/Lahiri 1991).
We will see that this componentiality characterizes sign language intonation as well.
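Componentiality can be pictured as a small compositional lexicon of tones. In the
sketch below, the glosses are simplified placeholders rather than Pierrehumbert and
Hirschberg’s actual proposals, and the encoding is invented for exposition:

TONE_MEANING = {                 # placeholder glosses, for illustration only
    "H*": "accented item is salient/new",
    "L*": "accented item is not to be added as new",
    "H":  "more to come (continuation)",
    "L":  "no continuation signaled",
    "L%": "finality at the intonational phrase edge",
}

def tune_meaning(tones):
    # The meaning of a tune is the combination of its tones' meanings.
    return " + ".join(TONE_MEANING[t] for t in tones)

print(tune_meaning(["H*", "H", "H*", "L", "L%"]))  # the tune of (6b)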
Apart from favoring alignment with prosodic over syntactic constituents, a different
kind of non-isomorphism between syntax and prosody is revealed by intonation. While
certain tunes are typically identified with particular syntactic structures, pragmatic con-
text such as shared knowledge, expectation, or uncertainty often result in an atypical
intonational tune. The declarative, You’re going to Poughkeepsie, can get a questioning,
incredulous intonation, You’re going to Poughkeepsie?(!) if the announcement of such
a journey was unexpected. The reverse is also possible, as in rhetorical questions, which
have the syntactic form of questions but declarative intonation. Intonation, then, ex-
presses meaning, often determined by pragmatics. It can convey illocutionary force
(marking declarative, interrogative, or vocative expressions, for example), and other
discourse meanings like shared or expected information. Intonation can also mark
emotional affect, in a system that has been termed paralinguistic (Ladd 1996).

3.2. Intonation in sign language


The idea that facial expression and certain other non-manual markers function in sign
language like intonation in spoken languages has been in the air for a long time (e.g.,
Baker/Padden 1978; Reilly/McIntire/Bellugi 1990). On this intonation view, the facial
pattern occurring on an utterance is predicted by semantic and pragmatic factors such
as illocutionary force and other discourse relevant markers and relations, to which we
will return shortly. Other researchers, following Liddell’s (1980) early work on syntax,
treat these markers as explicitly syntactic elements that necessarily occur on structures
defined syntactically (and not pragmatically or semantically), structures such as yes-no
questions, wh-questions, topics, relative clauses (e.g., Petronio/Lillo-Martin 1997; Nei-
dle et al. 2000), and even non-wh A-bar positions (Wilbur/Patschke 1999). There is a
tension between these two possibilities that has only recently begun to be addressed.
The two views can be evaluated by investigating whether it is syntactic structure that
makes the best predictions about the specification and distribution of the relevant
markers, or whether they are best predicted by pragmatic/semantic factors. Proponents
of the latter view argue that particular markers, such as furrowed brows on wh-ques-
tions, cannot be considered part of the syntactic component in sign languages. Here,
only the pragmatic/semantic view is elaborated. See Sandler and Lillo-Martin (2006,
chapters 15 and 23) and Sandler (2011b) for detailed discussion of the two perspectives,
and Wilbur (2009) for an opposing view.
The motivations for viewing facial expression in particular as comparable to intona-
tion in spoken language are shown in (7):

(7) Facial expression as intonation
(a) It fulfills many of the same pragmatic functions as vocal intonation, such
as cuing different types of questions, continuation from one constituent to
another, and shared information.
(b) It is temporally aligned with prosodic constituents, in particular with into-
national phrases.
(c) It can be dissociated from syntactic properties of the text.

In their paper about the acquisition of conditional sentences in ASL, Reilly, McIntire,
and Bellugi (1990) explain that the following string has two possible meanings, disambi-
guated by particular non-manual markers: you insult jane, george angry. With neu-
tral non-manuals, it means ‘You insulted Jane and George got angry’. But the string
has the conditional meaning ‘If you insult Jane, George will be angry’ when the first
clause is characterized by the following markers: raised brows and head tilt throughout
the clause, with head thrust at its close and blink at the juncture between the two
clauses. There is an optional sign for if in ASL, but in this string, only prosody marks
the conditional. It is not unusual for conditionals to be marked by intonation alone
even in spoken languages. While English conditionals tend to have if in the first clause,
conditionals may be expressed syntactically as coordinated clauses (with and) in that
language ⫺ You walk out that door now and we’re through ⫺ or with no syntactic clue
at all and only intonation ⫺ He overcooks the steak, he’s finished in this restaurant.
The description by Reilly and colleagues clearly brings together the elements of
prosody by describing the facial expression and head position over the ‘if’ clause, as
well as the prosodic markers at the boundary between the two phrases. The facial
expression here is raised brows, compatible with Liddell’s (1980) observation that
markers of constituents such as these occur on the upper face, which he associates with
particular types of syntactic constituents. He distinguished these from articulations of
the lower face, which have adverbial or adjectival meanings, such as ‘with relaxation
and enjoyment’, to which we return in section 5.1.
The Israeli Sign Language prosody study investigates the temporal alignment of
intonational articulations with the temporal and other markers that set off prosodic
constituents. As explained in section 2.2 and illustrated in Figures 4.4 and 4.5 below,
in the sentences elicited for that study, all face articulations typically change at the
boundary between intonational phrases, and a change in head or body position also
occurs there.
There is a notable difference between the two modalities in the temporal distribu-
tion of intonation. Unlike intonational tunes of spoken language, which occur in a
sequence on individual syllables of stressed words and at prosodic constituent bounda-
ries, the facial intonation markers of sign language co-occur simultaneously and typi-
cally span the entire prosodic constituent. The commonality between the two modali-
ties is this: in both, the most salient intonational arrays are aligned with prosodic
boundaries.
Liddell’s early work on non-manuals described configurations involving certain ar-
ticulations, such as brow raise and head tilt, in a variety of different sentence types, as
noted above. Is it a coincidence that the same individual articulations show up in differ-
ent configurations? Later studies show that it is not. In an ASL study, forward head
or body leans are found to denote inclusion/involvement and affirmation, while leans
backward signify exclusion/non-involvement and negation (Wilbur/Patschke 1998). In
Israeli SL, the meanings of individual facial expressions are shown to combine to create
more complex expressions with complex meanings. For example, a combination of the
raised brows of yes/no questions and the squint of ‘shared information’ is found on
yes/no questions about shared information, such as Have you seen that movie we were
talking about? (Nespor/Sandler 1999). Similarly, the furrowed brow of wh-questions
combines with the shared information squint in wh-questions about shared informa-
tion, such as Where is that apartment we saw together? (Sandler 1999b, 2003). Each of
the components, furrowed brow, brow raise, and squint, pictured in Figure 4.3, contrib-
utes its own meaning to the complex whole in a componential system (cf. also chap-
ter 14 on sentence types, chapter 15 on negation and chapter 21 on information struc-
ture).
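The same compositional logic can be sketched for the facial array, directly parallel
to the tone sketch in section 3.1. The glosses below follow the text; the encoding
itself is invented for illustration:

FACIAL_MEANING = {
    "brow_raise":    "dependency/continuation (cf. high tone)",
    "furrowed_brow": "wh-question",
    "squint":        "retrieve shared, not readily accessible information",
}

def face_meaning(actions):
    # Co-occurring facial actions compose, like tones in a tune.
    return " + ".join(FACIAL_MEANING[a] for a in actions)

print(face_meaning(["brow_raise", "squint"]))     # yes/no question about
                                                  # shared information
print(face_meaning(["furrowed_brow", "squint"]))  # wh-question about
                                                  # shared information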
Fig. 4.3: Three common intonational facial elements: (a) furrowed brow (from a typical
wh-question), (b) brow raise (from a typical yes/no question), and (c) squint
(from a typical ‘shared information’ context).

A semantic/pragmatic explanation for facts such as these, one that links the mean-
ings or pragmatic intents of different constituents characterized by a particular facial
expression, was first proposed by Coulter (1979). This line of reasoning is developed
in detail for two Israeli SL intonational articulations, brow raise and squint (Dachkov-
sky 2005, 2008; Dachkovsky/Sandler 2009). Brow raise conveys a general meaning of
dependency and/or continuation, much like high tone in spoken language. In questions,
the continuation marked by brow raise leads to the answer, to be contributed by the
addressee. In conditionals, the continuation marked by brow raise leads from the if
clause to the consequent clause. Brow raise characterizes both yes/no questions and
conditionals in many sign languages. The facial action squint, common in Israeli SL
but not widely reported in other sign languages so far, instructs the interlocutor to
retrieve information that is shared but not readily accessible. It occurs on topics, rela-
tive clauses, and other structures. Put together with a brow raise in conditionals, the
squint conveys a meaning of an outcome that is not readily accessible because it is not
realized ⫺ a counterfactual conditional.
The occurrence of the combined expression, brow raise and squint, is reliable in
Israeli SL counterfactual conditionals (95% of the 39 counterfactual conditionals elic-
ited from five native Israeli SL subjects in the Dachkovsky study). An example is, If
the goalkeeper had caught the ball, they would have won the game. This sentence is
divided into two intonational phrases. Figure 4.4 shows the whole utterance, and Fig-
ure 4.5 is a close-up, showing the change of facial expression and head position on the
last sign of the first intonational phrase and the first sign of the second. Crucially, the
release or change of face and body actions occurs at the phrase boundary.

Fig. 4.4: Counterfactual conditional sentence with partial coding (from Dachkovsky/Sandler 2009)

A neutral conditional facial expression, extracted from the sentence, If he invites
me to the party, I will go, characterized by brow raise without squint, is shown in
Figure 4.6 for comparison.
In addition to non-isomorphism between morphosyntactic and prosodic constitu-
ency, demonstrated for Israeli SL in section 2.2, non-isomorphism is also found be-
tween syntactic structure and intonational meaning. For example, while wh-questions
typically occur with the furrowed brow facial expression shown in Figure 4.3, present
Fig. 4.5: Intonational phrase boundary

Fig. 4.6: Neutral conditional facial expression

in 92% of the wh-questions in the Dachkovsky (2005) study, other expressions are also
possible. Figure 4.7 shows a wh-question uttered in the following context: You went to
a party in Haifa and saw your friend Yoni there. If you had known he was going, you
would have asked for a ride. The question you ask him is, “Why didn’t you tell me you
were going to the party?” ⫺ syntactically a wh-question. As in spoken language, into-
nation can convey something about the (pragmatic) assumptions and the (emotional)
attitude of the speaker/signer that cannot be predicted by the syntax. Here we do not
see the furrowed brow (Figure 4.3a) typical of wh-questions. Instead, we see an expres-
sion that may be attributed to affect. As in spoken intonation (Ladd 1996), paralinguis-
tic and linguistic intonation are cued by the same articulators, and distinguishing them
is not always easy. See de Vos, van der Kooij, and Crasborn (2009) for a discussion of
the interaction between affective and linguistic intonation in Sign Language of the
Netherlands (NGT).
In sum, facial expression serves the semantic/pragmatic functions of intonation in
sign language; it is componentially structured; and the temporal distribution of linguis-
tic facial intonation is determined by prosodic constituency.
Fig. 4.7: Atypical facial expression on a wh-question

4. Prominence
In addition to timing and intonation, prominence or stress is important to the interpre-
tation of utterances. The sentences in (9) are distinguished only by where the promi-
nence is placed:

(9) a. Ron called Jeff an intellectual, and then he INSULTED him.
b. Ron called Jeff an intellectual, and then HE insulted HIM.

It is the pattern of prominence that tells us whether calling someone an intellectual is
an insult and it also tells us who insulted whom.

4.1. Prominence in spoken language

Typically, languages have default prominence patterns that place prominence either
toward the beginning or toward the end of prosodic constituents, depending on the
word order properties of the language, according to Nespor and Vogel (1982). In Eng-
lish, a head-complement language, the prominence is normally at the end: John gave a
gift to Mary. English is a ‘plastic’ intonation language (Vallduví 1992), allowing promi-
nence to be placed on different constituents if they are focused or stressed, as (9)
showed. The stress placement on each of the following sentences indicates that each
is an answer to a different question: John gave a gift to MARY (either default or with
Mary focused), John gave a GIFT to Mary, John GAVE a gift to Mary, or JOHN gave
a gift to Mary. The stress system of other languages, such as Catalan, is not plastic; instead
of roaming freely, the focused words move into the prominent position of the phrase,
which remains constant.

4.2. Prominence in sign language

How do sign languages mark prominence? In the Israeli SL prosody study, the manual
cues of pause, hold, or reiteration and increased duration and size (displacement) con-
sistently fall on the final sign in the intonational phrases of isolated sentences, and the
authors interpret these as markers of the default phrase-final prominence in Israeli SL.
As Israeli SL appears to be a head-complement language, this prominence pattern is
the predicted one.
A study of ASL using 3-D motion detection technology for measuring manual be-
havior determined that default prominence in ASL also falls at the ends of prosodic
constituents (Wilbur 1999). That study attempted to tease apart the effects of phrase
position from those of stress, and revealed that increased duration, peak velocity, and
displacement are found in final position, but that peak velocity alone correlates with
stress in that language. The author tried to dissociate stress from phrase-final promi-
nence using some admittedly unusual (though not ungrammatical) elicited sentences.
When stress was manipulated away from final position in this way, measurements indi-
cated that added duration still always occurred only phrase-finally, suggesting that du-
ration is a function of phrase position and not of stress. The author reports further that
ASL is a non-plastic intonation language, in which prominence does not tend to move
to focus particular parts of an utterance; instead the words or phrases typically move
into the final prominent position of the phrase or utterance.
Certain non-manual cues also play a role in marking prominence. Contrastive stress
in ASL is marked by body leans (Wilbur/Patschke 1998). In their study of focus in
NGT, van der Kooij, Crasborn, and Emmerik (2006) also found that signers use leans
(of the head, the body, or both) to mark contrastive stress, but that there is a tendency
to lean sideways rather than backward and forward in that language. The authors point
out that not only notions such as involvement and negation (Wilbur/Patschke 1998)
affect the direction of body leans in NGT. Pragmatic aspects of inter-signer interaction,
such as the direction in which the interlocutor leaned in the preceding utterance, must
also be taken into account in interpreting leans. Signers tend to lean the opposite way
from that of their addressee in the context-framing utterance, regardless of the seman-
tic content of the utterance, i.e., positive or negative. Just as pragmatic considerations
underlie prosodic marking of information that is old, new, or shared among interlocu-
tors, other pragmatic factors such as the inter-signer interaction described in the NGT
study must surely play a role in sign language prosody in general.

5. Residual issues

Two additional issues naturally emerge from this discussion, raised here both for com-
pleteness and as context for future research. The first is the issue of the inventory of
phonetic cues in the sign language prosodic system, and the second is the influence of
modality on prosody. Since the physical system of sign language transmission is so
different from that of spoken language, the first problem for sign language researchers
is to determine which phonetic cues are prosodic. This chapter attempts to make a
clear distinction between the terms ‘non-manuals’ and ‘prosodic markers’. The two are
not synonymous. For one thing, the hands are very much involved in prosody, as we
have seen. For another, not all non-manual markers are prosodic. Just as manual articu-
lations encode many grammatical functions, so too do non-manual articulations. This
means that neither ‘manuals’ nor ‘non-manuals’ constitutes a natural class in the gram-
mar. Discussion of how the articulatory space is divided up among different linguistic
systems in 5.1 underscores the physical differences between the channels of transmis-
sion for prosody in spoken and sign languages, which brings us to the second issue, in
5.2, the influence of modality on the form and organization of prosody.

5.1. Not all non-manual articulations are prosodic

Physical properties alone cannot distinguish prosodic units from other kinds of el-
ements in language. In spoken language, duration marks prosodic constituent bounda-
ries but can also make a phonemic contrast in some languages. Tones are the stuff of
which intonation is made, but in many languages, tone is also a contrastive lexical
feature. Word level stress is different from phrase level prominence. In order to deter-
mine to which component of the grammar a given articulation belongs, we must look
to function and distribution.
In sign language too, activation of the same articulator may serve a variety of gram-
matical functions (see also Pfau/Quer (2010) for discussion of the roles of non-manual
markers). Not all manual actions are lexical, and not all non-manual articulations are
prosodic. A useful working assumption is that a cue is prosodic if it corresponds to the
functions of prosodic cues known from spoken language, sketched briefly in sec-
tions 2.1, 3.1, and 4.1. A further test is whether the distribution of the cue in question
is determined by the domain of prosodic constituents, where these can be distinguished
from morpho-syntactic constituents.
It is clear that the main function of the hands in sign languages is to articulate the
lexical content, to pronounce the words. But we have seen here that as articulators
they also participate in the prosodic system, by modulating their behavior in accord-
ance with the temporal and stress patterns of utterances. Different phonological proc-
esses involving the non-dominant hand observe prosodic constituent boundaries at the
prosodic word and phonological phrase level, though in the lexicon, the non-dominant
hand is simply part of the phonological specification of a sign. The head and body
perform the prosodic functions of delineating constituency and marking prominence,
but they are also active in the syntax, in their role as a logophoric pronoun expressing
point of view (Lillo-Martin 1995).
Similarly, while articulations that are not manual ⫺ movements of the face, head,
and body ⫺ often play an important role in prosody, not all non-manual articulations
are prosodic. We will first consider two types of facial action, one of which may not
be prosodic at all, while the other, though prosodic, is not part of the grammar; it is
paralinguistic. We then turn to actions of the head and eyes.
Actions of the lower face convey adverbial or adjectival meaning in ASL (Liddell
1980), Israeli SL (Meir/Sandler 2008), and other sign languages. As a group, these may
differ semantically and semiotically from the actions of the upper face attributed to
intonation, and the way in which they align temporally with syntactic or prosodic con-
stituents has yet to be investigated. A range of other articulations are made by the
mouth. Borrowed mouthing from spoken language that accompanies signing may re-
spect the boundaries of the prosodic word in Israeli SL (Sandler 1999), similarly to the
way in which spread of the non-dominant hand respects phonological phrase bounda-
ries. But we do not yet have a clear picture of the range and distribution of mouth
action with respect to prosodic constituents of sign languages generally (see Boyes-
Braem/Sutton-Spence 2001).
Another type of facial action is affective or emotional facial expression. This system
uses (some of) the same articulators as linguistic facial expression, but has different
properties in terms of temporal distribution, number of articulators involved, and prag-
matic function (Baker-Shenk 1983; Dachkovsky 2005, 2010; de Vos/van der Kooij/Cras-
born 2009). It appears that some intonational facial configurations are affective, and
not part of the linguistic grammar, as is the case in spoken language (Ladd 1996).
Negative headshake is an example of a specific non-manual action whose role in
the grammar is not yet fully determined (cf. chapter 15 on negation). Sometimes attrib-
uted to prosody or intonation, this element is at least sometimes a non-linguistic ges-
ture, as it is for hearing speakers in the ambient culture. It may occur without any
signs, but it may also negate an utterance without a negative manual sign. A compara-
tive study of negation in German Sign Language and Catalan Sign Language indicates
that the distribution of the headshake varies from sign language to sign language (Pfau/
Quer 2007). The authors assume that the signal is part of the syntax. It is not yet clear
whether this signal has prosodic properties ⫺ or even whether it belongs to the same
grammatical component in different sign languages.
Eye gaze is also non-manual, but may not participate in the prosodic system. In
Israeli SL, we have found that this element does not perform any expressly prosodic
function, nor does it line up reliably with prosodic constituency. Researchers have
argued that gaze may perform non-linguistic functions such as turn-taking (Baker 1977)
or pointing (Sandler/Lillo-Martin 2006) and/or syntactic functions related to agreement
(see Neidle et al. (2000) and Thompson et al. (2006) for opposing views of gaze as
agreement, the latter an eye tracking study). The eyebrows and the upper and lower
eyelids participate in prosody, but the eyeballs have something else in mind.

5.2. Prosody in sign and spoken language

Sign language has more articulators to work with, and it seems to utilize all of them
in prosody. The brows, upper and lower eyelids, head and body position, timing and
prominence properties conveyed by the hands, and even the dual articulator, the non-
dominant hand, all participate. The availability of many independent articulators con-
spires with the capacities of the visual system to create a signal with a good deal of
simultaneous information. Prosody in spoken language also involves a more simultane-
ous layering of information than other aspects of language in that modality (hence the
term ‘suprasegmental’), yet it is still quite different in physical organization from that
of sign language.
Pitch contours of spoken intonation are transmitted by the same conduit as the
words of the text ⫺ vocal cord vibration. In sign language, intonation is carried by
articulations of the upper face while the text is conveyed by the hands. In addition,
the upper face has different articulators which may also move independently. This
independence of the articulators has one obvious result: different intonational ‘tones’
(such as brow raise and squint) can co-occur with one another in a simultaneous array
together with the whole constituent with which they are associated. Intonation in spoken
language is conveyed by a linear sequence of tones, most of which congregate at the
boundaries of intonational phrases. Do differences such as these result in greater flex-
ibility and range of expression in sign language prosody? Do they influence the gram-
matical organization of the system? These are intriguing questions for future research.
Also of interest is the use of facial signals by speakers, for example, of raised brows
to mark prominence (Swerts/Krahmer 2009), and of raised brows and head tilt to ac-
company questions (Srinivasan/Massaro 2003). In fact, there is experimental evidence
that the upper face has a special role in the visual prosody accompanying spoken
language, as it does in sign language (Swerts/Krahmer 2008). In sign languages, intona-
tion and prosodic constituency are systematically marked, and constitute a linguistic
system. Since it is possible to transmit spoken language effectively without visual cues
(on the telephone, to blind people, or in the dark), it is reasonable to surmise that
the visual prosody of spoken language is augmentative and paralinguistic. However,
empirical comparison of the patterning and role of visual prosody in sign and spoken
language has not yet been attempted.

6. Conclusion
Sign languages have rich prosodic systems, exploiting phonetic possibilities afforded by
their articulators: the face, the hands, the head, and the torso. Each of these articulators
participates in other grammatical components, and their prosodic status is identified
on semantic/pragmatic grounds as well as by the nature of the constituents with which
they are temporally aligned. Utterances are divided into constituents, marked mainly
by the action of the hands, and are modulated by intonation-like articulations, ex-
pressed mainly by the face. The prosodic system is non-isomorphic with syntax, al-
though it interacts with that level of structure, as it does with the phonological level,
in the form of rules such as Non-dominant Hand Spread.
The field is young, and much territory is uncharted. Some controversies are not yet
resolved, many of the facts are not yet known or confirmed, and the prosody of many
sign languages has not been studied at all. Similarly, all is far from settled in spoken
language research on prosody. Interesting theoretical issues that are the subject matter
of current prosodic research are waiting to be addressed in sign language inquiries
too ⫺ issues related to the nature and organization of the prosodic system, as well as
its interaction with syntax and other components of the grammar.
A key question for future research follows from the non-trivial differences in the
physical form of prosody in the spoken and sign modalities: which properties of pros-
ody are truly universal?

Acknowledgements: Research on prosody in Israeli Sign Language was supported by
grants from the Israeli Science Foundation. I also thank reviewers for helpful com-
ments on this chapter.

7. Literature
Baker, Charlotte
1977 Regulators and Turn-taking in American Sign Language Discourse. In: Friedman, Lynn
(ed.), On the Other Hand: New Perspectives on American Sign Language. New York:
Academic Press, 215⫺236.
Baker, Charlotte/Padden, Carol A.
1978 Focusing on the Non-manual Components of ASL. In: Siple, Patricia (ed.), Understand-
ing Language through Sign Language Research. New York: Academic Press, 27⫺57.
Baker-Shenk, Charlotte
1983 A Micro-analysis of the Non-manual Components of American Sign Language. PhD
Dissertation, University of California, Berkeley.
Beckman, Mary E./Pierrehumbert, Janet B.
1986 Intonational Structure in English and Japanese. In: Phonology Yearbook 3, 255⫺310.
Bolinger, Dwight
1989 Intonation and its Uses: Melody in Grammar and Discourse. Stanford, CA: Stanford
University Press.
Boyes Braem, Penny
1999 Rhythmic Temporal Patterns in the Signing of Deaf Early and Late Learners of Swiss
German Sign Language. In: Sandler, Wendy (ed.), Language and Speech (Special
Issue on Prosody in Spoken and Signed Languages 42(2/3)), 177⫺208.
Boyes Braem, Penny/Sutton-Spence, Rachel (eds.)
2001 The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Languages.
Hamburg: Signum.
Brentari, Diane
1990 Theoretical Foundations of American Sign Language Phonology. PhD Dissertation,
University of Chicago.
Brentari, Diane
1998 A Prosodic Model of Sign Language Morphology. Cambridge, MA: MIT Press.
Brentari, Diane/Crossley, Laurinda
2002 Prosody on the Hands and Face: Evidence from American Sign Language. In: Sign
Language and Linguistics 5(2), 105⫺130.
Coerts, Jane
1992 Non-manual Grammatical Markers: An Analysis of Interrogatives, Negations, and Topi-
calizations in Sign Language of the Netherlands. PhD Dissertation, University of Am-
sterdam.
Coulter, Geoffrey
1978 Raised Brows and Wrinkled Noses: The Grammatical Function of Facial Expression in
Relative Clauses and Related Constructions. In: Caccamise, Frank/Hicks, Doin (eds.),
American Sign Language in a Bilingual, Bicultural Context. Proceedings of the Second
National Symposium on Sign Language Research and Teaching. Silver Spring: NAD,
65⫺74.
Crasborn, Onno
2011 The Nondominant Hand. In: Oostendorp, Marc van/Ewen, Colin/Hume, Elizabeth/
Rice, Keren (eds.), The Blackwell Companion to Phonology. 5 Volumes. Oxford: Black-
well, 223⫺240.
Dachkovsky, Svetlana
2005 Facial Expression as Intonation in ISL: The Case of Conditionals. MA Thesis. University
of Haifa.
Dachkovsky, Svetlana
2008 Facial Expression as Intonation in Israeli Sign Language: The Case of Neutral and
Counterfactual Conditionals. In: Quer, Josep (ed.), Signs of the Time. Selected Papers
from TISLR 2004. Hamburg: Signum, 61⫺82.
Dachkovsky, Svetlana
2010 Affective and Grammatical Intonation in Israeli Sign Language. Manuscript University
of Haifa.
Dachkovsky, Svetlana/Sandler, Wendy
2009 Visual Intonation in the Prosody of a Sign Language. In: Language and Speech 52(2/
3), 287⫺314.
Deuchar, Margaret
1984 British Sign Language. London: Routledge & Kegan Paul.
Engberg-Pedersen, Elisabeth
1990 Pragmatics of Non-manual Behaviour in Danish Sign Language. In: Edmondson, Wil-
liam/Karlsson, Fred (eds.), SLR ’87: Papers from the Fourth International Symposium
on Sign Language Research. Hamburg: Signum, 121⫺128.
Fenlon, Jordan
2010 Seeing Sentence Boundaries: The Production and Perception of Visual Markers Signaling
Boundaries in Sign Languages. PhD Dissertation, University College London.
Fox, Anthony
2000 Prosodic Features and Prosodic Structure: The Phonology of Suprasegmentals. Oxford:
Oxford University Press.
Goldstein, Louis/Whalen, Douglas/Best, Catherine (eds.)
2006 Papers in Laboratory Phonology VIII. Berlin: Mouton de Gruyter.
Grosjean, Francois/Lane, Harlan
1977 The Perception of Rate in Spoken and Sign Languages. In: Perception and Psychophys-
ics 22, 408⫺413.
Gussenhoven, Carlos
1984 On the Grammar and Semantics of Sentence Accent. Dordrecht: Foris.
Gussenhoven, Carlos
2004 The Phonology of Tone and Intonation. Cambridge: Cambridge University Press.
Hayes, Bruce/Lahiri, Aditi
1991 Bengali Intonational Phonology. In: Natural Language and Linguistic Theory 9, 47⫺96.
Janzen, Terry/Shaffer, Barbara
2002 Gesture as the Substrate in the Process of ASL Grammaticization. In: Meier, Richard/
Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spo-
ken Languages. Cambridge: Cambridge University Press, 199⫺223.
Johnston, Trevor
1992 The Realization of the Linguistic Metafunctions in a Sign Language. In: Language Sci-
ences 14(4), 317⫺353.
Kooij, Els van der/Crasborn, Onno/Emmerik, Wim
2006 Explaining Prosodic Body Leans in Sign Language of the Netherlands: Pragmatics Re-
quired. In: Journal of Pragmatics 38, 1598⫺1614.
Ladd, Robert
1996 Intonational Phonology. Cambridge: Cambridge University Press.
Liddell, Scott K.
1978 Non-manual Signals and Relative Clauses in American Sign Language. In: Siple, Patri-
cia (ed.), Understanding Language through Sign Language Research. New York: Aca-
demic Press, 59⫺90.
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K./Johnson, Robert E.
1986 American Sign Language Compound Formation Processes, Lexicalization, and Phono-
logical Remnants. In: Natural Language and Linguistic Theory 8, 445⫺513.
Lillo-Martin, Diane
1995 The Point of View Predicate in American Sign Language. In: Emmorey, Karen/Reilly,
Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Erlbaum, 155⫺170.
Meir, Irit/Sandler, Wendy
2008 A Language in Space: The Story of Israeli Sign Language. Mahwah, NJ: Erlbaum.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G.
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Nespor, Marina/Vogel, Irene
1982 Prosodic Domains of External Sandhi Rules. In: Hulst, Harry van der/Smith, Norval
(eds.), The Structure of Phonological Representations. Dordrecht: Foris, 225⫺255.
Nespor, Marina/Vogel, Irene
1986 Prosodic Phonology. Dordrecht: Foris.
Nespor, Marina/Sandler, Wendy
1999 Prosody in Israeli Sign Language. In: Sandler, Wendy (ed.), Language and Speech (Spe-
cial Issue on Prosody in Spoken and Signed Languages 42(2/3)), 143⫺176.
Oostendorp, Marc van/Ewen, Colin/Hume, Elizabeth/Rice, Keren (eds.)
2011 The Blackwell Companion to Phonology. 5 Volumes. Oxford: Blackwell.
Perlmutter, David
1992 Sonority and Syllable Structure in American Sign Language. In: Linguistic Inquiry 23,
407⫺442.
Petronio, Karen/Lillo-Martin, Diane
1997 Wh-movement and the Position of Spec-CP: Evidence from American Sign Language.
In: Language 73(1), 18⫺57.
Pfau, Roland/Quer, Josep
2007 On the Syntax of Negation and Modals in Catalan Sign Language and German Sign
Language. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation:
Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 129⫺162.
Pfau, Roland/Quer, Josep
2010 Non-manuals: Their Prosodic and Grammatical Roles. In: Brentari, Diane (ed.), Sign
Languages. Cambridge: Cambridge University Press, 381⫺402.
Pierrehumbert, Janet/Hirschberg, Julia
1990 The Meaning of Intonational Contours in Interpretation of Discourse. In: Cohen, Philip
R./Morgan, Jerry/Pollack, Martha E. (eds.), Intentions in Communication. Cambridge,
MA: MIT Press, 271⫺311.
Pierrehumbert, Janet
1980 The Phonology and Phonetics of English Intonation. PhD Dissertation, MIT.
Reilly, Judy/McIntire, Marina/Bellugi, Ursula
1990 The Acquisition of Conditionals in American Sign Language: Grammaticized Facial
Expressions. In: Applied Psycholinguistics 11, 369⫺392.
Sandler, Wendy
1989 Phonological Representation of the Sign: Linearity and Non-linearity in American Sign
Language. Dordrecht: Foris.
Sandler, Wendy
1993 A Sonority Cycle in American Sign Language. In: Phonology 10, 243⫺279.
Sandler, Wendy
1999a Cliticization and Prosodic Words in a Sign Language. In: Hall, Tracy/Kleinhenz, Ursula
(eds.), Studies on the Phonological Word. Amsterdam: Benjamins, 223⫺254.
Sandler, Wendy
1999b The Medium and the Message: Prosodic Interpretation of Linguistic Content in Israeli
Sign Language. In: Sign Language and Linguistics 2, 187⫺216.
Sandler, Wendy
2006 Phonology, Phonetics, and the Non-dominant Hand. In: Goldstein, Louis/Whalen,
Douglas/Best, Catherine (eds.), Papers in Laboratory Phonology VIII. Berlin: Mouton
de Gruyter, 185⫺212.
Sandler, Wendy
2011a The Phonology of Movement in Sign Language. In: Oostendorp, Marc van/Ewen, Colin/
Hume, Elizabeth/Rice, Keren (eds.), The Blackwell Companion to Phonology. 5 Vol-
umes. Oxford: Blackwell, 577⫺603.
Sandler, Wendy
2011b Prosody and Syntax in Sign Languages. In: Transactions of the Philological Society
108(3), 298⫺328.
Sandler, Wendy
2012 The Phonological Organization of Sign Languages. In: Language and Linguistics Com-
pass 6(3), 162⫺182.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Selkirk, Elisabeth
1984 Phonology and Syntax: The Relation Between Sound and Structure. Cambridge, MA:
MIT Press.
Selkirk, Elisabeth
1995 Sentence Prosody: Intonation, Stress, and Phrasing. In: Goldsmith, John (ed.), The
Handbook of Phonological Theory. Cambridge, MA: Blackwell, 550⫺569.
Srinivasan, Ravindra/Massaro, Dominic
2003 Perceiving Prosody from the Face and Voice: Distinguishing Statements from Echoic
Questions in English. In: Language and Speech 46(1), 1⫺22.
Swerts, Marc/Krahmer, Emiel
2008 Facial Expression and Prosodic Prominence: Effects of Modality and Facial Area. In:
Journal of Phonetics 36(2), 219⫺238.
Swerts, Marc/Krahmer, Emiel
2009 Audiovisual Prosody: Introduction to the Special Issue. In: Language and Speech 52(2/
3), 129⫺135.
Thompson, Robin/Emmorey, Karen/Kluender, Robert
2006 The Relationship Between Eye Gaze and Agreement in American Sign Language: An
Eye-tracking Study. In: Natural Language and Linguistic Theory 24, 571⫺604.
Vallduví, Enric
1992 The Informational Component. New York: Garland.
Vos, Connie de/Kooij, Els van der/Crasborn, Onno
2009 Mixed Signals. Combining Linguistic and Affective Functions of Eye Brows in Ques-
tions in Sign Language of the Netherlands. In: Language and Speech, 52(2/3), 315⫺339.
Wilbur, Ronnie
1993 Syllables and Segments: Hold the Movement and Move the Holds! In: Coulter, Geof-
frey R. (ed.), Current Issues in ASL Phonology. New York: Academic Press, 135⫺168.
Wilbur, Ronnie
1994 Eyeblinks and ASL Phrase Structure. In: Sign Language Studies 84, 221⫺240.
Wilbur, Ronnie
1999 Stress in ASL: Empirical Evidence and Linguistic Issues. In: Language and Speech 42(2/
3), 229⫺250.
Wilbur, Ronnie
2000 Phonological and Prosodic Layering of Non-manuals in American Sign Language. In:
Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology
in Honor of Ursula Bellugi and Edward Klima. Mahwah, NJ: Erlbaum, 215⫺244.
Wilbur, Ronnie
2011 The Syllable in Sign Language. In: Oostendorp, Marc van/Ewen, Colin/Hume, Elizabeth/Rice, Keren (eds.), The Blackwell Companion to Phonology. 5 Volumes. Oxford: Blackwell, 1399⫺1344.
Wilbur, Ronnie/Patschke, Cynthia
1998 Body Leans and the Marking of Contrast in American Sign Language. In: Journal of Pragmatics 30, 275⫺303.
Wilbur, Ronnie/Patschke, Cynthia
1999 Syntactic Correlates of Brow Raise in ASL. In: Sign Language and Linguistics 2(1), 3⫺40.
Woll, Bencie
1981 Question Structure in British Sign Language. In: Woll, Bencie/Kyle, Jim G./Deuchar,
Margaret (eds.), Perspectives on British Sign Language and Deafness. London: Croom
Helm, 136⫺149.

Wendy Sandler, Haifa (Israel)


II. Morphology

5. Word classes and word formation


1. Introduction
2. The signed word
3. Sign language morphological processes
4. Word classes
5. Word formation
6. Literature

Abstract
This chapter deals with three aspects of words in sign languages: (i) the special nature
of the sub-lexical elements of signed words and the consequences for the relationship
between words; (ii) the classification of words into word classes; and (iii) the morpholog-
ical means for creating new words in the signed modality. It is shown that although
almost all of the structures and phenomena discussed here occur in spoken languages as
well, the visual-spatial modality has an impact on all three aspects in that sign languages
may show different preferences than spoken languages. Three central morphological
operations are discussed: compounding, affixation, and reduplication. Sign languages
endow these operations with flavors that are available only to manual-spatial languages, exploiting the existence of two major articulators and their ability to move in various spatial and temporal patterns. The result is a strong preference for simultaneous morphological structures in both inflectional and derivational processes.

1. Introduction
Words have to perform several ‘jobs’ in a language: they provide the users of that
language with means to refer to whatever concept the users want to express, be it an
entity, an idea, an event, or a property. Words also have to combine with each other
to allow users to convey information: to say something about something or someone.
In order to fulfill the first task, there must be ways to create new words as the need
arises to refer to new concepts. Regarding the second task, when combined to form
larger units, words should be able to perform different roles, such as arguments, predi-
cates, and modifiers. Different words may be specialized for particular roles, and lan-
guages may have means for creating words for specific roles.
Sign languages are natural languages produced in a physical modality different from
that of spoken languages. Both types of language have to perform the same communi-
cative functions with the same expressive capabilities, yet the physical means available
to each type of language vary greatly. Sign languages are produced by hands, body,
and face; they are transmitted through space, and perceived by the eyes. Spoken lan-
guages are produced by the speech organs, transmitted as sound waves, and are per-
ceived by the ears. Might these disparities make any difference to the nature of the
elements that make up each system? To their organization? To the processes they
undergo? Focusing on words, we ask whether words, the relationship between words,
and the means for creating new words are affected by the particular modality of the
language (see also chapter 25 on language and modality).
This chapter deals with three aspects of words in sign languages: the special nature
of the sub-lexical elements of signed words and the consequences for the relationship
between words; the classification of words into word classes; and the morphological
means for creating new words in the signed modality. The modality issue runs across
the entire chapter. In each section, I examine the ways in which modality affects the
linguistic structures and processes described.

2. The signed word


Sign languages have words, that is, conventionalized units of form-meaning corre-
spondence, like spoken languages. These units have psychological reality for their users
(Zeshan 2002). They are composed of sub-lexical units and are therefore characterized
by duality of patterning (Stokoe 1960). They are characterized by specific phonological
structures and are subject to certain phonological constraints (Sandler 1999; see chap-
ter 3, Phonology). Sign language words are usually referred to as signs, and we will
adopt this terminology here as well.
Obviously, signs differ from words in their physical instantiation. The physical dif-
ferences result in structural differences as well. Signs are much more simultaneously
organized than words (Stokoe 1960), and tend to be monosyllabic (Sandler 1999). But
signs differ from words in another important respect: they are much better at iconically
depicting the concepts they denote (see Taub 2001 and references cited there). Sign
languages make use of this capability. The lexicons of sign languages contain many
more iconic and partly iconic signs than those of spoken languages, since spoken lan-
guages are limited to acoustic iconicity. Iconicity results from the nature of the sub-
lexical elements building up a sign, which in turn has an effect on how signs are related
to each other.

2.1. The nature of sub-lexical units

One of the design features of human language is duality of patterning (Hockett 1960),
the existence of two levels of combinatorial structure, one combining meaningless el-
ements (phonemes) into meaningful elements, the other combining meaningful el-
ements (morphemes and words) into larger meaningful units. Sign languages are also
characterized by duality of patterning. Signs are not holistic units, but are made up of
specific formational units ⫺ hand configuration, movement, and location (Stokoe
1960). However, these formational units are in many cases not devoid of meaning.
Take the verb eat in Israeli Sign Language (Israeli SL) and other sign languages as
well, for example. The hand assumes a particular shape (G), moving toward the mouth
from a location in front of it, and executes this movement twice. ‘Eat’ means “to put
(food) in the mouth, chew if necessary, and swallow” (Webster’s New World Dictionary,
Third College Edition). The sign eat is iconic, since there is a regular mapping between
its formational elements and components of its meaning: the G handshape corre-
sponds to holding a solid object (food); the mouth corresponds to the mouth of the
eater, the agent argument; the movement towards the mouth corresponds to putting
the object into the mouth; and the double movement indicates a process. Many signs
are only partially iconic: some formational elements correspond to meaning compo-
nents, but not all. Other signs are arbitrary; none of their formational components can
be said to correspond to a meaning component in any obvious way (though some
researchers claim that no signs are completely arbitrary, and that the sign formational
elements are always meaning-bearing, e.g., Tobin 2008). The lexicon of any sign lan-
guage, then, consists of signs that are arbitrary and signs that are iconic to different
degrees, yet all signs make use of the same formational elements.
Spoken language lexicons are not that different; they also have both arbitrary and
non-arbitrary words. The difference between the two types of languages is in the rela-
tive proportions of the different kinds of words. In spoken languages, non-arbitrary
words are quite marginal, making it possible (and convenient) to ignore them. In sign
languages non-arbitrary signs constitute a substantial part of the lexicon. Boyes Braem
(1986) estimates that at least a third of the lexical items of Swiss-German Sign Lan-
guage are iconic. Zeshan (2000) estimates that the percentage might be even higher
(at least half of the signs) for Indopakistani Sign Language (IPSL).
Iconic signs present a challenge for the traditional division between phonemes and
morphemes, since the basic formational units, the phonemes of sign languages, may be
meaning-bearing and not meaningless. Meaningfulness is usually regarded as the factor
distinguishing phonemes from morphemes: phonemes are meaningless, while mor-
phemes are meaningful units. Yet phonemes are also the basic building blocks of meaning-bearing units in a language. But in sign languages, those basic building blocks are
also meaning-bearing. Can they be regarded as morphemes, then? This would also
seem problematic, since they are not composed of more basic formational elements,
and the units they attach to are not words, stems, or roots, but rather other basic
formational units. Johnston and Schembri (1999, 118) propose that these units function
simultaneously as phonemes and morphemes, since they serve as the basic formational
building blocks and at the same time as minimal meaning-bearing units. They propose
the term ‘phonomorphemes’ to capture the nature of these basic elements. This dual
nature of the basic formational units is even more evident in classifier constructions
(see chapter 8 on classifiers).

2.2. The structure of the lexicon: sign families

Leaving theoretical issues aside, the meaningfulness of the formational building blocks
of signs has consequences for the organization of the sign language lexicon. Signs that
share a formational element (or elements) often also share some meaning component.
For example, many signs in Israeli SL that are articulated on the temple express some
kind of mental activity (know, remember, learn, worry, miss, dream, day-dream); signs
articulated on the chest often denote feelings (love, suffer, happy, proud, pity, heart-
ache). Many signs with a W handshape denote activities performed by the legs (jump,
get-up, fall, walk, run, stroll). Fernald and Napoli (2000) enumerate groups of signs,
or sign families, in American Sign Language (ASL) that share formational elements, be
it location, movement, handshape, or any combination of these. They show that the
phenomenon of word families is very robust in ASL, characterizing the entire lexicon.
Works on other sign languages (e.g., Brennan (1990) on British Sign Language (BSL);
Johnston and Schembri (1999) on Australian Sign Language (Auslan); Meir and San-
dler (2008) on Israeli SL) show that this is characteristic of other languages in the
signed modality. Signs in such a ‘family’ are related to each other not by inflectional
or derivational means, yet they are related nonetheless.
Fernald and Napoli posit a new linguistic unit, the ‘ion-morph’, a combination of
one or more phonological features that, within a certain set of signs, has a specific
meaning. Take, for example, the signs mother and father in ASL: they have the same
movement, orientation, and handshape. They differ with respect to the location: chin
for mother, forehead for father. Within this restricted set of signs, the combination of
specific movement, orientation, and handshape has the meaning of ‘parent’. The chin
and the forehead, in turn, are ion-morphs denoting female and male in signs expressing
kinship terms, such as sister-brother, niece-nephew, grandmother-grandfather.
Fernald and Napoli (2000, 41) argue that ion-morphs are relevant not only for sign
languages, but for spoken languages as well. A case in point is phonosymbolism, the
ability of certain sounds or combination of sounds to carry specific ‘sound images’ that
go with particular semantic fields, such as fl- representing a liquid substance in motion,
as in flow, flush, flood, or fluid. Yet one can find word families even in more grammati-
cal domains. For example, most question words in English begin with wh-. The labial
glide carries the interrogative meaning within a specific set of words, and it may con-
trast with the voiced interdental fricative in pairs like ‘then/when’ and ‘there/where’,
the latter carrying the meaning of ‘definiteness’, as in the/that/this/those.
The examples from both sign and spoken languages clearly show that there are
ways other than inflection and derivation to relate words to one another. Whether
these relations are morphological in nature is a difficult theoretical question, which
can be conveniently set aside when dealing with spoken languages, since word families
are less central to the structure of their lexicons. In sign languages, in contrast, they
are an important characteristic of the lexicon. They may also play a role in creating
new words (as suggested by Fernald and Napoli 2000), since language users may rely
on existing ion-morphs when new lexical items are coined. Such cases again raise the
question of whether or not derivational morphology is at play here.
The special nature of the sub-lexical units in signs affects the lexicon in another
respect as well. When phonemes are combined to create a sign, the meaning of the
resulting unit is often componential and transparent. This means that signs in the lexi-
con of a sign language need be less conventionalized than words of a spoken language,
since their meaning can often be computed. Johnston and Schembri (1999, 126) make
a distinction between signs and lexemes, the latter having a meaning “which is (a)
unpredictable and/or somewhat more specific than the sign’s componential meaning
potential even when cited out of context, and/or (b) quite unrelated to its componential
meaning components (i.e., lexemes may have arbitrary links between form and mean-
ing).” Lexemes, then, can be completely arbitrary, but more importantly, they are com-
pletely conventionalized, and can therefore be thought of as stored in the lexicon of
the language. Signs, in contrast, are more productive than lexemes. They can be in-
vented ‘on the spot’, because of the transparency of their components, and are there-
fore less lexicalized and less conventionalized than lexemes. A signer, for example, can
invent a sign meaning ‘the three of them were walking together’ by extending three
fingers and moving the hand in space. Such a sign can be understood in the appropriate
context even if there is no conventional sign with that meaning in the specific sign
language used by the signer. Johnston and Schembri show that signs and lexemes have
different phonological, morphological, and semantic characteristics, and suggest that
only lexemes should be part of the lexicon. An interesting question that arises is
whether signs (as opposed to lexemes) are words, and if they are, whether they form
a separate word class. One specific phenomenon that has been referred to in this con-
text is the issue of classifier constructions, whose word status is an unresolved problem
in sign language literature (see chapter 8, Classifiers). Classifier constructions are often
excluded from analyses of word classification because of their unclear status. We return
to this issue in section 4.
The lesson to be learned from the nature of signs and their components is that the
line between the lexicon and the morphological component may be less definite than
is usually assumed. Having raised the problematic issues, we now turn to those that
are more straightforward within the realm of morphology. We examine which morpho-
logical operations are available to sign languages, and how these operations are used
to distinguish between different types of words and to create new words.

3. Sign language morphological processes

Morphology provides machinery for creating new words and for creating different
forms of a word. The former is the realm of derivation, the latter of inflection. Deriva-
tional and inflectional processes differ in their productivity, regularity, and automatic-
ity. Inflectional processes are regarded as regular and automatic, in that they apply to
all members of a given category, while derivational processes are usually less regular
and non-automatic (though, as with any linguistic categorization, this distinction is
often blurred and not as dichotomous as it is presented). In spite of this functional
difference, the morphological mechanisms used for both derivation and inflection are
the same.
The main three morphological operations are compounding, affixation, and redupli-
cation. Words formed by such operations are complex, in the sense that they contain
additional morphological content when compared to the bases they operate on. How-
ever, morphological complexity need not coincide with added phonological complexity,
since morphological operations can be sequential or simultaneous. A sequential opera-
tion adds phonological segments onto a base, such as suffixes (as in baker) and prefixes (as in
unhappy). In a simultaneous operation, meaningful units are added not by adding
segments but rather by changing them. The plurality of feet, for example, is encoded
by changing the quality of the vowel of the singular form foot. Both types of operation
are found in spoken and in sign languages, but there is a difference in preference. In
spoken languages, the sequential type is very common while simultaneous operations
Fig. 5.1: Three forms of the sign learn (Israeli SL): (a) base form (b) iterative (c) durational.
Copyright © 2011 by Sign Language Lab, University of Haifa. Reprinted with permission.

are rarer. Sign languages, in contrast, show a marked preference towards simultaneous
morphological operations. Sequential affixal morphology is very infrequent, and (apart
from compounding) has been reported in only a few sign languages. This tendency
towards simultaneous structuring characterizes all linguistic levels of sign languages,
and has been attributed to the visuo-spatial modality (Emmorey 2002 and references
cited there; Meier et al. 2002).
Sequential morphology in the signed modality is quite similar to its spoken language
counterpart: elements in a sequence (words and affixes) form a complex word by virtue
of being linearly concatenated to one another. The Israeli SL compound volunteer is
formed by combining the two signs heart and offer into a complex lexical unit. In the
process, several changes, some of which are modality-driven, may take place, and these
are described in section 5.1.1. But by and large, sequential operations in both modali-
ties are quite similar.
However, when turning to simultaneous morphology, the analogy is less clear. What
would simultaneous morphology look like in a sign language? Which phonological
features are changed to encode morphological processes? It turns out that it is the
movement component of the sign that is the one most exploited for morphological
purposes. Take, for example, the sign learn in Israeli SL (Figure 5.1). The base form
has a double movement of the hand towards the temple. Several repetitions of the sign
with its double movement yield an iterative meaning ‘to study again and again’. If the
sign is articulated with a slower and larger single movement, repeated three times, then
the verb is inflected for a continuative aspect, meaning ‘to study for a long time’.
A change in the movement pattern of a sign distinguishes nouns from formationally
similar verbs in several sign languages (see section 4.4.1). Repetition of a noun sign in
several locations in space denotes plurality (see chapter 6, Plurality). A change in the
direction of a specific class of verbs (agreement verbs) indicates a change in the syntac-
tic arguments of the verb in many sign languages (see chapter 7, Verb Agreement). In
addition to change in movement, change in handshape with classifying verbs can also
be analyzed as simultaneous inflection (and as a certain kind of verb-argument-agree-
ment, see chapter 8, Classifiers).
Thus simultaneous morphology in sign languages is implemented by changing fea-
tures of the movement of the sign, and to a lesser degree by handshape change. It is
simultaneous in the sense that it does not involve adding phonological segments. The
signs ask and question are related to each other more like the English noun-verb pair
cóntrast-contrást than the pair government-govern. Both signs consist of one syllable.
They differ in the prosodic features imposed on the syllabic structure. This type of
simultaneous morphology is often described as comparable to the templatic morphol-
ogy characteristic of Semitic languages, where morphological distinctions are encoded
by associating phonological material to different prosodic templates (Sandler 1989;
Sandler/Lillo-Martin 2006).
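The difference between the two operation types can be made concrete with a small schematic model. The following Python sketch is purely illustrative: the feature names, their values, and the representations of learn, heart, and offer are simplified assumptions, not a phonological analysis drawn from the literature. It shows that a sequential operation lengthens a word by concatenating material onto the base, whereas a simultaneous operation leaves the segmental make-up of the base intact and only rewrites its movement features.

```python
# Schematic contrast between sequential and simultaneous morphology.
# Feature names and values are illustrative simplifications, not a
# phonological analysis taken from the sign language literature.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Sign:
    gloss: str
    handshape: str
    location: str
    movement: str          # e.g. 'double', 'single-x3'
    manner: str = 'plain'  # e.g. 'plain', 'slow-large'

def compound(first: Sign, second: Sign) -> tuple:
    """Sequential morphology: phonological material is concatenated,
    as in the Israeli SL compound HEART^OFFER 'volunteer'."""
    return (first, second)

def continuative(base: Sign) -> Sign:
    """Simultaneous morphology: no segment is added; only the movement
    features of the base are rewritten, as in LEARN 'study for a long time'."""
    return replace(base, movement='single-x3', manner='slow-large')

heart = Sign('HEART', handshape='unspecified', location='chest', movement='single')
offer = Sign('OFFER', handshape='unspecified', location='neutral', movement='single')
volunteer = compound(heart, offer)   # two segments: longer than either base
learn = Sign('LEARN', handshape='unspecified', location='temple', movement='double')
print(continuative(learn))           # same single sign, movement rewritten
```

On this toy representation, the continuative form of learn contains exactly the same number of segments as its base, mirroring the observation above that ask and question are both monosyllabic and differ only in prosodic features.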
The two types of sign language morphology are characterized by different proper-
ties (Aronoff/Meir/Sandler 2005). Sequential operations are sparse; they are arbitrary
in form; the affixes are related to free forms in the language and therefore can be
regarded as having grammaticalized from free words; they are derivational and less
regular. Simultaneous operations are numerous; many of them are productive; they
are related to spatial and temporal cognition, and most of them are non-arbitrary to
various degrees. They can be inflectional or derivational. It follows, then, that there is
partial correlation between simultaneity vs. sequentiality and the inflection vs. deriva-
tion dichotomy: sequential processes in sign languages are derivational. Simultaneous
processes can be both inflectional and derivational. Thus inflection in sign languages
is confined to being simultaneously instantiated. Derivational processes not only make
use of simultaneous morphology, but also take the form of sequential morphology.
These differences are summarized in Table 5.1. Both morphologies play a role in distin-
guishing word classes in sign languages and in deriving new lexical items.

Tab. 5.1: Two types of sign-language morphology

SIMULTANEOUS
⫺ Adds morphological material by changing features of formational elements (mainly the movement component)
⫺ Preferred in the sign modality
⫺ Both inflectional and derivational
⫺ Numerous in different sign languages
⫺ Motivated to various degrees, related to spatial cognition
⫺ Not grammaticized from free words

SEQUENTIAL
⫺ Adds morphological material by adding phonological segments to a base
⫺ Less preferred in the sign modality
⫺ Only derivational
⫺ Relatively sparse in different sign languages
⫺ Tend to be more arbitrary
⫺ Grammaticized from free words

4. Word classes

4.1. Introduction

Word classes are often referred to as ‘parts of speech’, from Latin pars orationis, liter-
ally ‘piece of what is spoken’ or ‘segment of the speech chain’. Although the two terms
are used interchangeably in current linguistic practice (a practice which I follow in this
chapter as well), it should be pointed out that, for the Greeks and Romans, the primary
task was to divide the flow of speech into recognizable and repeatable pieces (hence
parse). Categorizing was secondary to identification (Aronoff, p.c.). In this chapter,
however, we will concern ourselves with categorization and classification.
There are various ways to classify the words of a given language. However, the term
‘word classes’ usually refers to classification of words according to their syntactic and
morphological behavior, e.g., the ability to appear in a certain syntactic environment,
to assume a specific syntactic role (argument, predicate, modifier), and to co-occur
with a particular set of inflectional affixes. Many of the words belonging to the same
class also share some aspect of meaning. For example, words which typically occur in
argument position and take number and case inflections often denote entities, whereas
words occurring in predicate position and taking tense inflection often denote events.
Yet there is no full overlap between a semantically based classification and a morpho-
syntactic one, making the classification of any given language challenging, and a cross-
linguistic comparison even more so.
The first major division of words in the lexicon is into content words and function
words. Content word classes are generally open (i.e., they have large numbers of members and
accept new members easily and regularly) and they tend to have specific meaning,
usually extra-linguistic (they are used to refer to the world or to a possible world).
They tend to be fairly long, and their text frequency is rather low (Haspelmath 2001).
Function words usually belong to small and closed classes. They are usually defined by
their function, as they do not have concrete meaning; they tend to be quite short, and
their text frequency is high. A few function word classes in sign languages are explored
in other chapters of this volume: pronouns (chapter 11) and auxiliary verbs (chap-
ter 10). Other function word classes mentioned in the sign language literature are nu-
merals (see e.g., Fuentes/Tolchinsky 2004), question words and negative words (Zeshan
2004a,b; see also chapters 14 and 15). In this chapter the focus is on content class
words. Function words will be mentioned only when they are relevant for diagnosing
specific content class words.
The major content word classes are nouns, verbs, adjectives, and adverbs. It is an
empirical question whether this classification is universal, and whether the same set of
criteria can be applied cross-linguistically to identify and define the different classes in
every language. Clearly, languages vary greatly in their syntactic and morphological
structures. Therefore syntactic and morphological criteria can be applied only on a
language-particular basis. For a cross-linguistic study, a semantically-based classifica-
tion would be much more feasible, since all languages presumably have words to refer
to different concept classes such as entities, events, and properties. But, as pointed out
above, semantic criteria often do not fully overlap with morpho-syntactic criteria for
any particular language. The challenge, then, is to develop a set of criteria that would
be descriptively adequate for particular languages, and at the same time would enable
cross-linguistic comparison. As Haspelmath (2001) points out, the solution that is usu-
ally adopted (often implicitly) is to define word classes on a language-particular basis
using morpho-syntactic criteria, and then use semantic criteria for labeling these
classes: the word class that includes most words for things and persons is called ‘noun’;
the one that includes most words for actions and processes is called ‘verb’; etc. It is
also usually the case that the correspondences ‘thing-noun’ and ‘action-verb’ are the
unmarked extensions of the respective word classes. Marked extensions are often indi-
cated by derivational affixes. This methodology implicitly assumes some kind of seman-
tic basis for word classification, and that this basis is universal. Such assumptions should
be tested by studying languages that are typologically diverse as much as possible.
Sign languages, as languages produced in a different modality, constitute a very good
test case.

4.2. Word classes in the signed modality

Sign languages, like spoken languages, have lexicons consisting of lexemes of different
types that refer to different notions (entities, actions, states, properties, etc.) and com-
bine with each other to form larger units, phrases, and sentences. However, as a group,
sign languages differ from spoken languages in three major respects relevant for the
present discussion. Firstly, and most obviously, they are articulated and transmitted in
a different modality from spoken languages. Secondly, sign languages as a group are
much younger than spoken languages. And finally, the field of sign language linguistics
is young, having emerged only a few decades ago.
The modality difference raises several questions:

(i) Would languages in a different modality display different kinds of word classes?
For example, would the spatial nature of sign languages give rise to a word class
that denotes spatial relations?
(ii) Would iconicity play a role in differentiating between word classes?
(iii) Do languages in a different modality have a different set of properties to distinguish
between word classes?
(iv) Do we need to develop a totally different set of tools to categorize signs?

Sign languages as a group are also much younger than spoken languages. Spoken lan-
guages are either several millennia or several hundred years old, or they are derived
from old languages. In contrast, the oldest sign languages known to us today are about
300 years old or so (for BSL, see Kyle and Woll 1985; for French Sign Language (LSF),
see Fischer 2002) and some are much younger: Israeli SL is about 75 years old (Meir/
Sandler 2008), and Nicaraguan Sign Language (ISN) is about 35 years old (Senghas
1995). It may very well be that sign languages existed in older times, but they left no
records and therefore cannot be studied. All we know about sign languages comes
from studying the sign languages available to us today, and these are young. Young
spoken languages, creoles, are characterized by a dearth of inflectional morphology
(McWhorter 1998). Furthermore, the lexicons of both creoles and pidgins are described
as consisting of many multifunctional words, that is, words used both as nouns and
verbs, or nouns and adjectives. For example, askim in Tok Pisin can function both as a
noun and as a verb (Romaine 1989, 223). As we shall see, multifunctionality is charac-
teristic of sign languages as well. Therefore, word classification in young languages
cannot rely on morphology.
These two factors, modality and young age, contribute to the fact that sign languages
as a group form a distinct typological morphological type (Aronoff/Meir/Sandler 2005).
As new languages they hardly have any sequential morphology. They lack nominal
inflections such as case and gender inflections. They also do not have tense inflections
on verbs. These inflectional categories are key features in determining word classes in
many spoken languages (though, of course, many spoken languages lack such inflec-
tional categories, and therefore similar difficulties for word classification arise). On
the other hand, as visuo-spatial languages, they are characterized by the rich spatial
(simultaneous) morphology described in section 3. Can spatial modulations play a role
in determining word classes, as morphological inflections do in spoken languages? Would
they identify the same word classes found in spoken languages?
In addition to the youth of the languages, the field of sign language linguistics is
also new, dating back to the early 1960s. In analyzing the linguistic structure of sign
languages, sign linguists often rely on theories and methodologies developed on the
basis of spoken languages. Since linguistics as a field is much older than sign linguistics,
it makes sense to rely on what is known about how to study spoken languages. It also
has the advantage of making it possible to compare findings in the two types of lan-
guages. However, it runs the risk of analyzing sign languages through the lens of spoken
languages, and missing important phenomena if they are unique to sign languages (see,
e.g., Slobin 2008 on this issue).
These three factors ⫺ modality, youth of language, and youth of field ⫺ make the
study of word classes in sign languages challenging and non-trivial. Indeed, systematic
studies of word classification in sign languages are very few. Though terms such as
noun, verb, adjective, pronoun, etc. are abundant in the sign language literature, there
have been very few attempts at principled word classification of any studied sign lan-
guage, and very few researchers explicitly state on what grounds the terms ‘noun’,
‘verb’, etc. are used. However, as the sign language linguistics field expands, more
linguistic operations and structures are discovered which can be helpful in determining
word classes in sign languages. We turn to look at some classifications that have been
suggested, and to examine the means by which sign languages differentiate between
word classes.

4.3. Word classifications suggested for sign languages

The earliest attempt to provide criteria for identifying word classes of a sign language
lexicon is found in Padden (1988). She suggests the following criteria for identifying
the three major content word classes in ASL: Nouns can be modified by quantifiers,
adjectives can inflect for intensive aspect, and verbs cannot be pre-modifiers of other
signs. Under this classification, nouns and verbs are defined on distributional syntactic
grounds, and adjectives on morphological grounds. Notice that verbs are only defined
negatively, probably because there is no inflection common to all and only verbs in the
language. Also, it is not clear that this set of criteria applies to all and only the members
of a certain class.
Zeshan (2000) suggests a word classification of IPSL according to the spatial charac-
teristics of signs. One class consists of signs that cannot move in space at all, a second
class consists of signs that are produced in neutral space and can be articulated in
various locations in space, and the third class consists of directional signs, that is, signs
that move between locations in space associated with referents. The criterion of spatial
behavior is clearly modality specific, since words in spoken languages do not have
spatial properties. Therefore, such an analysis, even if it provides a descriptively ad-
equate analysis of a particular language, does not allow for cross-modality comparisons
and generalizations. In addition, it is not clear whether such a classification has any
syntactic and semantic corollaries within the language. For example, the class of signs
that cannot move in space includes signs meaning ‘understand’, ‘woman’ and ‘I’ (Ze-
shan 2000, 58). These signs do not seem to have any semantic commonality, and it is
doubtful whether they have anything in common syntactically. Therefore, the useful-
ness of this classification does not extend beyond a purely formational classification.
Recently, a comprehensive and methodical attempt to establish a set of criteria
for defining word classes in sign languages has been made by Schwager and Zeshan
(2008). Their goal is to develop a cross-linguistically applicable methodology that
would give adequate descriptive results for individual languages. They explicitly take
the semantics as a starting point, since the semantic classification is cognitively-based
and hence language independent. They compile a set of binary semantic features that
define three basic concept classes: entity, event, and property. After assigning signs to
different classes based on their semantics, Schwager and Zeshan proceed to examine
how signs in each concept class map to syntactic roles and morphological operations.
Four basic syntactic roles are listed: argument, predicate, argument modifier, and predi-
cate modifier. As for morphological criteria, a list of 17 morphological processes that
have been described in the sign linguistics literature is compiled. These processes are
classified according to the concept classes they co-occur with.
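The two-step logic of this methodology can be sketched as a small classification pipeline. In the following Python fragment, the binary features, the concept-class definitions, and the single toy lexicon entry are hypothetical placeholders (the actual study works with a richer feature inventory and 17 morphological processes); the sketch merely illustrates the flow from semantic features to concept classes, and from there to a per-class tabulation of syntactic roles and morphological processes.

```python
# Toy sketch of the two-step methodology: (1) semantic features
# determine concept class; (2) syntactic roles and morphological
# processes are tabulated per concept class for a given language.
# Features, roles, and the lexicon entry are hypothetical placeholders.

CONCEPT_CLASSES = {
    'entity':   {'time-stable': True,  'relational': False},
    'event':    {'time-stable': False, 'relational': True},
    'property': {'time-stable': True,  'relational': True},
}

def concept_classes(semantics: dict) -> list:
    """Step 1: assign a sign to every concept class whose defining
    features it matches; a sign may match more than one class."""
    return [name for name, feats in CONCEPT_CLASSES.items()
            if all(semantics.get(f) == v for f, v in feats.items())]

def tabulate(corpus: list) -> dict:
    """Step 2: collect, per concept class, the syntactic roles and
    morphological processes attested for its members."""
    table = {}
    for sign in corpus:
        for cls in concept_classes(sign['semantics']):
            row = table.setdefault(cls, {'roles': set(), 'morphology': set()})
            row['roles'].update(sign['roles'])
            row['morphology'].update(sign['morphology'])
    return table

corpus = [{'gloss': 'WORK',
           'semantics': {'time-stable': False, 'relational': True},
           'roles': {'predicate', 'argument'},
           'morphology': {'durational'}}]
print(tabulate(corpus))
```

The toy entry for work anticipates the DGS facts discussed below: because the sign occurs in both predicate and argument position, the tabulation records both roles for the event class.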
In order to test the validity of their approach, they apply it to corpora compiled
from three unrelated sign languages: German Sign Language (DGS), Russian Sign
Language (RSL), and Sign Language of Desa Kolok (KK), a sign language that devel-
oped in a small village community in Bali with high incidence of hereditary deafness.
Words with comparable meanings were identified and extracted from the corpora, and
were analyzed according to the procedure described above. This comparison pinpoints
both similarities and differences between the languages. Even at the semantic level,
signs referring to similar concepts may not belong to the same concept class in the two
languages. For example, the sign deaf in DGS may refer to a person or a property,
while in KK it refers only to a person. Therefore, in DGS this sign will be listed both
as an entity and as a property, while in KK it is classified only as an entity. In consider-
ing the combination of concept classes with syntactic roles, some more interesting dif-
ferences emerge. DGS, but not KK, has event signs that can be used in argument
position. The sign work, for example, can be used in predicate position, but also in
argument position, as in (1) (Schwager/Zeshan 2008, 534, example 26). Also, in DGS
signs denoting properties can assume a modifier or a predicate position, whereas in
KK they are restricted to predicate position.

(1) work find difficult#ints(intensive) [DGS]
‘It is very difficult to find a job.’

The list of morphological modulations serves as a useful tool for identifying the mor-
phological nature of different sign languages. KK has far fewer morphological proc-
esses than DGS and RSL, especially in the event class. Of the 13 processes listed for
events, KK has only 3, while DGS and RSL have 11 each. Therefore KK is much more
isolating than the two other languages, and morphological operations are much less
helpful in establishing word classes in this language.
These results show that, as in spoken languages, different sign languages vary in
terms of their word classes. However, it might be that the variation in the signed
modality is less extreme than that found among languages in the spoken modality.
Further comparative studies of sign languages, and of sign vs. spoken languages, are
needed to assess this intuitive observation.
One type of evidence that is not used in their analysis is distributional evidence,
such as the co-occurrence of signs with certain function word classes. Distributional
properties are language-specific, and hinge on identifying the relevant function words
and syntactic environments for each language. Yet some cross-linguistic generalizations
can be made. For example, nouns are more likely to co-occur with pointing signs
(often termed index or ix), and can serve as antecedents for pronouns. Verbs are more
likely to co-occur with auxiliary verbs. As I point out below, some such observations
have already been made for different languages, and it is hoped that they will be
incorporated in future investigations of sign language word classes.
In spite of the lack of distributional evidence, Schwager and Zeshan’s analysis shows
that it is possible to arrive at a systematic, theoretically sound approach to word classi-
fication in sign languages. Such an analysis provides descriptions of word classes of
specific languages, but also allows for cross-linguistic and cross-modality comparisons.

4.4. Means for differentiating between specific word classes

Though very few works try to establish general criteria for determining word classes
of the entire lexicon of a sign language, many works target more restricted domains of
the lexicon, and describe certain structures and processes that apply to specific classes
or sub-parts of classes. These involve both morphological and distributional criteria.

4.4.1. Noun-verb pairs

Descriptions of various sign languages often comment that many signs are multifunc-
tional, and can serve both as a nominal and as a verb (denoting an entity or an event).
This is not surprising given the young age of sign languages, but it has also been argued
to be modality driven. The following paragraph is from an introduction to the first
dictionary of Israeli SL (Cohen/Namir/Schlesinger 1977, 24):

Two concepts which in spoken language are referred to by words belonging to differ-
ent parts of speech will often have the same sign in sign language. The sign for sew
is also that for tailor, namely an imitation of the action of sewing ... eat and food
are the same sign ... and to fish is like fisherman ... In English, as in many other
languages, words of the same root belonging to different parts of speech (like ‘bake’
and ‘baker’) are often distinguished inflectionally. They are denoted by the same
sign in sign language since it has neither prefixes nor suffixes. These, being non-
iconic, would seem to be out of tune with a language in which many signs have
some degree of transparency of meaning, and are therefore unlikely to arise sponta-
neously in a sign language.
Fig. 5.2: a. ASL noun-verb pair: chair-sit; b. Israeli SL noun-verb pair: question-ask. Figure a reprinted with permission from Padden (1988). Figure b Copyright © 2011 by Sign Language Lab, University of Haifa. Reprinted with permission.

Given the propensity of sign languages towards iconicity, and the non-iconicity of se-
quential derivational affixes, those affixes comparable to, e.g., -tion, -ize, and -al in
English are not expected to be found in sign languages. Yet several studies of noun-
verb pairs show that it is not impossible to distinguish formationally between word
classes in a sign language. However, one has to know what to look for. It turns out
that subtle differences in the quality of the movement component of certain signs may
indicate the word class of specific signs.
The first work to show that nouns and verbs may exhibit systematic formational
differences is Supalla and Newport (1978). They describe a set of 100 related noun-
verb pairs, where the noun denotes an instrument, and the verb an action performed
with or on that instrument, e.g., scissors and cut-with-scissors, chair and to-sit (see
Figure 5.2a) or iron and to-iron. These pairs differ systematically in the properties of
the movement component: in nouns it is reduplicated, restricted, and constrained; the
movement of the related verbs is not.
Following their seminal work, similar phenomena have been attested in various sign
languages. Sutton-Spence and Woll (1999, 109) report that in BSL noun-verb pairs,
e.g., sermon-preach, nouns have a restrained, abrupt end and verbs do not. This spe-
cific example shows that signs exhibiting this alternation are not necessarily restricted
to instrument-action pairs. Similarly, in Israeli SL formationally related nouns and
verbs, the verbs typically have a longer movement, as in question vs. ask (Meir/Sandler
2008, see Figure 5.2b). In Russian Sign Language as well, qualities of the movement
component were the most reliable properties distinguishing nouns from verbs (Kim-
melman 2009): nouns but not verbs (in noun-verb pairs) tend to have repeated move-
ments, and verbs tend to have wider movement amplitude than the corresponding
nouns. Johnston (2001) provides an explanation for the repeated movement of nouns
but not their paired verbs in Auslan. In this language, the best exemplars of the alterna-
tion are signs referring to actions which are inherently reversible, such as open-shut
(e.g., turning a knob, opening and shutting a drawer, turning a key). The signs repre-
senting these actions and entities are iconic, their direction of movement depicting the
direction of the action. It is this iconicity that is the basis for the noun-verb distinction:
a single movement in one of the two possible directions is interpreted as a process
(one of the two possible processes), while a repeated bi-directional movement is inter-
preted as naming a salient participant in the action, the participant on which the action
in both directions is performed (the knob, the drawer, or the key in the actions men-
tioned above).
The formational difference between nouns and verbs may be rooted in iconicity, as
suggested by Johnston, but in some sign languages this formational difference has ex-
panded to non-iconic cases as well, suggesting that the form is taking on a life of its own.
Hunger (2006) measured the duration (in terms of numbers of frames) of 15 noun-
verb pairs in Austrian Sign Language (ÖGS) both in isolation and in connected speech.
Her results show that verbs do indeed take twice as long to produce as nouns. Interest-
ingly, the longer duration of verbs characterizes even verbs which are not inherently
durational (e.g., book-open, photograph, lock). Therefore, Hunger concludes that the
longer duration of verbal signs cannot be attributed to iconicity effects. Rather, this
formational difference “can be interpreted as a distinctive marker for verbal or nomi-
nal status” (p. 82).
The lesson to be learned from these studies is that word classes can be distinguished
formationally in the signed modality, by recruiting the movement component of signs
for the task. Although this device may be rooted in iconicity, in some languages it
seems to have already extended beyond the iconically-based set of signs, and is on its
way to becoming a formal morphological entity.

4.4.2. Inflectional modulations

One of the most commonly used criteria for determining word classes in spoken lan-
guages is morphological inflections. Inflectional affixes are very selective with respect
to the lexical base they attach to (Zwicky/Pullum 1983). A group of words that take a
particular inflectional affix can therefore be regarded as belonging to one class. Notice,
however, that the term ‘affix’, which is commonly used for a concrete sequential mor-
pheme, can also be used to refer to a process or a change in features that is expressed
simultaneously on the inflected word.
In sign languages, inflections take the form of modulations to the movement compo-
nent of the sign. Numerous inflections have been described in the literature, the main
ones being:

Verbs: (a) Encoding arguments: verb agreement; reciprocal; multiple; exhaustive.
(b) Aspect: habitual; durational; continuative; iterative; protractive; delayed completive; gradual.
Nouns: plurality.
Predicative adjectives: predispositional; susceptative; continuative; intensive; approximative; iterative; protractive.

What all these inflections have in common is that they make use of the movement
component of the sign in order to encode specific grammatical categories. For example,
the intensive inflection of adjectives in Israeli SL imposes lengthening of the movement
on the base sign (Sandler 1999). In ASL this inflection takes the form of increased
length of time in which the hand is held static for the first and last location (Sandler
1993, 103⫺129). Many aspectual modulations, such as the durational and iterative,
impose reduplicated circular movement on the base sign.
Most of the inflections occur on verbs and adjectives, suggesting that inflectional
modulations are restricted to predicate position. Since several inflections occur on both
verbs and adjectives (e.g., continuative, iterative, protractive), it may be that these
inflections are diagnostic of a syntactic position more than a specific word class. This,
however, should be determined on a language-specific basis.
The use of these inflections for determining word classes is somewhat problematic.
Firstly, morphological classes often do not coincide with concept classes. No single
morphological operation applies across the board to all members of a particular con-
cept class. For example, Klima and Bellugi (1979) describe several adjectival inflections,
but these co-occur only with adjectives denoting a transitory state. Verb agreement,
which in many spoken languages serves as a clear marker of verbs, characterizes only
one sub-class of verbs in sign languages, agreement verbs. Secondly, many of these
operations are limited in their productivity, and it is difficult to determine whether
they are derivational or inflectional (see Engberg-Pedersen 1993, 61⫺64, for Danish
Sign Language (DSL); Johnston/Schembri 1999, 144, for Auslan). Thirdly, since all
these inflections involve modulation of the movement component, sometimes their
application is blocked for phonological reasons. Body anchored verbs, for instance,
cannot inflect for verb agreement. Inflectional operations, then, cannot serve by them-
selves as diagnostics for word classes. But, as in spoken languages, they can help in
establishing word classes for particular languages, with corroborative evidence from
semantic, syntactic, and distributional facts.

4.4.3. Word-class-determining affixes

Although a language may lack formational features characterizing the part of speech
of base words, it may still have certain derivational affixes that mark the resulting word
as belonging to a certain part of speech. The forms of English chair, sit, and pretty do
not indicate that they are a noun, a verb, and an adjective respectively. But nation,
nationalize, and national are marked as such by the derivational suffixes -tion, -ize, and
-al in their form.
Can we find similar cases in sign languages? In general, sequential affixation is quite
rare in sign languages, as discussed above. Of the descriptions of affixes found in the
literature, very few refer to the part of speech of the resulting words. Two relevant
affixes are described in Israeli SL, and two in Al-Sayyid Bedouin Sign Language
(ABSL), a language that emerged in a Bedouin village in Israel in the past 70 years.
Aronoff, Meir and Sandler (2005) describe a class of prefixes in Israeli SL that derive
verbs. This class includes signs made by pointing either to a sense organ ⫺ the eye, nose,
or ear ⫺ or to the mouth or head. Many of the complex words formed with them can be
glossed ‘to X by seeing (eye)/hearing (ear)/thinking (head)/intuiting (nose)/saying
(mouth)’, e.g., eye+check ‘to check something by looking at it’; nose+sharp ‘discern by
smelling’; mouth+rumors ‘to spread rumors’. But many have idiosyncratic meanings, such
as nose+regular ‘get used to’ and eye+catch ‘to catch red-handed’ (see Figure 5.3). Al-
though the part of speech of the base word may vary, the resulting word is almost always
used as a verb. For example, the word eye/nose+sharp means ‘to discern by seeing/smell-
ing’, though sharp by itself denotes a property. In addition to their meaning, distributional
properties of these complex words also support the claim that they are verbs: they co-
occur with the negative sign glossed as zero, which negates verbs in the language. Aronoff,
Meir and Sandler conclude that the prefixes behave as verb-forming morphemes.

Fig. 5.3: Israeli SL sign with a verb-forming prefix: eye+catch ‘to catch red-handed’. Copyright
© 2011 by Sign Language Lab, University of Haifa. Reprinted with permission.

Another Israeli SL affix is a suffix glossed as -not-exist, and its meaning is more
or less equivalent to English -less (Meir 2004; Meir/Sandler 2008, 142⫺143). This suffix
attaches to both nouns and adjectives, but the resulting word is invariably an adjective:
important+not-exist means ‘of no import’, and success+not-exist ‘without success,
unsuccessful’. The main criterion for determining word class in this case is semantic:
the complex word denotes a property (‘lacking something’).
Fig. 5.4: Two ABSL complex words with suffixes determining word class: a. Locations: pray+there ‘Jerusalem’; b. Objects: drink-tea+round-object ‘kettle’. Copyright © 2011 by Sign Language Lab, University of Haifa. Reprinted with permission.

An interesting class of complex words has been described in ABSL, whose second
member is a pointing sign, indicating a location (Aronoff et al. 2008; Meir et al. 2010).
The complex words denote names of locations ⫺ cities and countries, as in long-
beard+there ‘Lebanon’, head-scarf+there ‘Palestinian Authority’, pray+there ‘Jeru-
salem’ (see Figure 5.4a). If locations are regarded as a specific word class, then these
words contain a formal suffix indicating their classification (parallel to English -land or
-ville).
Finally, another set of complex words in ABSL refers to objects, and contains a
component indicating the relative length and width of an object by pointing to various
parts of the hand and arm, functionally similar to size and shape specifiers in other
sign languages (Sandler et al. 2010; Meir et al. 2010). The complex signs refer to objects,
and are therefore considered as nouns, though the base word may be a verb as well:
cut+long-thin-object is a knife, drink-tea+round-object is a kettle (Figure 5.4b).

4.4.4. Co-occurrence with function words

Function words are also selective about their hosts. Therefore, restrictions on their
distribution may serve as an indication of the word class of their neighbors. Padden
(1988) defines the class of nouns on distributional grounds, as the class of signs that can
be modified by quantifiers. Hunger (2006), after establishing a formational difference
between nouns and verbs in ÖGS, notices that there are some distributional corollaries:
modal verbs tend to occur much more often next to verbs than next to nouns. On the
other hand, indices, adjectives, and size and shape classifiers (SASS) are more often
adjacent to nouns than to verbs.
Another type of function words that can be useful in defining word classes is the
class of negation words. Israeli SL has a large variety of negators, including, inter alia,
two negative existential signs (glossed as neg-exist-1, neg-exist-2) and two signs that
are referred to by signers as ‘zero’ (glossed as zero-1, zero-2). It turns out that these
two pairs of signs have different co-occurrence restrictions (Meir 2004): the former co-
occurs with nouns (signs denoting entities, as in sentence (2), below), the latter with
verbs (signs denoting actions, as in sentence 3). In addition, signs denoting properties
are negated by not, the general negator in the language, and cannot co-occur with the
other negators (sentence 4).

(2) ix1 computer neg-exist-1/*zero-1/2/*not [Israeli SL]
‘I don’t have a computer.’
(3) ix3 sleep zero-1/2/*neg-exist-1/2
‘He didn’t sleep at all/He hasn’t slept yet.’
(4) chair ixA comfortable not/*zero-1/2/*neg-exist-1/2
‘The chair is/was not comfortable.’

Finally, in Israeli SL a special pronominal sign evolved from the homophonous sign
person, and is in the process of becoming an object clitic, though it has not been fully
grammaticalized yet (Meir 2003, 109⫺140). This sign co-occurs with verbs denoting
specific types of actions, but crucially it attaches only to verbs. This conclusion is sup-
ported by the fact that all the signs that co-occur with this pronominal sign are also
negated by the zero signs described above.

4.4.5. Co-occurrence with non-manual features

Non-manual features such as facial expressions, head nod, and mouthing play various
grammatical roles in different sign languages (Sandler 1999). In this, they are quite
similar to function words, and their distribution may be determined by the word class
of the sign they co-occur with. In various sign languages, some facial expressions have
been described as performing adverbial functions, modifying actions or properties (e.g.,
ASL: Baker/Cokely 1980; Liddell 1980; Anderson/Reilly 1998; Wilbur 2000; Israeli SL:
Meir/Sandler 2008; BSL: Sutton-Spence/Woll 1999). These facial expressions can be
used as diagnostic for word classes, since their meaning is clearly compatible with
specific concept classes. Israeli SL has facial expressions denoting manner such as
‘quickly’, ‘meticulously’, ‘with effort’, ‘effortlessly’, which modify actions, and can be
used as diagnostics for verbs.
In some sign languages (e.g., many European sign languages) signers often accom-
pany manual signs with mouthing of a spoken language word. Mouthing turns out to
be selective as well. In the studies of noun-verb pairs in ÖGS and Auslan, it was
noticed that mouthing is much more likely to occur with nouns than with verbs.
In ÖGS, 92% of the nouns in Hunger’s (2006) study were accompanied by
mouthing, whereas only 52% of the verbs were. In Auslan, about 70% of the nouns
were accompanied by mouthing, whereas only 13% of the verbs were (Johnston 2002).

4.4.6. Conclusion

At the beginning of this section we asked whether sign languages are character-
ized by a different set of word classes because of their modality. We showed that it is
possible to arrive at a theoretically based classification that can be applied to both
types of languages, using similar types of diagnostics: meaning, syntactic roles, distribu-
tion, morphological inflections, and derivational affixes. The main diagnostics discussed
in this section are summarized in Table 5.2 below. The main content classes, nouns,
verbs, and adjectives, are relevant for languages in the signed modality as well. On the
other hand, there are at least two types of signs that are clearly spatial in nature: one
is the classifier construction (see chapter 8), whose word class status has not been deter-
mined yet, and might turn out to require a different classification altogether. The other
type consists of two sub-classes of verbs, agreement verbs and spatial verbs, the classes
of verbs that ‘move’ in space to encode agreement with arguments or locations. These
classes are also sign language specific, though they belong to the larger word class
of verbs.
Are there any properties related to word classes that characterize sign languages as
a type? Firstly, more often than not, the form of the sign is not indicative of its part of
speech. For numerous sign languages, it has been observed that many signs can be
used both as arguments and as predicates, denoting both an action and a salient partici-
pant in the action, and often a property as well. This is, of course, also true of many
spoken languages. Secondly, morphological inflection is almost exclusively restricted
to predicate positions. Nominal inflections such as case and gender are almost entirely
lacking (for number see chapter 6, Plurality). Thirdly, space plays a role in determining
sub-classes within the class of verbs; although not all sign languages have the tri-partite
verb classification into agreement, spatial, and plain verbs, only sign languages have it.
It is important to note that there are also differences between individual sign lan-
guages. The sequential affixes determining word classes are clearly language specific,
as are the co-occurrence restrictions on function words. Inflectional modulations, which
are pervasive in sign languages, also vary from one language to another. Not all sign
languages have verb agreement. Aspectual modulations of verbs and adjectives have
been attested in several sign languages. Specific modulations, such as the protractive,
predispositional, and susceptative modulations, have been reported for ASL, but
whether or not they occur in other sign languages awaits further investigation.

Tab. 5.2: Main diagnostics used for word classification in different sign languages

semantic
Concept class ⫺ Nouns: Entity. Verbs: Event. Adjectives: Property.
syntactic
Syntactic position ⫺ Nouns: Argument, Predicate. Verbs: Predicate. Adjectives: Modifier, Predicate.
Syntactic co-occurrences ⫺ Nouns: Quantifiers, Specific negators, Determiners. Verbs: Specific negators, Pronominal object clitic.
morphological
Formational characterization ⫺ Nouns: Short and/or reduplicated movement (with respect to comparable verbs). Verbs: Longer non-reduplicated movement (with respect to comparable nouns).
Inflectional modulations ⫺ Nouns: Plurality. Verbs: (a) Encoding arguments: verb agreement, reciprocal, multiple, exhaustive; (b) Aspect: habitual, durational, continuative, iterative, protractive, delayed completive, gradual. Adjectives: Predispositional, susceptative, continuative, intensive, approximative, iterative, protractive.
Word-class determining affixes ⫺ Nouns: SASS suffixes. Verbs: ‘sense’-prefixes. Adjectives: Negative suffix (‘not-exist’).
Co-occurrence with facial expressions ⫺ Nouns: Mouthing. Verbs: Adverbial facial expressions.

5. Word formation
Morphology makes use of three main operations: compounding, affixation, and redu-
plication. These operations can be instantiated sequentially or simultaneously. The
visuo-spatial modality of sign languages favors simultaneity, and offers more
possibilities for such structures and operations, which are highlighted in each of the following
sub-sections.
Three additional means for expanding the lexicon are not discussed in this chapter.
The first is borrowing, which is discussed in chapter 35. The second is conversion or
zero-derivation, that is, the assignment of an already existing word to a different word
class. As mentioned above, many words in sign languages are multifunctional, serving
both as nouns and verbs or adjectives. It is difficult to determine which use is more
basic. Therefore, when a sign functions both as a noun and as a verb, it is difficult to
decide whether one is derived from the other (which is the case in conversion), or
whether the sign is unspecified as to its word-class assignment, characteristic of multi-
functionality. Finally, backformation is not discussed here, as I am not aware of any
potential case illustrating it in a sign language.

5.1. Compounding

A compound is a word composed of two or more words. Compounding expands
vocabulary in the language by drawing from the existing lexicon, using combinations of
two or more words to create novel meanings. Compounding seems to be necessarily
sequential, as new lexical units are formed by the sequential co-occurrence of more
basic lexical items. Yet sign languages may potentially offer simultaneously structured
compounds too. Since the manual modality has two articulators, the two hands, com-
pounds may be created by articulating two different signs simultaneously, one with each
hand. We will discuss sequential compounds first, and then turn to examine several
structures that could be regarded as simultaneous compounding.

5.1.1. Sequential compounding

Compounds are words. As such, they display word-like behavior on all levels of linguis-
tic analysis. They tend to have the phonological features of words rather than phrases.
For example, in English and many other languages, compounds have one word stress
(e.g., a gréenhouse), like words and unlike phrases (a greén hóuse). Semantically, the
meaning of a compound is often, though not always, non-compositional. A greenhouse
is not a house painted green, but rather “a building made mainly of glass, in which the
temperature and humidity can be regulated for the cultivation of delicate or out-of-the
season plants” (Webster’s New World Dictionary, Third College Edition). It is usually
transparent and not green. Syntactically, a compound behaves like one unit: members
of a compound cannot be interrupted by another unit, and they cannot be independ-
ently modified. A dark greenhouse is not a house painted dark green. These properties
of compounds may also serve as diagnostics for identifying compounds and distinguish-
ing them from phrases.

Properties of sign language compounds: Sign languages have compounds too. In fact,
this is the only sequential morphological device that is widespread in sign languages.
Some illustrative examples from different languages are given in Table 5.3. As in spo-
ken languages, sign language compounds also display word-like characteristics. In their
seminal study of compounds in ASL, Klima and Bellugi (1979, 207⫺210) describe
several properties that are characteristic of compounds and distinguish them from
phrases. Firstly, a quick glance at the examples in Table 5.3 shows that the meaning of
compounds in many cases is not transparent. The ASL compound blue^spot does not
mean ‘a blue spot’, but rather ‘bruise’. heart^offer (in Israeli SL) does not mean
‘to offer one’s heart’ but rather ‘to volunteer’, and nose^fault (‘ugly’ in Auslan)
has nothing to do with the nose. Since the original meaning of the compound members
may be lost in the compound, the following sentences are not contradictory (Klima/
Bellugi 1979, 210):

(5) blue^spot green, vague yellow [ASL]
‘That bruise is green and yellowish.’
(6) bed^soft hard
‘My pillow is hard.’

Compounds are lexicalized in form as well. They tend to have the phonological appear-
ance of a single sign rather than of two signs. For example, they are much shorter than
the equivalent phrases (Klima/Bellugi 1979, 213), because of reduction and deletion
of phonological segments, usually the movement of the first segment. The transitory
movement between the two signs is more fluid. In some cases, the movement of the
second component is also deleted, and the transitory movement becomes the sole
movement of the compound, resulting in a monosyllabic sign with only one movement,
like canonical simplex signs (Sandler 1999).

Tab. 5.3: Examples of compounds in sign languages

ASL (Klima/Bellugi 1979): bed^soft ‘pillow’; face^strong ‘resemble’; blue^spot ‘bruise’; sleep^sunrise ‘oversleep’
BSL (Brennan 1990): think^keep ‘remember’; see^never ‘strange’; work^support ‘service’; face^bad ‘ugly’
Israeli SL (Meir/Sandler 2008): fever^tea ‘sick’; heart^offer ‘volunteer’; respect^mutuality ‘tolerance’
Auslan (Johnston/Schembri 1999): can’t^be-different ‘impossible’; red^ball ‘tomato’; nose^fault ‘ugly’
ABSL (Aronoff et al. 2008): car^light ‘ambulance’; pray^house ‘mosque’; sweat^sun ‘summer’
IPSL (Zeshan 2000): father^mother ‘parents’; understand^much ‘intelligent’; potato^various ‘vegetable’
New Zealand Sign Language (NZSL) (Kennedy 2002): no^germs ‘antiseptic’; make^dead ‘fatal’; ready^eat ‘ripe’

Fig. 5.5: The ASL signs (a) think and (b) marry, and the compound they form, (c) believe.
Reprinted with permission from Sandler and Lillo-Martin (2006).
Changes contributing to the ‘single sign’ appearance of compounds are not only in
the movement component, but also in hand configuration and location. If the second
sign is performed on the non-dominant hand, that hand takes its position at the start
of the whole compound. In many cases, the handshape and orientation of the second
member spread to the first member as well (Liddell/Johnson 1986; Sandler 1989, 1993).
Similar phenomena have been attested in Auslan as well (Johnston/Schembri 1999,
174). They point out that in lexicalized compounds often phonological segments of the
components are deleted, and therefore they might be better characterized as blends.
As a result of the various phonological changes that can take place, a compound
may end up looking very much like a simplex sign: it has one movement and one hand
configuration. In the ASL compound believe (in Figure 5.5), for example, the first
location (L1) and the movement (M) segments of the first member, think, are deleted.
The second location (L2) becomes the first location of the compound, and the move-
ment and final location segments are those of the second member of the compound,
marry. The only indication that believe is a compound is the fact that it involves two
major locations, the head and the non-dominant hand, a combination not found in
simplex signs (Battison 1978). These phonological changes are represented in (7),
based on Sandler (1989):

(7) The phonological representation of the ASL compound believe
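Since the original diagram cannot be reproduced here, the segmental changes can be rendered in a simplified linear schema (L = location, M = movement; this layout is an illustrative approximation based on the preceding paragraph, not Sandler’s actual notation):

think: L1 ⫺ M ⫺ L2 (L1 and M deleted)
marry: L1 ⫺ M ⫺ L2 (M and final L retained)
believe: L2 of think ⫺ M of marry ⫺ L2 of marry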

Morphological structure: Compounding takes advantage of linear structure, but it also
involves reorganization and restructuring. The members of a compound may exhibit
different types of relationship. Endocentric compounds are those that have a head.
The head represents the core meaning of the compound and determines its lexical
category. The English compound highchair is endocentric, headed by the noun chair.
Semantically, a highchair is a type of a chair, and morphologically it is a noun, the
lexical category of its head. A compound such as scarecrow is exocentric: it is neither
a ‘crow’ nor a ‘scare’. Endocentric compounds are further classified according to the
position of the head in the compound: right-headed (the head occurs in final position,
as in highchair) and left-headed (the head occurs in initial position, as in Hebrew gan-
yeladim ’kindergarten’, literally ’garden-children’). It is commonly assumed that the
position of the head in compounds is systematic in a language (Fabb 1998). English,
for example, is characterized as right-headed, while Hebrew is left-headed.
Not much has been written on headedness in sign language compounds. Of the
ASL examples presented in Klima and Bellugi, many are exocentric, e.g., sure^work
‘seriously’, will^sorry ‘regret’, wrong^happen ‘accidentally’, face^strong ‘resemble’,
wrong^happen ‘fate’. Most of the endocentric compounds described there are left-
headed, eat(food)^noon ‘lunch’, think^alike ‘agree’, flower^grow ‘plant’, sleep^
sunrise ‘oversleep’, but at least one, blue^spot ‘bruise’, is right-headed. In Israeli SL,
compounds that have Hebrew counterparts are usually left-headed (party^surprise
‘surprise party’), though for some signers they may be right-headed. Compounds that
do not have Hebrew counterparts are often exocentric, e.g., fever^tea ‘sick’,
swing^play ‘playground’. Verbal compounds are often right-headed, as in
heart^offer ‘volunteer’, and bread^feed ‘provide for’.
A third type of compound structure is the coordinate compound, where the mem-
bers are of equal rank, as in hunter-gatherer, someone who is both a hunter and a
gatherer. In a special type of coordinate compounds, the members are basic category-
level terms of a superordinate term. The meaning of the compound is the superordi-
nate term. This class of compounds, also called dvandva compounds (etymologically
derived from Sanskrit dvamdva, literally ‘a pair, couple’, a reduplication of dva ‘two’), is
not productive in most modern European languages, but occurs in languages of other
families. Such compounds exist in ASL (Klima/Bellugi 1979, 234⫺235): car^
plane^train ‘vehicle’, clarinet^piano^guitar ‘musical instrument’, ring^bracelet^
necklace ‘jewelry’, kill^stab^rape ‘crime’, mother^father^brother^sister ‘family’.
Like other compounds, they denote one concept, the movement of each component
sign is reduced, and transitions between signs are minimal. However, there is a lot of
individual variation in form and in the degree of productivity of these forms. Younger
signers use them very little, and consider them to be old-fashioned or even socially stig-
matized.
5.1.2. Simultaneous compounding

In principle, simultaneous compounding in sign languages can be of two types. In the
first, each hand may produce a different sign, but the production is simultaneous. The
second type combines certain phonological parameters from two different sources to
create a single sign. In the latter type not all the phonological specifications of each
compound member materialize, and therefore they may also be characterized as
blends.
Examples of the first type are exceedingly rare. Two BSL examples are mentioned
in the literature: minicom (a machine which allows typed messages to be transmitted
along a telephone line, in Brennan 1990, 151), and space-shuttle (Sutton-Spence/Woll
1999, 103). The compound minicom is composed of the sign type and the sign tele-
phone produced simultaneously: the right hand assumes the handshape of the sign
telephone, but is positioned over the left hand that produces the sign type.
However, according to some analyses, simultaneous compounding is very wide-
spread in sign languages. Brennan (1990) uses the term ‘classifier compounds’ for signs
in which the non-dominant hand, and sometimes both hands, assumes a handshape of
a classifier morpheme. For example, in the sign aquadiver the non-dominant hand, in
a flat handshape, represents a surface, and the dominant hand, in an upright V handshape
moving downwards, represents a person moving downwards from the surface. Accord-
ing to Brennan’s analysis, any sign containing a classifier handshape on the non-domi-
nant hand is a compound, even some so-called ‘frozen’ lexical items. A sign such as
write (in Israeli SL and many other sign languages), whose dominant hand has a
handshape depicting the handling of a long thin object and moves it over a flat
surface (represented by the non-dominant hand), is also a classifier compound under
this account. Johnston and Schembri (1999, 171) refer to such constructions as “simul-
taneous sign constructions” rather than compounds, because they point out that such
constructions may be phrasal or clausal. It should be pointed out that however these
signs originated, they are lexical signs in every respect, and under most analyses, they
are not regarded synchronically as compounds.
Two types of word formation process combine handshape from one source and
movement and location from another: numeral incorporation, where the handshape
represents a number (Stokoe et al. 1965; Liddell 1996 and works cited there), and
initialization, in which the handshape is drawn from the handshape inventory of the
manual alphabet (Stokoe et al. 1965; Brentari/Padden 2001). In addition, these proc-
esses are not usually analyzed as compounds, but rather as some kind of incorporation,
affixation, or combination of two bound roots (e.g., Liddell 1996 on numeral incorpora-
tion). Whatever the analysis, they both combine elements from two sources, and in this
they resemble compounding, but they do so simultaneously, a possibility available only
for languages in the signed modality.

Numeral incorporation is usually found in pronominal signs and in signs denoting time
periods, age, and money. In these signs the number of fingers denotes quantity. For
example, the basic form of the signs hour, day, week, month, and year in Israeli SL
is made with a 1 handshape (extended index finger). By using a 2, 3, 4, or 5 handshape,
the number of units is expressed. That is, signing the sign for day with a 2 handshape
means ‘two days’, a 3 handshape ‘three days’, etc. This incorporation of number in the signs
is limited in Israeli SL to five in signs with one active hand, and to ten in symmetrical
two-handed signs. Number signs in many sign languages have specifications only for
handshape, and are therefore good candidates for participating in such simultaneous
compounding (but see Liddell 1996 for a different analysis). But there are also restrictions
on the base sign, which provides the movement and location specifications: usually
it has to have a 1 handshape, which can be taken to represent the number one.
However, some counter-examples to this generalization do exist. In DGS, the sign year
is not made with a 1 handshape, yet its handshape is replaced by the numeral handshapes
to express ‘one/two/three etc. years’. Numeral incorporation has been reported in many
sign languages, e.g., ASL, BSL, Israeli SL, DGS, Auslan, and IPSL, among others. But there
are sign languages that do not use this device. In ABSL, numeral incorporation has
not been attested, maybe because time concept signs in the language do not have a
1 handshape (for numeral incorporation see also chapters 6 and 11).
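The combinatorial logic just described ⫺ handshape from the numeral, movement and location from the base sign, subject to the limits noted above ⫺ can be made explicit in a short sketch. This is illustrative Python only; the function name, feature labels, and example values are invented for exposition and merely restate the Israeli SL facts reported above:

def incorporate_numeral(base_sign, n):
    # Illustrative sketch of numeral incorporation as described above for
    # Israeli SL: the numeral contributes only the handshape, while the
    # base sign contributes movement and location.
    limit = 10 if base_sign["two_handed_symmetrical"] else 5
    if not 1 <= n <= limit:
        raise ValueError("numeral exceeds the incorporation limit")
    if base_sign["handshape"] != "1":
        raise ValueError("base sign lacks a 1 handshape (cf. DGS year)")
    return {
        "handshape": str(n),                # from the numeral sign
        "movement": base_sign["movement"],  # from the base sign
        "location": base_sign["location"],  # from the base sign
        "two_handed_symmetrical": base_sign["two_handed_symmetrical"],
    }

# 'two days' in Israeli SL (hypothetical feature values):
day = {"handshape": "1", "movement": "arc", "location": "neutral space",
       "two_handed_symmetrical": False}
two_days = incorporate_numeral(day, 2)  # 2 handshape, movement/location of DAY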

Initialization is another type of simultaneous combination of phonological
specifications from two different sources: a spoken language word and a sign language word.
The handshape of an initialized sign represents a letter of the fingerspelled alphabet,
corresponding to the first letter of the written form of an ambient spoken language
word. This initialized handshape is usually added to a sign that already exists in the
language, lending it an additional ⫺ often more specific ⫺ meaning for which there is
no other sign. For example, the ASL signs family, association, team, and department
all share the movement and location of the sign group, and are distinguished by the
handshapes F, A, T, or D. As Brentari and Padden (2001, 104) point out, some initial-
ized signs in ASL are not built on native signs, but they still form a semantic and a
formational ‘family’. Color terms, such as blue, purple, yellow, and green, are charac-
terized by the same movement and location, although there is no general color sign
on which they are based. The same holds for color terms and kinship terms in LSQ
(Machabee 1995, 47). In other cases, the movement and location may present
iconically some feature of the concept. In LSQ, the sign for ‘Roman’ is performed with
an R handshape tracing the form of a Roman military helmet above the head (Macha-
bee 1995, 45). Initialization is found in other sign languages as well, e.g., Irish Sign
Language (Ó’Baoill/Matthews 2002) and Israeli SL (Meir/Sandler 2008, 52). However,
it is much less common in languages with a two-handed fingerspelling system, such as
BSL, Auslan, and New Zealand Sign Language. In a one-handed fingerspelling system,
each letter is represented solely by the handshape, which may then be easily incorpo-
rated in other signs, taking their location and movement features. In a two-handed
system, each letter is identified by a combination of handshape and location (and sometimes
movement as well), so that it is much less free to combine with other phonological param-
eters (Cormier/Schembri/Tyrone 2008). More common in these languages are single
manual letter signs, which are based on a letter of an English word, but with very
limited types of movement of the dominant hand against the non-dominant hand.

5.2. Affixation

Though compounding is common in all studied sign languages, sequential affixation is
very rare. This is partly due to the general preference in manual-visual languages for
more simultaneous structures. However, since compounds are not uncommon, simulta-
neity cannot be the sole factor for disfavoring sequential affixation. Another explana-
tion, suggested by Aronoff, Meir and Sandler (2005), is the relatively young age of sign
languages. Sequential derivational affixes in spoken languages are in many cases the
result of grammaticalization of free words. Grammaticalization is a complex set of
diachronic changes (among them reanalysis, extension, phonological erosion, and se-
mantic bleaching) that take time to crystallize. Sign languages as a class are too young
for such structures to be abundant (but see chapter 36). In addition, it might be the
case that there are more affixal structures in sign languages that have not been identified
yet, because of the young age of the field of sign language linguistics.
How can one identify affixes in a language? What distinguishes them from com-
pound members? First, an affix recurs in the language, co-occurring with many differ-
ent base words, while compound members are confined to few bases. The suffix -ness,
for example, is listed as occurring in 3058 English words (Aronoff/Anshen 1998, 245),
while green (as in greenhouse, greengrocer, greenmail) occurs in about 30. In addition,
affixes are more distant from their free word origin. While members of compounds
usually also occur as free words in the language, affixes in many cases do not. There-
fore, a morpheme that recurs in many lexical items in a language and in addition does
not appear as a free form is an affix and not a compound member. Finally, allomorphy
is much more typical of affixes than of compound members. This is to be expected,
since affixes are more fused with their bases than compound members with each other.
However, the difference between an affix and a compound member is a matter of
degree, not a categorical difference, and can be hard to determine in particular cases.

5.2.1. Sequential affixation in sign languages

Very few sequential affixes have been mentioned in the sign language literature. As
they are so rare, those affixes that were found were assumed to have evolved under
the influence of the ambient spoken language. In ASL the comparative and superlative
affixes (Sandler/Lillo-Martin 2006, 64) and the agentive suffix were regarded as English
loan translations. However, recently Supalla (1998) argued, on the basis of old ASL
films, that the agentive suffix evolved from an old form of the sign ‘person’ in ASL.
For three affixes it has been explicitly argued that the forms are indeed affixes and
not free words or members of a compound: two negative suffixes, one in ASL and the
other in Israeli SL, and a set of ‘sense’ prefixes in Israeli SL. All of these affixes have
free form counterparts that are nonetheless significantly different from the bound
forms, so as to justify an affixational analysis. The affinity between the bound and the
free forms may indicate how these affixes evolved.
The suffix glossed as zero in ASL has the meaning ‘not at all’, and apparently
evolved from a free sign with a similar meaning (Sandler 1996; Aronoff/Meir/Sandler
2005). However, the suffix and the base it attaches to behave like a single lexical unit:
they cannot be interrupted by another element, and for some signers they are fused
phonologically. As is often the case, some combinations of word+zero have an idiosyncratic
meaning, e.g., touch+zero ‘didn’t use it at all’, and there are some arbitrary
gaps in the lexical items it attaches to. What makes it more affix-like than compound-
like is its productivity: it attaches quite productively to verbs and (for some signers) to
adjectives. Yet its distribution and productivity vary greatly across signers, indicating
that it has not been fully grammaticized.
The Israeli SL negative suffix, mentioned in section 4.4.3, was apparently grammati-
cized from a negative word meaning ‘none’ or ‘not exist’. In addition to other
characteristics typical of affixes, it also has two allomorphs: a one-handed and a two-
handed variant, the distribution of which is determined by the number of hands of
the base.
Another class of affixes is the ‘sense’ prefix described above. Similar forms have
been reported in other sign languages, e.g., BSL (Brennan 1990), where they are
treated as compounds. Indeed, such forms show that sometimes the distinction between
compounds and affixed words is blurred. The reason that Aronoff, Meir and Sandler
(2005) analyze these forms as affixes is their productivity. There are more than 70 such
forms in Israeli SL, and signers often use these forms to create new concepts. In addi-
tion, signers have no clear intuition of the lexical class of the prefixes; they are not sure
whether the sign pointing to the eye should be translated as ‘see’ or ‘eye’, or the sign
pointing to the nose as ‘smell’ or ‘nose’, etc. Such indeterminacy is characteristic of affixes, but not of
words. The fact that these forms are regarded as compounds in other languages may
be due to lesser degree of productivity in other languages (for example, they are less
prevalent in ASL), or to the fact that other researchers did not consider an affix analy-
sis. However, their recurrence in many sign languages indicates that these signs are
productive sources for word formation.
Two potential suffixes exist in ABSL. They were mentioned in section 4.4.3: the
locative pointing signs, and the size and shape signs. At present, it is hard to determine
whether these are affixed words or compounds, since not much is known about the
structure of lexical items in ABSL. However, since these signs recur in a number of
complex signs, they have the potential of becoming suffixes in the language.

5.2.2. Simultaneous affixation in sign languages

The term ‘simultaneous affixation’ may seem contradictory, since affixation is
usually conceived of as linear. However, by now it should be clear that morphological
information may be added not by adding segments, but rather by changing features of
segments. Therefore, all the processes described above in which morphological catego-
ries are encoded by a change in the movement parameters of the base sign may be
regarded as instances of simultaneous affixation.
All inflectional processes identified in the sign language literature to date make use
of this formal device, and a few were described in section 4.4.2 above. But sign lan-
guages use this device for derivational purposes as well, as exemplified by the noun-
verb pairs in section 4.4.1. Quite a few of the derivational processes involve reduplica-
tion, to which we turn in the next section. Here we mention derivational processes that
involve changes to the movement component with no reduplication.
ASL has the means for deriving predicates from nouns. Klima and Bellugi (1979,
296) describe a systematic change to the movement of ASL nouns, forming predicates
with the meaning of ‘to act/appear like X’, as in ‘to act like a baby’ from baby, ‘to seem
Chinese’ from chinese, and ‘pious’ from church. The derived predicates have a fast
and tense movement with restrained onset.
Klima and Bellugi also point out that the figurative or metaphorical use of signs
often involves a slight change in the movement of the base sign. A form meaning
‘horny’ differs slightly in movement from hungry; ‘to have a hunch’ differs from feel.
Similarly, differences in movement encode an extended use of signs as sentential adver-
bials, as in ‘suddenly’ or ‘unexpectedly’ from wrong, or ‘unfortunately’ from trouble.
Yet in these cases both form and meaning relations are idiosyncratic, and appear only
in particular pairs of words. These pairs show that movement is a very productive tool
for indicating relationships among lexical items. But not all instances of movement
difference are systematic enough to be analyzed as derivational.
Not only may the quality of the movement change, but also its direction. In signs
denoting time concepts in a few sign languages, the direction of movement indicates
moving forward or backwards in time. The signs tomorrow and yesterday in Israeli
SL form a minimal pair. They have the same hand configuration and location, but
differ in the direction of movement. In yesterday the movement is backwards, and in
tomorrow it is forwards. Similarly, if a forward or backward movement is imposed on
the signs week and year, the derived meanings will be ‘next week/year’ and ‘last week/
year’. This process is of very limited productivity. It is restricted to words denoting
time concepts, and may be further restricted by the phonological form of the base sign.
Furthermore, the status of the direction of movement in these signs is not clear. It is
not a morpheme, yet it is a phoneme that is meaning-bearing (see the discussion of
sign families in section 2.2). Nonetheless, within its restricted semantic field, it is
quite noticeable.

5.3. Reduplication

Reduplication is a process by which some phonological segment, or segments, of the
base is repeated. Yet what is repeated may vary. It could be the entire base, as in
Warlpiri kurdu-kurdu (‘children’, from kurdu ‘child’; Nash 1986); a morpheme; a
syllable; or any combination of segments, such as the first CVC segment of the base,
as in Agta tak-takki (‘legs’, from takki ‘leg’, Marantz 1982). Function-wise, reduplica-
tion lends itself very easily to iconic interpretation. Repeating an element creates a
string with several identical elements. When a whole base is repeated, the interpreta-
tion seems quite obvious. Lakoff and Johnson (1980, 180) refer to this as the principle
of “more of form stands for more of content”. The most straightforward iconic uses of
reduplication are plurality and distribution for nouns (see chapter 6, Plurality); repeti-
tion, duration, and habitual activity in verbs (see chapter 9, Tense, Aspect, and Modal-
ity); and increase in the size and/or intensity of adjectives. However, it is also used in
various non-iconic or less motivated functions, such as to form infinitives, verbal
adjectives, causatives, various aspects, and modalities (Kouwenberg 2003).
The sign modality affords several possibilities of reduplication, some of which do
not have counterparts in spoken languages (see Pfau/Steinbach 2006). It may involve
several iterations of a sign. These iterations may be produced in the same place, or
may be displaced in the signing space. Iterations may be performed by one hand or
both hands. If the latter, the two hands may move symmetrically or in an alternating
fashion. A circular movement may be added to the iterations, in various rhythmic
patterns. Consequently, some phonological features of the base sign may be altered.
Non-manual features may be iterated as well, or a feature may spread over the entire
set of manual iterations. Finally, reduplication may also take a simultaneous form: one
sign can be articulated simultaneously by both hands.
Sign languages certainly make extensive use of reduplication. As the forms may
vary, so can the functions. Reduplication is very common in verbal and adjectival aspec-
tual inflections. Of the 11 adjectival modulations in Klima and Bellugi (1979), seven
involve reduplication; 10 of the 12 aspectual modulations exemplified by look-at and
give also involve reduplication. It is also very commonly used to indicate plurality on
nouns (see Sutton-Spence/Woll 1999, 106 for BSL; Pizzuto/Corazza 1996 for Italian
Sign Language (LIS); Pfau/Steinbach 2006 for DGS as well as LIS and BSL). These
inflectional processes are discussed in the relevant chapters in this volume.
Reduplication is also used in a few derivational processes. Frishberg and Gough
(1973, cited in Wilbur 1979, 81) point out that repetitions of signs denoting time units
in ASL, e.g., week, month, tomorrow, derive adverbs meaning weekly, monthly, ev-
ery-day. Slow repetition with wide circular path indicates duration ‘for weeks and
weeks’. Activity nouns in ASL are derived from verbs by imposing small, quick, and
stiff repeated movements on non-stative verbs (Klima/Bellugi 1979, 297; Padden/Perl-
mutter 1987, 343). The verb act has three unidirectional movements, while the noun
acting is produced with several small, quick, and stiff movements. In noun-verb pairs
(discussed above) in ASL and Auslan, reduplicated movement (in addition to the qual-
ity of the movement) distinguishes between nouns and verbs.
Other derivational processes do not change the category of the base word, but
create a new (although related) lexical item. It should be noticed that in such cases it
is often difficult to determine whether the process is inflectional or derivational. For
example, the two adjectival processes described here are referred to as inflections in
Klima and Bellugi (1979) and as derivation in Padden and Perlmutter (1987). Charac-
teristic adjectives are derived from ASL signs denoting incidental or temporary states,
such as quiet, mischievous, rough, silly, by imposing circular reduplicated movement
on the base sign. Also in ASL, repeated tense movements derive adjectives with the
meaning of ‘-ish’: youngish, oldish, blueish (Bellugi 1980). In Israeli SL verbs denot-
ing a reciprocal action are derived by imposing alternating movement on some verbs,
e.g., say ⫺ conduct conversation; speak ⫺ converse; answer ⫺ ‘conduct a dialogue
of questions and answers’ (Meir/Sandler 2008).
Simultaneous reduplication, that is, the articulation of a sign by both hands instead
of by only one hand is very rare as a word formation device. Johnston and Schembri
(1999, 161⫺163) point out that in Auslan producing a two-handed version of a one-
handed sign (which they term ‘doubling’) very rarely results in a different yet related
lexical item. Usually the added meaning is that of intensification, e.g., bad vs. very-
bad/appalling/horror, or success vs. successful/victorious, but often such intensified
forms are also characterized by specific facial expression and manual stress. Most in-
stances of doubling in Auslan are either free variants of the single-handed version, or
mark grammatical distinctions such as distributive aspect on verbs. Therefore they
conclude that in most cases doubled forms do not constitute separate lexical items in
the language.
5.4. Conclusion

Sign languages make use of word formation operations that are also found in spoken
languages, but endow them with flavors that are available only to manual-spatial lan-
guages: the existence of two major articulators, and their ability to move in various
spatial and temporal patterns. There is a strong preference for simultaneous operations,
especially in affixation. Inflection is, in fact, exclusively encoded by simultaneous affix-
ation, while derivation is more varied in the means it exploits.
Both inflection and derivation make use of modulations to the movement compo-
nent of the base sign. In other words, sign languages make extensive use of one phono-
logical parameter for grammatical purposes. Although signs in sign families (described
in section 1.2) can share any formational element, systematic relations between forms
are encoded by movement. Why is it that the movement is singled out for performing
these grammatical tasks and not the other parameters of the sign ⫺ the hand configura-
tion or the location?
Using a gating task, Emmorey and Corina (1990) investigated how native ASL
signers use phonetic information for sign recognition. The results indicated that the
location of the sign was identified first, followed quickly by the handshape, and the movement
was identified last. These data may suggest that the movement is in a sense ‘extra’: it
adds little to the lexical identity of the sign. But it can be used to add shades of
meaning. Moreover, movement is inherently both spatial and temporal. Many inflec-
tional categories encode temporal and spatial concepts, and therefore movement is the
most obvious formational parameter to express these notions in a transparent way. Yet
the use of movement in derivational processes shows that iconicity is not the entire
story. It might be the case that once a formational element is introduced into the
language for whatever reason, it may then expand and be exploited as a grammatical
device for various functions. The use of movement also has an interesting parallel in
spoken languages, in that non-sequential morphology often makes use of the vowels
of the base word, and not the consonants. Furthermore, it has been argued that vowels
carry out more grammatical roles in spoken languages (both syntactic and morphologi-
cal) while consonants carry more of the lexical load (Nespor/Peña/Mehler 2003). Both
movement and vowels are the sonorous formational elements; both are durational and
less discrete. However, what makes them key elements in carrying the grammatical
load (as opposed to the lexical load) of the lexeme still remains an open issue.
The ubiquity of compounds shows that sequential operations are not utterly disfav-
ored in sign languages. Signed compounds share many properties with their spoken
language counterparts, including the tendency to lexicalize and become more similar
in form to simplex signs. Compounding may also give rise to the development of gram-
matical devices such as affixes. Elements that recur in compounds are good candidates
for becoming affixes, but such developments take time, and are therefore quite sparse
in young languages, including sign languages (Aronoff/Meir/Sandler 2005). Because of
their youth, sign languages actually offer us a glimpse into such diachronic processes
in real time.
6. Literature

Anderson, Diane E./Reilly, Judy
1998 PAH! The Acquisition of Adverbials in ASL. In: Sign Language & Linguistics 1(2), 3⫺28.
Aronoff, Mark
1976 Word Formation in Generative Grammar. Cambridge, MA: MIT Press.
Aronoff, Mark/Anshen, Frank
1998 Morphology and the Lexicon: Lexicalization and Productivity. In: Spencer, Andrew/
Zwicky, Arnold M. (eds.), The Handbook of Morphology. Oxford: Blackwell, 237⫺247.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81, 301⫺344.
Aronoff, Mark/Meir, Irit/Padden, Carol A./Sandler, Wendy
2008 The Roots of Linguistic Organization in a New Language. In: Interaction Studies: A
Special Issue on Holophrasis vs. Compositionality in the Emergence of Protolanguage
9(1), 131⫺150.
Baker, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver
Spring, MD: TJ Publishers.
Battison, Robbin
1978 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Bellugi, Ursula
1980 How Signs Express Complex Meanings. In: Baker, Charlotte/Battison, Robbin (eds.),
Sign Language and the Deaf Community. Silver Spring, MD: National Association of
the Deaf, 53⫺74.
Boyes Braem, Penny
1986 Two Aspects of Psycholinguistic Research: Iconicity and Temporal Structure. In: Ter-
voort, B. T. (ed.), Signs of Life: Proceedings of the Second European Congress on Sign
Language Research. Amsterdam: Publication of the Institute for General Linguistics,
University of Amsterdam 50, 65⫺74.
Brennan, Mary
1990 Word Formation in British Sign Language. Stockholm: The University of Stockholm.
Brentari, Diane/Padden, Carol A.
2001 Native and Foreign Vocabulary in American Sign Language: A Lexicon with Multiple
Origins. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages: A Cross-
linguistic Investigation of Word Formation. Mahwah, NJ: Lawrence Erlbaum, 87⫺119.
Cohen, Einya/Namir, Lila/Schlesinger, I. M.
1977 A New Dictionary of Sign Language: Employing the Eshkol-Wachmann Movement No-
tation System. The Hague: Mouton.
Cormier, Kearsy/Schembri, Adam/Tyrone, Martha E.
2008 One Hand or Two? Nativisation of Fingerspelling in ASL and BANZSL. In: Sign Lan-
guage & Linguistics 11(1), 3⫺44.
Emmorey, Karen
2002 Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum.
Emmorey, Karen/Corina, David
1990 Lexical Recognition in Sign Language: Effects of Phonetic Structure and Morphology.
In: Perceptual and Motor Skills 71, 1227⫺1252.
Engberg-Pedersen, Elisabeth
1993 Space in Danish Sign Language. Hamburg: Signum.
Fabb, Nigel
1998 Compounding. In: Spencer, Andrew/Zwicky, Arnold M. (eds.), The Handbook of Mor-
phology. Oxford: Blackwell, 66⫺83.
Fernald, Theodore B./Napoli, Donna Jo
2000 Exploitation of Morphological Possibilities in Signed Languages. In: Sign Language &
Linguistics 3(1), 3⫺58.
Fischer, Renate
2002 The Study of Natural Sign Language in Eighteenth-century France. In: Sign Language
Studies 2, 391⫺406.
Frishberg, Nancy/Gough, Bonnie
1973 Morphology in American Sign Language. In: Salk Institute Working Paper.
Fuentes, Mariana/Tolchinsky, Liliana
2004 The Subsystem of Numerals in Catalan Sign Language. In: Sign Language Studies 5(1),
94⫺117.
Haspelmath, Martin
2001 Word Classes and Parts of Speech. In: Baltes, Paul B./Smelser, Neil J. (eds.), Interna-
tional Encyclopedia of the Social & Behavioral Sciences Vol. 24. Amsterdam: Pergamon,
16538⫺16545.
Hockett, Charles F.
1960 The Origins of Speech. In: Scientific American 203, 89⫺96.
Hunger, Barbara
2006 Noun/Verb Pairs in Austrian Sign Language (ÖGS). In: Sign Language & Linguistics
9(1/2), 71⫺94.
Johnston, Trevor/Schembri, Adam
1999 On Defining Lexeme in a Signed Language. In: Sign Language & Linguistics 2(2),
115⫺185.
Johnston, Trevor
2001 Nouns and Verbs in Australian Sign Language: An Open and Shut Case? In: Journal
of Deaf Studies and Deaf Education 6(4), 235⫺257.
Kennedy, Graeme (ed.)
2002 A Concise Dictionary of New Zealand Sign Language. Wellington: Bridget Williams Books.
Kimmelman, Vadim
2009 Parts of Speech in Russian Sign Language: The Role of Iconicity and Economy. In:
Sign Language & Linguistics 12(2), 161⫺186.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kouwenberg, Silvia
2003 Twice as Meaningful: Reduplication in Pidgins, Creoles and other Contact Languages.
London: Battlebridge Publications.
Kyle, Jim G./Woll, Bencie
1985 Sign Language: The Study of Deaf People and their Language. Cambridge: Cambridge
University Press.
Lakoff, George/Johnson, Mark
1980 Metaphors we Live by. Chicago: University of Chicago Press.
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K.
1996 Numeral Incorporating Roots & Non-incorporating Roots in American Sign Language.
In: Sign Language Studies 92, 201⫺225.
Liddell, Scott K./Johnson, Robert E.
1986 American Sign Language Compound Formation Processes, Lexicalization, and Phonological
Remnants. In: Natural Language and Linguistic Theory 4, 445⫺513.
Machabee, Dominique
1995 Signs in Quebec Sign Language. In: Sociolinguistics in Deaf Communities 1, 29⫺61.
Marantz, Alec
1982 Re Reduplication. In: Linguistic Inquiry 13(3), 435⫺482.
McWhorter, John
1998 Identifying the Creole Prototype: Vindicating a Typological Class. In: Language 74,
788⫺818.
Meier, Richard P.
1982 Icons, Analogues, and Morphemes: The Acquisition of Verb Agreement in ASL. PhD
Dissertation, University of California, San Diego.
Meir, Irit
2003 Grammaticalization and Modality: The Emergence of a Case-marked Pronoun in Israeli
Sign Language. In: Journal of Linguistics 39(1), 109⫺140.
Meir, Irit
2004 Question and Negation in Israeli Sign Language. In: Sign Language & Linguistics 7(2),
97⫺124.
Meir, Irit/Sandler, Wendy
2008 A Language in Space: The Story of Israeli Sign Language. New York: Lawrence Erl-
baum.
Meir, Irit/Aronoff, Mark/Sandler, Wendy/Padden, Carol
2010 Sign Languages and Compounding. In: Scalise, Sergio/Vogel, Irene (eds.), Compounding.
Amsterdam: Benjamins, 301⫺322.
Nash, David G.
1986 Topics in Warlpiri Grammar. New York: Garland.
Nespor, Marina/Peña, Marcela/Mehler, Jacques
2003 On the Different Roles of Vowels and Consonants. In: Lingue e Linguaggio 2, 203⫺229.
Nespor, Marina/Sandler, Wendy
1999 Prosody in Israeli Sign Language. In: Language and Speech 42, 143⫺176.
Ó’Baoill, Donall/Matthews, Pat A.
2002 The Irish Deaf Community. Vol. 2: The Structure of Irish Sign Language. Dublin: The
Linguistics Institute of Ireland.
Padden, Carol
1988 Interaction of Morphology and Syntax in American Sign Language. New York: Garland.
Padden, Carol/Perlmutter, David
1987 American Sign Language and the Architecture of Phonological Theory. In: Natural
Language and Linguistic Theory 5, 335⫺375.
Pfau, Roland/Steinbach, Markus
2006 Pluralization in Sign and in Speech: A Cross-Modal Typological Study. In: Linguistic
Typology 10, 135⫺182.
Pizzuto, Elena/Corazza, Serena
1996 Noun Morphology in Italian Sign Language. In: Lingua 98, 169⫺196.
Romaine, Suzanne
1989 The Evolution of Linguistic Complexity in Pidgin and Creole Languages. In: Hawkins,
John A./Gell-Mann, Murray (eds.), The Evolution of Human Languages. Santa Fe Insti-
tute: Addison-Wesley Publishing Company, 213⫺238.
Sandler, Wendy
1989 Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign
Language. Dordrecht: Foris.
Sandler, Wendy
1993 Linearization of Phonological Tiers in American Sign Language. In: Coulter, Geoffrey
R. (ed.), Phonetics and Phonology. Vol. 3. Current Issues in ASL Phonology. San Diego,
CA: Academic Press, 103⫺129.
Sandler, Wendy
1996 A Negative Suffix in ASL. Paper Presented at the 5th Conference on Theoretical Issues
in Sign Language Research (TISLR), Montreal, Canada.
Sandler, Wendy
1999a Cliticization and Prosodic Words in a Sign Language. In: Kleinhenz, Ursula/Hall, Tracy
(eds.), Studies on the Phonological Word. Amsterdam: Benjamins, 223⫺254.
Sandler, Wendy
1999b The Medium and the Message: Prosodic Interpretation of Linguistic Content in Israeli
Sign Language. In: Sign Language & Linguistics 2 (2), 187⫺215.
Sandler, Wendy/Aronoff, Mark/Meir, Irit/Padden, Carol
2011 The Gradual Emergence of Phonological Form in a New Language. In: Natural
Language and Linguistic Theory 29, 503⫺543.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Schwager, Waldemar/Zeshan, Ulrike
2008 Word Classes in Sign Languages: Criteria and Classification. In: Studies in Language
32(3), 509⫺545.
Senghas, Ann
1995 Children’s Contribution to the Birth of Nicaraguan Sign Language. PhD Dissertation,
MIT.
Slobin, Dan
2008 Breaking the Molds: Signed Languages and the Nature of Human Language. In: Sign
Language Studies 8(2), 114⫺130.
Stokoe, William C.
1960 Sign Language Structure: An Outline of the Visual Communication Systems of the
American Deaf. Studies in Linguistics Occasional Papers 8. Buffalo: University of Buffalo
Press.
Stokoe, William C./Casterline, Dorothy C./Croneberg, Carl G.
1965 A Dictionary of American Sign Language on Linguistic Principles. Washington, DC:
Gallaudet College Press.
Supalla, Ted/Newport, Elissa
1978 How Many Seats in a Chair? The Derivation of Nouns and Verbs in American Sign
Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language
Research. New York: Academic Press, 91⫺132.
Supalla, Ted
1998 Reconstructing early ASL Grammar through Historic Films. Paper Presented at the
6th International Conference on Theoretical Issues in Sign Language Linguistics
(TISLR), Gallaudet University, Washington, D.C.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language. Cambridge: Cambridge University Press.
Taub, Sarah F.
2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cam-
bridge: Cambridge University Press.
Tobin, Yishai
2008 Looking at Sign Language as a Visual and Gestural Shorthand. In: Poznań Studies in
Contemporary Linguistics 44(1), 103⫺119.
Wilbur, Ronnie B.
1979 American Sign Language and Sign Systems: Research and Application. Baltimore: Uni-
versity Park Press.
Wilbur, Ronnie B.
2000 Phonological and Prosodic Layering of Nonmanuals in American Sign Language. In:
Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited. Mahwah, NJ:
Lawrence Erlbaum Associates, 215⫺244.
Zeshan, Ulrike
2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Ben-
jamins.
Zeshan, Ulrike
2002 Towards a Notion of ‘Word’ in Sign Languages. In: Dixon, Robert M. W./Aikhenvald,
Alexandra Y. (eds.), Word: A Cross-linguistic Typology. Cambridge: Cambridge Univer-
sity Press, 153⫺179.
Zeshan, Ulrike
2004a Interrogative Constructions in Signed Languages: Crosslinguistic Perspectives. In: Lan-
guage 80(1), 7⫺39.
Zeshan, Ulrike
2004b Hand, Head, and Face: Negative Constructions in Sign Languages. In: Linguistic Typol-
ogy 8(1), 1⫺58.
Zwicky, Arnold M./Pullum, Geoffrey K.
1983 Cliticization vs. Inflection: English N’T. In: Language 59, 502⫺513.

Irit Meir, Haifa (Israel)

6. Plurality
1. Introduction
2. Nouns and noun phrases
3. Pronouns, numeral incorporation, and number signs
4. Verb agreement and classifier verbs
5. Pluralization across modalities
6. Conclusion
7. Literature

Abstract
Both sign and spoken languages make use of a variety of plural marking strategies. The
choice of strategy depends on lexical, phonological, and morphosyntactic properties of
the sign to be modified. The description of basic plural patterns is supplemented by a
typological investigation of plural marking across sign languages. In addition, we discuss
the realization of the plural feature within noun phrases, the expression of plural with
pronouns as well as with agreement and classifier verbs, and the structure of number
systems in sign languages. Finally, we compare pluralization in spoken languages to the
patterns found in sign languages and account for the modality-specific properties of
plural formation in sign languages.

1. Introduction
The topic of this chapter is pluralization in sign language. All natural languages seem
to have means to distinguish a single entity (singular) from a number of entities (plu-
ral). This distinction is expressed by a difference in the grammatical category number.
Typically, the singular is the unmarked form, whereas the plural is the marked form,
which is derived from the singular by specific morphological operations such as affixa-
tion, stem internal change, or reduplication. Plural can be expressed on nouns, pro-
nouns, demonstratives, determiners, verbs, adjectives, and even prepositions. In this
chapter, we will be mainly concerned with singular and plural forms although many
languages have more fine-grained distinctions such as, for example, singular, dual, and
plural (but see sections 3 and 4 that show that sign languages also allow for more fine-
grained distinctions).
Patterns of plural marking have been described for a number of different sign lan-
guages: see Jones and Mohr (1975), Wilbur (1987), Valli and Lucas (1992), and Perry
(2004) for American Sign Language (ASL, also see chapters 7, 11, and 13); Skant et
al. (2002) for Austrian Sign Language (ÖGS); Sutton-Spence and Woll (1999) for Brit-
ish Sign Language (BSL, also see chapter 11); Perniss (2001) and Pfau and Steinbach
(2005b, 2006b) for German Sign Language (DGS); Heyerick and van Braeckevelt
(2008) and Heyerick et al. (2009) for Flemish Sign Language (VGT); Schmaling (2000)
for Hausa Sign Language (Hausa SL); Zeshan (2000) for Indopakistani Sign Language
(IPSL); Stavans (1996) for Israeli Sign Language (Israeli SL); Pizzuto and Corazza
(1996) for Italian Sign Language (LIS); Nijhof and Zwitserlood (1999) for Sign Lan-
guage of the Netherlands (NGT); and Kubuş (2008) and Zwitserlood, Perniss, and
Özyürek (2011) for Turkish Sign Language (TİD). Although there are many (brief)
descriptions of plural marking in individual sign languages (but only a few theoretical
analyses), a comprehensive (cross-modal) typological study on pluralization in the vis-
ual-manual modality is still lacking. Parts of this chapter build on Pfau and Steinbach
(2005b, 2006b), who provide a comprehensive overview of plural marking in DGS and
discuss typological variation and modality-specific and modality-independent aspects
of pluralization in sign languages.
In section 2, we start our investigation with the nominal domain and discuss plural
marking on nouns and noun phrases. We first describe the basic patterns of plural
marking, which are attested in many different sign languages, namely (two kinds of)
reduplication and zero marking. Then we discuss typological differences between sign
languages. In section 3, we address pronouns, number signs, and numeral incorporation.
Section 4 turns to the verbal domain and describes plural marking on agreement and
classifier verbs. Section 5 gives a brief typological survey of typical patterns of plural
formation in spoken languages and discusses similarities and differences between spo-
ken and sign languages. We also try to account for the modality-specific properties of
pluralization in sign languages described in the previous sections. Finally, the main
findings of this chapter are summarized in section 6.

2. Nouns and noun phrases


Descriptions of pluralization in many different sign languages show that within a single
sign language, various plural marking strategies may exist. On the one hand, certain
strategies such as reduplication and the use of numerals and quantifiers are attested in
most sign languages. On the other hand, sign languages differ from each other to a
certain degree with respect to the morphological realization of plural features. In this
section, we first discuss the realization of the plural feature on the noun (section
2.1). Then, we turn to pluralization and plural agreement within noun phrases (section
2.2). We illustrate the basic patterns with examples from DGS but also include exam-
ples from other sign languages to illustrate typological variation.

2.1. Nominal number inflection


Two general patterns of nominal plural formation that are mentioned frequently in the
literature are zero marking and reduplication (or, to be more precise, triplication, see
below) of the noun. Reduplication typically comes in two types: (i) simple reduplication
and (ii) sideward reduplication. Interestingly, both kinds of reduplication only apply
to certain kinds of nouns. We will see that the choice of strategy depends on phonologi-
cal features of the underlying noun (for phonological features, cf. chapter 3, Phonol-
ogy). Hence, we are dealing with phonologically triggered allomorphy and the plurali-
zation patterns in sign languages can be compared to phonologically constrained plural
allomorphy found in many spoken languages. We come back to this issue in section 5.

2.1.1. Phonological features and plural marking strategies

In (1), we provide an overview of the phonological features constraining nominal plural
marking in DGS (and many other sign languages) and the corresponding plural mark-
ing strategies (cf. Pfau/Steinbach 2005b, 2006b). As illustrated in (1), in DGS plural
marking, some of these features depend on others. The distinction between complex
and simple movement, for instance, is only relevant for non-body anchored nouns.
Moreover, the distinction between lateral and midsagittal place of articulation applies
only to non-body anchored nouns performed with a simple movement. Consequently,
we arrive at four different classes (1a⫺d) and potentially four different patterns of
plural marking. However, since all nouns phonologically specified for either complex
movement or body anchoring use the same pattern (zero marking) and reduplication
comes in two types, we have basically two strategies of plural marking altogether:
(i) (two kinds of) reduplication and (ii) zero marking.

(1) phonological feature ⫺ plural marking strategy
a. body anchored ⫺ zero marking
b. non-body anchored with (i) complex movement ⫺ zero marking
c. non-body anchored with (ii) simple movement and (iia) midsagittal place of articulation ⫺ simple reduplication
d. non-body anchored with (ii) simple movement and (iib) lateral place of articulation ⫺ sideward reduplication
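The conditional dependencies in (1) amount to a small decision procedure. The following sketch (illustrative Python only; the feature labels are simplified stand-ins for the phonological features discussed in the text, and the function name is invented) returns the plural marking strategy for a DGS noun:

def plural_strategy(body_anchored, complex_movement, place):
    # (1a): body anchored nouns allow only zero marking (e.g. woman)
    if body_anchored:
        return "zero marking"
    # (1b): complex movement (repeat, circle, alternating) also blocks
    # overt plural marking (e.g. bike)
    if complex_movement:
        return "zero marking"
    # (1c)/(1d): simple-movement nouns split by place of articulation
    if place == "midsagittal":
        return "simple reduplication"    # e.g. book
    if place == "lateral":
        return "sideward reduplication"  # e.g. child
    raise ValueError("unknown place of articulation")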

It will become clear in the examples below that plural reduplication usually involves
two repetitions. Moreover, various articulatory factors may influence the number of
repetitions: (i) the effort of production (more complex signs such as vase tend to be
repeated only once), (ii) the speed of articulation, and (iii) the syllable structure of a
mouthing that co-occurs with a sign since the manual and the non-manual part tend
to be synchronized (cf. Nijhof/Zwitserlood 1999; Pfau/Steinbach 2006b). In addition,
the prosodic structure may influence the number of repetitions, which seems to in-
crease in prosodically prominent positions, for instance, at the end of a prosodic do-
main or in a position marked as focus (Sandler 1999; cf. also chapter 13 on noun
phrases). Finally, we find some individual (and probably stylistic) variation among sign-
ers with respect to the number of repetitions. While some signers repeat the base
noun twice, others may either repeat it only once or three times. Although additional
repetitions may emphasize certain aspects of meaning, we assume that the distinction
between reduplication and triplication is not part of the morphosyntax of plural mark-
ing proper. Because triplication (i.e. two repetitions) appears to be the most common
pattern, the following discussion of the data is based on this pattern. To simplify mat-
ters, we will use the established term ‘reduplication’ to describe this specific morpho-
logical operation of plural marking in sign languages. We will address the difference
between reduplication and triplication in more detail in section 5 below. Let us first
have a closer look at the four classes listed in (1).

2.1.2. Zero marking

In DGS, body anchored nouns (1a) pattern with non-body anchored nouns which are
lexically specified for a complex movement (1b) in that both types do not permit the
overt morphological realization of the plural feature. In both cases, zero marking is
the only grammatical option. As can be seen in Figures 6.1 and 6.2 below, simple as
well as sideward reduplication leads to ungrammaticality with these nouns. Note that
in the glosses, plural reduplication is indicated by ‘++’, whereby every ‘+’ represents
one repetition of the base form. Hence the ungrammatical form woman++ in Fig-
ure 6.1b would be performed three times in total. ‘>’ indicates a sideward movement,
that is, the combination of both symbols ‘>+>+’ stands for sideward plural reduplica-
tion. The direction of sideward movement depends on the handedness of the signer.
Obviously, in DGS, phonological features may block overt plural marking. Both
kinds of plural reduplication are incompatible with the inherent place of articulation
feature body anchored and the complex movement features repeat, circle, and alternating.

Fig. 6.1: Plural marking with the body anchored noun woman in DGS. Copyright © 2005 by
Buske Verlag. Reprinted with permission.

Fig. 6.2: Plural marking with the complex movement noun bike in DGS. Copyright © 2005 by
Buske Verlag. Reprinted with permission.

Like many other morphological processes in sign languages, such as agreement (cf.
chapter 7) or reciprocal marking (Pfau/Steinbach 2003), plural marking is also con-
strained by phonological features of the underlying sign. We come back to the influ-
ence of phonology in section 5. Interestingly, the features that block plural reduplica-
tion do not block similar kinds of reduplication in aspectual and reciprocal marking.
Hence, it appears that certain phonological features only constrain specific morphologi-
cal processes (Pfau/Steinbach 2006b).

2.1.3. Reduplication

So far, we have seen that reduplication is not an option for DGS nouns that are body
anchored or involve complex movement. By contrast, non-body anchored midsagittal
and lateral nouns permit reduplication. Figures 6.3 and 6.4 illustrate that for symmetri-
cal midsagittal nouns such as book, the plural form is marked by simple reduplication
of the whole sign, whereas the crucial morphological modification of non-body an-
chored lateral nouns such as child is sideward reduplication. Sideward reduplication
is a clear example of partial reduplication since the reduplicant(s) are performed with
a shorter movement. The case of simple reduplication is not as clear. Typically, the
reduplicant(s) are performed with the same movement as the base; in this case, simple
reduplication would be an example of complete reduplication. Occasionally, however,
the reduplicant(s) are performed with a reduced movement and thus, we are dealing
with partial reduplication.

Fig. 6.3: Plural marking with the midsagittal noun book in DGS. Copyright © 2005 by Buske
Verlag. Reprinted with permission.

Fig. 6.4: Plural marking with the lateral noun child in DGS. Copyright © 2005 by Buske Verlag.
Reprinted with permission.
Note that body anchored nouns denoting human beings have an alternative way of
plural marking that involves reduplication. The plural form of nouns like woman, man,
or doctor can be formed by means of the noun person. Since person is a one-handed
lateral sign, its plural form in (2) involves sideward reduplication. Syntactically, person
is inserted right-adjacent to the noun. Semantically, person is simply an alternative
plural marker for a specific class of nouns without additional meaning.

(2) woman person>+>+ [DGS]
    ‘women’

2.1.4. Typological variation

The basic strategies described for DGS are also found in many other sign languages
(see the references listed at the beginning of this chapter). Typologically, reduplication
and zero marking seem to be the basic strategies of plural marking across sign lan-
guages. Likewise, the constraints on plural formation are very similar to the ones de-
scribed for DGS. In BSL, for example, pluralization also involves reduplication and
sideward movement. According to Sutton-Spence and Woll (1999), the plural form of
some nouns is marked by a ‘distributive bound plural morpheme’, which triggers two
repetitions (i.e. triplication) of the underlying noun. Both repetitions are performed in
different locations. Like sideward reduplication in DGS, sideward reduplication in BSL
is only possible with non-body anchored nouns and signs without inherent complex
movement. The plural of body anchored nouns and nouns with complex movement is
marked without any reduplication, i.e. the only remaining option for these nouns is
zero marking. Likewise, Pizzuto and Corazza (1996) describe pluralization patterns for
LIS, which are very similar to those described for DGS and BSL. Again, reduplication
is the basic means of plural formation. Pizzuto and Corazza also distinguish between
body anchored nouns and nouns signed in the neutral sign space. The latter are subdi-
vided into signs involving simple movement and signs involving complex movement.
As in DGS and BSL, reduplication is only possible for signs performed in the neutral
sign space without complex movement.
Although the patterns of plural formation appear to be strikingly similar across sign
languages, we also find some variation, which mainly results from differences in the
phonological restrictions on plural formation and the available manual and non-man-
ual plural markers. A typological difference in the phonological restrictions can be
found between DGS, on the one hand, and ASL and NGT, on the other. Unlike DGS,
NGT allows simple reduplication of at least some body anchored nouns like glasses
and man (cf. Nijhof/Zwitserlood 1999; Harder 2003; Pfau/Steinbach 2006b). In DGS,
simple reduplication is neither possible for the phonologically identical sign glasses,
nor for the phonologically similar sign man. While there are differences with respect
to the behavior of body anchored nouns, nouns with inherent complex movement and
nouns performed in the lateral sign space or symmetrically to the midsagittal plane
seem to behave alike in DGS and NGT. Only the latter permit sideward reduplication
in both sign languages.
ASL also differs from DGS in that reduplication in plural formation is less con-
strained. Moreover, ASL uses additional plural marking strategies. Only one of the
four strategies of plural formation in ASL discussed in Wilbur (1987) is also found in
DGS. The first strategy applies to nouns articulated with one hand at a location on the
face. With these nouns the plural form is realized by repeating the sign alternately with
both hands. The second strategy applies to nouns that make contact with some body
part or involve a change of orientation. In this case, the plural form is realized by
reduplication. Typically, a horizontal arc path movement is added. The third strategy
holds for nouns that involve some kind of secondary movement. Such nouns are plural-
ized without reduplication by continuing the secondary movement (and possibly by
adding a horizontal arc path movement). The fourth strategy is similar to that which
has been described for DGS above: nouns that have inherent repetition of movement
in their singular form cannot undergo reduplication. Hence, in contrast to DGS, ASL
permits plural reduplication of some body anchored nouns and nouns with complex
movement and has a specific plural morpheme, i.e. a horizontal arc path. Moreover,
plural reduplication of secondary movements is only possible in ASL but not in DGS.
However, both languages permit sideward reduplication of lateral nouns and simple
reduplication of midsagittal nouns.
Skant et al. (2002) describe an interesting plural marking strategy in ÖGS which is
similar to the first strategy found in ASL. With some two-handed signs like high-rise-
building, in which both hands perform a parallel upward movement, the plural is
expressed by a repeated alternating movement of both hands. With one-handed nouns,
the non-dominant hand can be added to perform the alternating movement expressing
the plural feature. This strategy can be analyzed as a modality-specific stem internal
change. A similar strategy is reported in Heyerick and van Braeckevelt (2008) and
Heyerick et al. (2009), who mention that in VGT, two referents (i.e. dual) can be
expressed by articulating a one-handed sign with two hands, i.e. ‘double articulation’.
A non-manual plural marker has been reported for LIS (cf. Pizzuto/Corazza 1996).
With many body anchored nouns the plural form is signed with an accompanying head
movement from left to right (at least three times). In addition, each movement is
marked with a head-nod. Moreover, in LIS inherent (lexical) repetitions tend to be
reduced to a single movement if the non-manual head movement accompanies the
plural form of the noun.
Let us finally turn to two languages that mainly use the zero marking strategy. In
IPSL, all nouns can be interpreted as singular or plural because IPSL does not use
overt plural marking strategies such as simple or sideward reduplication (cf. Zeshan
2000). The interpretation of a noun depends on the syntactic and semantic context in
which it appears. Zeshan points out that the lateral noun child is the only noun in
IPSL with a morphologically marked plural form that occurs with some frequency. Just
like the phonologically similar lateral sign in DGS (cf. Figure 6.4 above), child in
IPSL also permits sideward reduplication. Likewise, Zwitserlood, Perniss, and Özyürek
(2011) report that TİD does not exhibit overt morphological marking of the plural
feature on the noun. Instead, plurality is expressed by a variety of spatial devices,
which reflect the topographic relations between the referents. These spatial devices
will be discussed in section 4 below in more detail. Zwitserlood, Perniss, and Özyürek
argue that although information about the number of referents falls out as a result of
the use of sign space, “the primary linguistic function of these devices is [...] not the
expression of plurality [...], but rather the depiction of referent location, on the one
hand, and predicate inflection, on the other hand”. They conclude that TİD, like IPSL,
does not have a productive morphological plural marker (but see Kubuş (2008) for a
different opinion).
The absence of overt plural marking in IPSL and TİD is, however, not exceptional.
We will see in the next subsection that in most sign languages, overt plural marking
(i.e. reduplication) is only possible if the noun phrase does not contain a numeral or
quantifier. Moreover, in contexts involving spatial localization, it is not the noun but
the classifier handshape that is (freely) reduplicated. Besides, Neidle (this volume)
argues that in ASL “reduplication may be correlated with prosodic prominence and
length” (cf. chapter 13 on noun phrases). Therefore, plural reduplication is more likely
to occur in prosodically prominent positions, i.e. in sentence-final position or in posi-
tions marked as focus. Consequently, reduplication is only grammatical for a small class
of nouns in a limited set of contexts and even with lateral and midsagittal nouns we
frequently find zero marking. Hence, reduplication is expected to be rare although it
is the basic morphological means of plural formation in sign languages (cf. also Baker-
Shenk/Cokely 1980).

2.1.5. Summary

Reduplication and zero marking appear to be two basic pluralization strategies in the
nominal domain attested in many different sign languages. Besides simple and sideward
reduplication, some sign languages have at their disposal (alternating) movement by
the non-dominant hand, reduplication of secondary movements, a horizontal arc path
movement, and non-manual means. The general phonological restrictions on overt plu-
ral marking seem to be very similar across sign languages: sideward reduplication is
restricted to lateral nouns and simple reduplication to midsagittal nouns. Nouns with
complex movement only allow zero marking. Only within the class of body anchored
nouns do we find some variation between languages: some sign languages permit sim-
ple reduplication of body anchored nouns, while others do not.

2.2. Pluralization and number agreement within noun phrases

This section deals with plural marking within the noun phrase, which is an important
domain for the realization of grammatical features such as gender, case, and number.
Therefore, in many languages, pluralization does not only affect nouns but also other
elements within the noun phrase such as determiners and adjectives. Moreover, we
find a considerable degree of variation in the realization of the number feature within
the noun phrase: while some languages show number agreement between nouns, adjec-
tives, and numerals or quantifiers, others do not. Here we focus on sign languages.
Spoken languages will be discussed in section 6. For number marking and number
agreement within the noun phrase, see also chapter 13 on noun phrases.
Languages with overt plural marking on head nouns have two options: they can
express the plural feature more than once within the noun phrase or they only express
plurality on one element within the noun phrase. In the latter case, plural is usually
(semantically) expressed by a numeral or quantifier and the head noun is not inflected
for number. Most sign languages belong to the second class of languages, i.e. languages
without number agreement within the noun phrase. In the previous subsection, we
have seen that in sign languages, plural reduplication is only found with some nouns
in some contexts and we already mentioned that one reason for this infrequency of
overt nominal plural marking is that simple and sideward reduplication is blocked
whenever a numeral or quantifier appears within the noun phrase, as is illustrated by
the DGS examples in (3ab). Similarly, in noun phrases containing an adjective, the
plural feature is only expressed on the head noun even if the adjective has all relevant
phonological properties for simple or sideward reduplication. Again, noun phrase in-
ternal number agreement is blocked (3c).

(3) a. * many child>+>+        a’. many child [DGS]
       ‘many children’             ‘many children’
    b. * five book++           b’. five book
       ‘five books’                ‘five books’
    c. * child>+>+ tall>+>+    c’. child>+>+ tall
       ‘tall children’             ‘tall children’
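
The pattern in (3) amounts to a simple well-formedness condition on DGS noun phrases. The following sketch states it explicitly; the boolean encoding of the noun phrase is a deliberate simplification introduced here for illustration only.

def np_plural_ok(head_reduplicated, has_numeral_or_quantifier=False,
                 adjective_reduplicated=False):
    """Check noun phrase internal plural marking against the DGS pattern in (3)."""
    if adjective_reduplicated:
        return False   # (3c): adjectives never agree in number
    if head_reduplicated and has_numeral_or_quantifier:
        return False   # (3a/b): numerals and quantifiers block reduplication
    return True

assert not np_plural_ok(True, has_numeral_or_quantifier=True)   # *many child>+>+
assert np_plural_ok(False, has_numeral_or_quantifier=True)      # many child
assert np_plural_ok(True)                                       # child>+>+ tall
assert not np_plural_ok(True, adjective_reduplicated=True)      # *child>+>+ tall>+>+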

The prohibition against number agreement within the noun phrase is a clear tendency
but not a general property of all sign languages. ASL and Israeli SL are similar to
DGS in this respect (Wilbur 1987; Stavans 1996). In ASL, for instance, quantifiers like
many, which are frequently used in plurals, also block overt plural marking on the head
noun. Nevertheless, sign languages, like spoken languages, also differ from each other
with respect to number agreement within the noun phrase. In NGT, ÖGS (Skant et al.
2002), LIS (Pizzuto/Corazza 1996), and Hausa SL (Schmaling 2000), number agree-
ment within the noun phrase seems to be at least optional.

2.3. Summary

Given the phonological restrictions on plural marking and the restrictions on number
agreement, plural reduplication is correctly predicted to be rare in simple plurals. Al-
though reduplication can be considered the basic morphological plural marker, it is
rarely found in sign languages since it is blocked by phonological and syntactic con-
straints (cf. also section 5 below). Table 6.1 illustrates the plural marking strategies and
the manual and non-manual plural markers used in different sign languages. ‘√’ stands
for overt marking and ‘:’ for zero marking. The strategy that seems to be typologically
less frequent or even nonexistent is given in parentheses. Note that Table 6.1 only
illustrates preliminary tendencies. More typological research is necessary to get a clearer pic-
ture of nominal plural marking in sign languages.

Tab. 6.1: Plural marking strategies in sign languages

                                      phonological feature
                      body anchored   complex movement   midsagittal     lateral
  noun                      :                :                √             √
                           (√)              (√)              (:)           (:)
  noun with numeral/        :                :                :             :
  quantifier                                                 (√)           (√)
  manual and           – simple         – simple         – simple       – sideward
  non-manual             reduplication    reduplication    reduplication   reduplication
  plural markers       – double         – horizontal     – alternating
                         articulation     arc path          movements
                       – alternating      movement
                         movements
                       – horizontal
                         arc path
                         movement
                       – head movement
                         and head nod

3. Pronouns, numeral incorporation, and number signs


In spoken languages, pronouns usually realize at least two morphological features,
namely person and number. Similarly, sign language pronouns also realize these two
features. As opposed to spoken languages, however, sign languages do not employ
distinct forms (cf. English I, you, he/she/it, we, you, they) but systematically use the
sign space to express person and number. Concerning person, there is a long-standing
debate whether sign languages distinguish second and third person. By contrast, the
realization of number on pronouns is more straightforward (for a more detailed discus-
sion of this issue, cf. McBurney (2002), Cormier (2007), and chapter 11, Pronouns).

3.1. Pronouns

Sign languages typically distinguish singular, dual, and distributive and collective plural
forms of pronouns. In the singular form, a pronoun usually points with the index finger
directly to the location of its referent in sign space (the R-locus). The number of
extended fingers can correspond to the number of referents. In DGS, the extended
index and middle finger are used to form the dual pronoun 2-of-us which oscillates
back and forth between the two R-loci of the referents the pronoun is linked to. In
some sign languages, the extension of fingers can be used to indicate up to nine refer-
ents. We come back to numeral incorporation below. The collective plural form of a
pronoun is realized with a sweeping movement across the locations in sign space associ-
ated with the R-loci of the referents. These R-loci can either be in front of the signer
(non-first person) or next to the signer including the signer (first person). By contrast,
the distributive form involves multiple repetitions of the inherent short pointing move-
ment of the pronoun along an arc. Plural pronouns are usually less strictly related to
the R-loci of their referents than singular pronouns. An interesting question is whether
sign languages have a privileged (lexicalized) dual pronoun, which is not derived by
numeral incorporation. The dual form seems to differ from number incorporated pro-
nouns. While the dual form is performed with a back and forth movement, pronouns
with numeral incorporation are performed with a circular movement. Moreover, num-
ber marking for the dual form seems to be obligatory, whereas the marking of three or
more referents by numeral incorporation appears to be optional (cf. McBurney 2002).

3.2. Numeral incorporation

A modality-specific property of sign languages is the specific kind of numeral incorpo-
ration found with pronouns, as illustrated in (4), and temporal expressions, as illus-
trated in (5). Numeral incorporation has been documented for various sign languages
(see Liddell (1996) for ASL, chapter 11 on pronouns for BSL, Perniss (2001) and
Mathur/Rathmann (2011) for DGS, Schmaling (2000) for Hausa SL, Zeshan (2000) for
IPSL, Stavans (1996) for Israeli SL, Zeshan (2002) for TİD, and Heyerick/van Braeck-
evelt (2008) and Heyerick et al. (2009) for VGT).

(4) Numeral incorporation with pronouns
    2-of-us, 3-of-us, …, 2-of-you, 3-of-you, …, 2-of-them, 3-of-them, … [DGS]
(5) Numeral incorporation with temporal expressions
    a. 1-hour, 2-hour, 3-hour, … [DGS]
    b. 1-week, 2-week, 3-week, …
    c. 1-year, 2-year, 3-year, …
    d. in-1-day, in-2-day, in-3-day, …
    e. before-1-year, before-2-year, before-3-year, …

Pronouns and temporal expressions have the ability to ‘incorporate’ the handshape of
numerals. Usually, the handshape corresponds to the numeral used in a sign language
(cf. below). Number incorporated pronouns are performed with a small circular move-
ment in the location associated with the group of referents. Because of the physical
properties of the two manual articulators, sign languages can in principle incorporate
numbers up to ten. With pronouns, five seems to be the preferred upper limit of incor-
poration (note, however, that examples with more than five are attested). With tempo-
ral expressions, examples that incorporate numbers up to ten are more frequent. The
specific restrictions on pronominal numeral incorporation may be related to the follow-
ing difference between pronouns and temporal expressions. Unlike temporal expres-
sions, number incorporated pronouns involve a small horizontal circular movement in
a specific location of the sign space. This particular movement between the R-loci the
pronoun is linked to is harder to perform with two hands and may therefore be blocked
for phonetic reasons (cf. also section 4 for phonetic blocking of plural forms of agree-
ment verbs). By contrast, temporal expressions are not linked to loci in the sign space.
Therefore, a two-handed variant is generally easier to perform. Finally note that pho-
nological properties of individual number signs such as the specific movement pattern
of ten in ASL can block numeral incorporation.
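
The distributional limits just described can be summarized in a small sketch. The gloss notation follows (4) and (5); the hard cut-offs of five and ten are idealizations of what the text reports as tendencies, and the sketch ignores language-particular phonological blocking such as ten in ASL.

def incorporate(numeral, base, kind):
    """Build a number incorporated gloss such as 3-of-us or 2-week.

    kind -- 'pronoun' (preferred upper limit: five, though higher values
            are attested) or 'temporal' (attested up to ten)
    """
    limit = 5 if kind == "pronoun" else 10
    if not 1 <= numeral <= limit:
        raise ValueError("incorporation of %d is unusual for %ss" % (numeral, kind))
    # Note: the dual (e.g. 2-of-us) is arguably a distinct lexicalized form (cf. 3.1).
    return "%d-%s" % (numeral, base)

print(incorporate(3, "of-us", "pronoun"))   # 3-of-us
print(incorporate(2, "week", "temporal"))   # 2-week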

3.3. Number signs

So far, we have seen that numeral incorporation targets the handshape of the corre-
sponding number sign. But where do the number signs come from? Number systems
of sign languages are constrained by the physical properties of the articulators. Since
sign languages use two manual articulators with five fingers each, they can directly
express the numbers 1 to 10 by extension of the fingers. Hence, the number systems
used in many sign languages have a transparent gestural basis. For number systems in
different sign languages, see Leybaert and van Cutsem (2002), Iversen, Nuerk, and
Willmes (2004), Iversen et al. (2006), Iversen (2008), Fernández-Viader and Fuentes
(2008), McKee, McKee, and Major (2008), and Fuentes et al. (2010).
Since the manual articulators have 10 fingers, the base of many sign language num-
ber systems is usually 10. The DGS number system is based on 10 with a sub base of
5. By contrast, ASL uses a number system that is only based on 10. In addition to this
typological variation, we also find variation within a system. This ‘dialectal’ variation
may affect the use of extended fingers, the use of movement to express numbers higher
than 10, or idiosyncratic number signs. Let us consider the number system of DGS
first. The first five numbers are realized through finger extension on the dominant
hand. one is expressed with one finger extended (either thumb or index finger), two
with two fingers extended (either thumb and index finger or index and middle finger),
three with three fingers extended (thumb, index and middle finger), and four with
four fingers extended (either thumb to ring finger or index finger to pinky). Finally,
five is expressed with all five fingers extended. The number signs six to ten are expressed
on two hands. The non-dominant hand has all five fingers extended and the dominant
hand expresses six to ten just like one to five. Number signs for numbers higher than
10 are derived from this basis. In DGS, the number signs eleven, twelve, thirteen, …
as well as twenty, thirty, … and one-hundred, two-hundred, three-hundred … use
the same handshape as the basic number signs one to nine. In addition, they have a
specific movement expressing the range of the number (i.e. 11 to 19, 20 to 90, 100 to
900, or 1000 to 9000). The signs for 11 to 19 are, for example, performed either with a
circular horizontal movement or with a short movement, changing the facing of the
hand(s) (at the beginning of this short movement, the palm is facing the signer, at the
end it faces down) and the signs for 20 to 90 are produced with a repeated movement
of the extended fingers. Finally note that complex numbers like 25, 225, or 2225 are
composed of the basic number signs: 25 is, for instance, a combination of the signs
five and twenty. Exceptions are the numbers 22, 33, 44, …, which are expressed by
sideward reduplication of two, three, four, …
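
The sub-base structure just described amounts to simple arithmetic over extended fingers. The following sketch is an idealization that ignores the dialectal choice of which fingers are extended; it computes, for the basic numbers 1 to 10, how many fingers each hand extends in DGS.

def dgs_handshape(n):
    """Return (non-dominant, dominant) extended-finger counts for DGS 1-10."""
    if not 1 <= n <= 10:
        raise ValueError("only the basic numbers 1-10 are expressed directly")
    if n <= 5:
        return (0, n)      # one to five: dominant hand only
    return (5, n - 5)      # six to ten: non-dominant hand holds all five fingers

assert dgs_handshape(4) == (0, 4)    # four
assert dgs_handshape(7) == (5, 2)    # seven = five plus two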
As opposed to DGS, ASL only uses one hand to express the basic numbers 1 to 10.
one starts with the extended index finger, two adds the extended middle finger, three
the ring finger, four the pinky, and five the thumb. Hence, the ASL number sign for
five is identical to the corresponding sign in DGS. In ASL, the number signs for 6 to
9 are expressed through contact between the thumb and one of the other four fingers:
in six, the thumb has contact with the pinky, in seven with the ring finger, in eight
with the middle finger, and in nine with the index finger. ten looks like one version
of one in DGS, i.e. only the thumb is extended. In addition, ten has a horizontal
movement of the wrist. Other one-handed number systems differ from ASL in that
they use the same signs for the numbers 6 to 9 as one variant in DGS uses for 1 to 5:
six is expressed with the extended thumb, seven with the extended thumb and index
finger, eight with the extended thumb, index, and middle finger, … In ASL, higher
numbers are expressed by producing the signs for the digits in linear order, i.e. ‘24’ =
two + four, ‘124’ = one + two + four. Note that the number system of ASL, just like
that of DGS, also shows some dialectal variation.
A comparison of DGS and ASL shows that two-handed number systems like DGS
only use five different handshapes, whereas one-handed systems like ASL use ten
different handshapes. Moreover, the two-handed system of DGS expresses higher num-
bers through a combination of basic number and movement. The one-handed system
of ASL expresses higher numbers by a linear combination of the signs for the digits.
And finally, DGS, like German, expresses higher numbers by inversion (i.e. ‘24’ is four
+ twenty). In ASL, the linear order is never inverted.
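
The difference between the two systems for multi-digit numbers can be made explicit in a sketch. The sign names are illustrative English glosses only, and the DGS function covers just plain two-digit numbers, ignoring the reduplication exception for 22, 33, 44, … and signs above 99.

DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]
TENS = {2: "twenty", 3: "thirty", 4: "forty", 5: "fifty",
        6: "sixty", 7: "seventy", 8: "eighty", 9: "ninety"}

def asl_number(n):
    """ASL: sign the digits in linear order, e.g. '124' = one + two + four."""
    return [DIGITS[int(d)] for d in str(n)]

def dgs_two_digit(n):
    """DGS: German-style inversion, e.g. '24' = four + twenty."""
    tens, units = divmod(n, 10)
    if not (2 <= tens <= 9 and 1 <= units <= 9):
        raise ValueError("sketch covers plain two-digit numbers only")
    return [DIGITS[units], TENS[tens]]

assert asl_number(124) == ["one", "two", "four"]
assert dgs_two_digit(24) == ["four", "twenty"]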

4. Verb agreement and classifier verbs


In the last section, we have seen that in sign languages, pronouns can express number
by modification of movement (i.e. by the addition of a sweeping movement) or by
repetition of the pronoun (i.e. a distributed pointing motion towards multiple loca-
tions). In this section we will discuss two related phenomena: the plural forms of agree-
ment verbs and classifier verbs. We will see that both use the sign space in a similar
way to express plurality. A comprehensive overview of verb agreement can be found
in chapter 7. Classifier verbs are extensively discussed in Zwitserlood (2003), Benedicto
and Brentari (2004), and in chapter 8 on classifiers.

4.1. Verb agreement

In spoken and sign languages verb agreement seems to have primarily developed from
pronouns (for sign languages see Pfau/Steinbach 2006a, 2011). In both modalities, pro-
nominalization and verb agreement are related grammatical phenomena. Hence, it
comes as no surprise that agreement verbs use the same spatial means as pronouns
to express pluralization. Agreement verbs agree with the referential indices of their
arguments, which are realized in the sign space as R-loci. Verbs, like pronouns, have a
distributive and a collective plural form. The distributive form of plural objects is, for
instance, realized by multiple reduplication along an arc movement in front of the
signer. In some contexts, the reduplication can also be more random and with one-
handed agreement verbs, it can also be performed with both hands. The collective form
is realized with a sweeping movement across the locations associated with the R-loci,
i.e. by an arc movement without reduplication. The plural feature is thus realized spa-
tially in the sign space. In chapter 7, Mathur and Rathmann propose the following
realizations of the plural feature in verb agreement. According to (6), the singular
feature is unmarked and realized as a zero form. The marked plural feature encodes
the collective reading. The distributive plural form in (6ii) may be derived by means
of reduplication of the singular form (for a more detailed discussion, cf. chapter 7 on
verb agreement and the references cited there).

(6) Number
    i.  Features
        Plural (collective): [+pl] → horizontal arc (marked)
        Singular: [−pl] → Ø
    ii. Reduplication: exhaustive (distributive), dual
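
Stated procedurally, the realization of number on an agreement verb in (6) comes down to a three-way choice. The following sketch uses informal labels for the movement patterns discussed above; the encoding is illustrative, not Mathur and Rathmann's formalism.

def realize_number(plural, distributive=False):
    """Map an agreement verb's number feature to its spatial realization, per (6)."""
    if not plural:
        return "zero form"                        # [-pl]: singular, unmarked
    if distributive:
        return "reduplication along an arc"       # (6ii): exhaustive reading
    return "horizontal arc (sweeping movement)"   # (6i): [+pl], collective

assert realize_number(plural=False) == "zero form"
assert realize_number(plural=True) == "horizontal arc (sweeping movement)"
assert realize_number(plural=True, distributive=True) == "reduplication along an arc"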

Note that phonetic constraints may cause agreement gaps. Mathur and Rathmann
(2001, 2011) show that articulatory constraints block first person plural object forms
such as ‘give us’ or ‘analyze us’ in ASL or third person plural object forms with redupli-
cation of the verbs (i.e. distributive reading) like ask in ASL or tease in DGS (for
phonetic constraints, cf. also chapter 2, Phonetics).

4.2. Classifier verbs

Many spoken languages do not mark plural on the head noun but use specific numeral
classifier constructions. Sign languages also have so-called classifier constructions. They
make extensive use of classifier handshapes, which can be used with verbs of motion
and location. Sign language classifiers can be compared to noun class markers in spo-
ken languages. Classifier verbs are particularly interesting in the context of plural
marking since the plurality of an entity can also be expressed by means of a spatially
modified classifier verb. Consider the examples in Figures 6.5, 6.6, and 6.7, which show
the pluralization of classifier verbs. Figure 6.5 illustrates the sideward reduplication of
the classifier verb. In Figure 6.6, a simple sideward movement is added to the classifier
verb and in Figure 6.7 more random reduplications performed by both hands in alter-
nation are added.

Fig. 6.5: Sideward reduplication of a classifier verb in DGS. Copyright © 2005 by Buske Verlag.
Reprinted with permission.

Fig. 6.6: Simple sideward movement of a classifier verb in DGS.

Fig. 6.7: Random reduplication of a classifier verb in DGS. Copyright © 2005 by Buske Verlag.
Reprinted with permission.

Like verbal agreement inflection, the illustrated spatial modification of classifier
verbs is a clear instance of verbal plural inflection (for a detailed discussion of the
differences between classifier verbs in sign languages and numeral nominal classifica-
tion in spoken languages, cf. Pfau/Steinbach 2006b). Consequently, numerals or quanti-
fiers do not block the reduplication of the classifier handshapes. The examples in Fig-
ures 6.5 to 6.7 would also be grammatical if we added the quantifier many or the
numeral five (i.e. five bike clvertical.pl+>+>). Moreover, the spatial modification of
classifier verbs does not only express the plurality of the referent the classifier verb
agrees with. It usually also induces the additional semantic effect of a particular spatial
localization or arrangement of the referents.
Interestingly, the number of reduplications and the spatial localization of agreement
and classifier verbs are not grammatically restricted and can thus be modified more
freely. Therefore, the whole sign space can be used, as is illustrated in the examples in
Figures 6.5 to 6.7 above. If a right-handed signer wants to express that exactly five
bikes are standing in a certain spatial relation on the left, s/he can repeat the classifier
verb five times in the left part of the sign space. By contrast, the simple plural form of
lateral nouns is usually restricted to two repetitions and to the lateral area of the
sign space.
In section 2 we mentioned that in many sign languages midsagittal nouns such as
house or flower also permit sideward reduplication of the whole sign (cf. Figure 6.8).
With these nouns, the semantic effect described for classifier verbs is achieved by side-
ward reduplication of the whole sign. Hence, under certain circumstances, sideward
reduplication can also be found with midsagittal nouns.

Fig. 6.8: Sideward reduplication of midsagittal nouns in DGS. Copyright © 2005 by Buske Verlag.
Reprinted with permission.

However, in this case the unmarked plural form, i.e. simple reduplication, blocks the
simple plural interpretation.
Like sideward reduplication of classifier verbs, sideward reduplication of midsagittal
nouns does not only express a simple plurality of the entity the noun refers to, but also
a specific spatial configuration of these entities. Again, more than two repetitions and
the use of the whole sign space is possible.
The spatial interpretation of sideward reduplication of agreement and classifier
verbs and certain nouns is clearly modality-specific. Since sign languages make use of
the three-dimensional sign space, they have the unique potential to establish a relation
between plural reduplication and spatial localization of referents (for similar observa-
tions in LIS, NGT, BSL, and TİD, cf. Pizzuto/Corazza 1996; Nijhof/Zwitserlood 1999;
Sutton-Spence/Woll 1999; Zwitserlood/Perniss/Özyürek 2011).

5. Pluralization across modalities


Finally, in this section we compare the expression of plurality in sign languages to
pluralization in spoken languages. First we discuss constraints on plural marking in
spoken languages before we turn to differences in the constraints on plural marking
and in the output forms in both modalities.

5.1. Pluralization in spoken languages

Plural marking in spoken languages has some interesting similarities to plural marking
in sign languages (for a detailed discussion of spoken languages, cf. Corbett 2000). As
in sign languages, plural marking in spoken languages is determined by phonological
properties of the noun stem. Moreover, many spoken languages also use reduplication
to express the plural feature. In section 2, we have seen that reduplication is the basic
means of plural marking in sign languages. Sideward reduplication has been described
as a case of partial reduplication and simple reduplication as complete reduplication.
Likewise, in spoken languages, pluralization can also be realized by means of partial
and complete reduplication. Partial reduplication is illustrated in example (7a) from
Ilokano, where only the first syllable of the bisyllabic stem is reduplicated (Hayes/
Abad 1989, 357). The example from Warlpiri in (7b) is an example of complete redupli-
cation (Nash 1986, 130). Although both modalities use complete and partial reduplica-
tion as a means of plural marking, there are also two crucial differences: (i) only sign
languages allow for sideward reduplication since they use a three-dimensional sign
space and (ii) reduplication in sign languages usually involves two repetitions (i.e. tri-
plication) whereas reduplication in spoken languages usually only involves one repeti-
tion (but see Blust (2001) for some rare examples of triplication in spoken languages).

(7) a. púsa      a’. pus-púsa [Ilokano]
       ‘cat’         ‘cats’
    b. kurdu     b’. kurdu-kurdu [Warlpiri]
       ‘child’       ‘children’

There are two more obvious similarities between plural marking in both modalities: (i)
both sign and spoken languages use zero marking and, (ii) the form of a plural mor-
pheme may be determined by phonological properties of the stem. In German, for
instance, zero marking is quite common (i.e. Segel (‘sail’ and ‘sails’) or Fehler (‘mistake’
and ‘mistakes’)). Phonological restrictions can be found, for instance, in English and
Turkish. In English, the plural suffix /z/ assimilates to the feature [±voice] of the preced-
ing phoneme, i.e. [z] in dogs but [s] in cats. In Turkish, suffix vowels harmonize with
the last vowel of the stem with respect to certain features. In pluralization, the relevant
feature for the plural suffix -ler is [±back], i.e. ev-ler (‘houses’) but çocuk-lar (‘chil-
dren’).
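
Both restrictions are strictly local and phonological, as a sketch makes clear. It is deliberately simplified: English is reduced to the [±voice] alternation mentioned above (ignoring the extra syllabic allomorph after sibilants), and the Turkish vowel classification is approximated over orthography.

VOICED = set("aeiou") | set("bdgvzmnlrw")     # crude voicing classification

def english_plural_suffix(stem):
    """Voicing assimilation: [z] after voiced segments, [s] after voiceless ones."""
    return "z" if stem[-1] in VOICED else "s"

BACK = set("aıou")                             # Turkish back vowels
FRONT = set("eiöü")                            # Turkish front vowels

def turkish_plural_suffix(stem):
    """Backness harmony: -lar after a back last vowel, -ler after a front one."""
    last_vowel = [c for c in stem if c in BACK | FRONT][-1]
    return "lar" if last_vowel in BACK else "ler"

assert english_plural_suffix("dog") == "z"      # dogs with [z]
assert english_plural_suffix("cat") == "s"      # cats with [s]
assert turkish_plural_suffix("ev") == "ler"     # ev-ler 'houses'
assert turkish_plural_suffix("çocuk") == "lar"  # çocuk-lar 'children'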
Besides these cross-modal similarities in nominal plural formation, there are two
obvious differences between spoken and sign languages. First, many spoken languages,
unlike sign languages, use affixation and word internal stem change as the basic means
of plural inflection. Affixation is illustrated in the English and Turkish examples above.
An example of stem change is the German word Mütter, which is the plural form of
Mutter (‘mother’). In this example, the plural is only marked by the umlaut, i.e. a stem
internal vowel change. In sign languages, stem-internal changes, which are frequently
observed in other morphological operations, are rarely used for plural marking. Simul-
taneous reduplication of the sign by the non-dominant hand (as attested, for instance,
with some ÖGS signs) is an exception to this generalization. Likewise, sign languages
do not use plural affixes – one exception might be the horizontal arc path movement
that is used to express plurality in some sign languages (cf. section 2). The lack of
affixation in plural marking in the visual-manual modality reflects a general tendency
of sign languages to avoid sequential affixation (cf. Aronoff/Meir/Sandler 2005).
Second, in spoken languages, the choice of a plural form is not always constrained
phonologically; it may also be constrained grammatically (i.e. by gender), semantically (i.e.
by semantically defined noun classes), or lexically (cf. Pfau/Steinbach 2006b). The choice of
the plural form in German is, for instance, to a large extent idiosyncratic and not determined by phono-
logical properties of the stem. This is illustrated by the German examples in (8). Al-
though the two words in (8ab) have the same rhyme, they take different plural suffixes.
In (8cd) we are dealing with two homonymous lexical items, which form their plural
by means of different suffixes where only the former is accompanied by umlaut (cf.
Köpcke 1993; Neef 1998, 2000).

(8) a. Haus      a’. Häus-er [German]
       ‘house’       ‘houses’
    b. Maus      b’. Mäus-e
       ‘mouse’       ‘mice’
    c. Bank      c’. Bänk-e
       ‘bench’       ‘benches’
    d. Bank      d’. Bank-en
       ‘bank’        ‘banks’

A further difference concerns number agreement. Unlike in most sign languages, plu-
rality can be realized more than once within a noun phrase in many spoken languages.
The English example in (9a) illustrates that some determiners display at least number
agreement with the head noun (but not with the adjective). The German example in
(9b) illustrates that within the noun phrase, plurality is usually expressed on all el-
ements on the left side of the head noun, i.e. the possessive and the adjective. Note
that in both languages, the presence of a numeral does not block number agreement
within the noun phrase.

(9) a. these (two) old cars [English]
    b. mein-e       zwei alt-en  Auto-s [German]
       1sg.poss-pl  two  old-pl  car-pl
       ‘my (two) old cars’

Other spoken languages pattern with sign languages. In Hungarian, for instance, the
head noun can only be marked for plural if the noun phrase does not contain a numeral
or quantifier, cf. (10) (Ortmann 2000, 251f). Hence, as in sign languages, plurality is
only indicated once within the noun phrase in these languages: without numer-
als and quantifiers, only the head noun inflects for plural. Multiple realization of the
plural feature within the noun phrase as in example (10c) leads to ungrammaticality
(cf. Ortmann 2000, 2004).

(10) a. hajó               a’. hajó-k [Hungarian]
        ship                   ship-pl
        ‘ship’                 ‘ships’
     b. öt/sok hajó        b’. *öt/sok hajó-k
        five/many ship         five/many ship-pl
        ‘five/many ships’      ‘five/many ships’
     c. gyors hajó-k       c’. *gyors-ak hajó-k
        fast ship-pl           fast-pl ship-pl
        ‘fast ships’           ‘fast ships’

Finally note that in some spoken languages, plural cannot be marked on the head noun
but must be marked on other elements within the noun phrase. In Japanese, for in-
stance, a noun does not morphologically inflect for the plural feature. Example (11a)
illustrates that plurality is marked within the noun phrase by means of numerals or
quantifiers, which are accompanied by numeral classifiers, cf. Kobuchi-Philip (2003).
In Tagalog, plurality is also expressed within the noun phrase by means of a number
word, i.e. mga, as illustrated in (11b), cf. Corbett (2000, 133f).

(11) a. [san-nin-no gakusei-ga] hon-o     katta [Japanese]
        3-cl-gen    student-nom book-acc  bought
        ‘Three students bought a book.’
     b. mga bahay      b’. mga tubig [Tagalog]
        pl  house          pl  water
        ‘houses’           ‘cups/units of water’

Spoken languages like Japanese and Tagalog resemble IPSL, where nouns cannot be redu-
plicated and the plural feature must be expressed by a numeral or quantifier. However,
unlike in Japanese and Tagalog, in most sign languages nouns can be overtly inflected
for plural; numerals and quantifiers merely block overt plural marking on the head
noun within the noun phrase.

5.2. Output forms

So far, we have discussed differences and similarities in the constraints on plural formation
in spoken and sign languages. Now we turn to the output of plural formation. In plural
formation, we do not only find examples of simple determination but also examples
of under-, over-, and hyperdetermination of the plural feature. Let us first consider
morphological underdetermination. Underdetermined plurals involve zero marking
and are attested in both modalities. The second category, simple determination, is quite
common in spoken languages since in affixation, stem internal change or reduplication,
one morphological marker is usually used to express the plural feature overtly (i.e.
an affix, a stem internal change, or a reduplicant respectively). By contrast, in sign
languages, there is no case of simple determination of the plural feature. Reconsider
midsagittal nouns, which typically allow simple reduplication. At first sight, the plural
form of the noun book in Figure 6.3 above looks like a case of simple determination.
The plural feature is only expressed once by means of reduplication. No additional
morphological marker is used. However, as already mentioned above, in sign languages
the base noun is not only repeated once but twice, i.e. it is triplicated. Actually, a single
repetition of the base noun would be sufficient to express the plural feature. Therefore,
triplication can be analyzed as an instance of the third category, i.e. overdetermination.
In spoken languages, overdetermination usually involves double marking (i.e. stem
change in combination with affixation) as illustrated in (8a’–c’) above. Double marking
clearly overdetermines the plural form since it would suffice to express the plural form
by one marker only. The fourth category, hyperdetermination, is only attested in sign
language pluralization. Recall that the plural form of lateral nouns such as child in
Figure 6.4 above combines triplication with sideward movement (i.e., the reduplicant is
not faithful to the base with respect to location features). This type of double overde-
termination can be categorized as an instance of hyperdetermination. While overdeter-
mination of morphosyntactic categories (e.g., number, agreement, or negation) is quite
common in spoken languages, hyperdetermination is rare.
The following table, taken from Pfau and Steinbach (2006b, 176), summarizes the
main similarities and differences in the strategies, quantities, and morphosyntax of
plural marking in both modalities. Recall that affixation and stem change may not be
completely absent in sign languages. Nevertheless, both morphological operations are at
least very rare.

Tab. 6.2: Plural marking in spoken and sign languages

                                             spoken languages    sign languages
plural marking: strategy
  zero marking                                      √                  √
  affixation                                        √                  ⫺
  reduplication                                     √                  √
  stem change                                       √                  ⫺
plural marking: quantity
  underdetermination                                √                  √
  simple determination                              √                  ⫺
  overdetermination                                 √                  √
  hyperdetermination                                ??                 √
expression of plural within the noun phrase
  use of numeral classifiers                        √/⫺                ⫺
  number agreement in the noun phrase               √/⫺                √/⫺

5.3. The impact of modality

How can we account for the differences between spoken and sign languages discussed
in the previous sections? The first obvious difference is that only spoken languages
frequently use affixation in plural formation. We already mentioned that the lack of
affixation in sign languages reflects a tendency of the visual-manual modality to avoid
sequential affixation (cf. Aronoff/Meir/Sandler 2005). Moreover, the use of sign space
in verb agreement and classifier verbs is also directly related to the unique property
of the visual-manual modality to use a three-dimensional sign space in front of the
signer to express grammatical or topographic relations. Another interesting difference
is that the two basic plural marking strategies in sign languages involve either over- or
hyperdetermination. Again, this difference seems to be due to specific properties of
the visual-manual modality (cf. Pfau/Steinbach 2006b). Over- and hyperdetermination
seem to increase the visual salience of signs in the sign space. Since much of the manual
signing is perceived in peripheral vision, triplication as well as spatial displacement
enhances phonological contrasts (cf. Siple 1978; Neville/Lawson 1987). In pluralization,
nouns seem to exploit as many of these options as they can. This line of argumentation
is supported by the claim that movements are functionally comparable to sonorous
sounds in spoken language. Sign language syllables can be defined as consisting of one
sequential movement. Triplication increases the phonological weight of the inflected
sign (for syllables in sign language, see chapter 3 on phonology). Another determining
factor might be that a fair number of signs already inherently involve lexical repetition.
Hence, triplication distinguishes lexical repetition from morphosyntactic modification
and is therefore a common feature in the morphosyntax of sign languages. Various
types of aspectual modification, for instance, also involve triplication (or even more
repetitions, cf. chapter 9 on Tense, Aspect, and Modality).
The clear tendency to avoid number agreement within noun phrases in sign lan-
guages can be related to modality-specific properties of the articulators. Sign language
articulators are relatively massive and move in the transparent sign space (Meier 2002).
This is true especially for the manual articulators involved in plural reduplication.
Therefore, an economy constraint might block reduplication of the head noun in noun
phrases whenever it is not necessary to express the plural feature (i.e. if the noun
phrase contains a numeral or quantifier). Likewise, the strong influence of phonologi-
cal features on plural formation can be explained by these specific properties of the
articulators. In sign languages, many morphological operations such as verb agreement,
classification, or reciprocity depend on phonological properties of the underlying stem
and many morphemes consist of just one phonological feature (cf. Pfau/Steinbach
(2005a) and chapter 3, Phonology; for similar effects on the interface between phonol-
ogy and semantics, cf. Wilbur (2010)).

6. Conclusion

We have illustrated that sign languages use various plural marking strategies in the
nominal and verbal domain. In the nominal domain, plurals are typically formed by
simple or sideward reduplication of the noun or by zero marking. Strictly speaking,
sign languages do not use reduplication but triplication, i.e. two repetitions of the base
sign. Besides, some sign languages have specific strategies at their disposal such as an
additional sweep movement, movement alternation or non-manual markers. In all sign
languages investigated so far, the nominal strategies are basically constrained by pho-
nological properties of the underlying nominal stem. Another typical property of many
(but not all) sign languages is that plural reduplication of the head noun is blocked if
the noun phrase contains a numeral or quantifier. Consequently, reduplication is only
possible in bare noun phrases and therefore predicted to be infrequent. In the verbal
domain, sign languages make use of the sign space to inflect agreement and classifier
verbs for plural.
The comparison of sign languages to spoken languages has revealed that there are
some common strategies of pluralization in both modalities but also some modality-
specific strategies and restrictions. Among the strategies both modalities choose to
mark plurality on nouns are reduplication and zero marking. By contrast, affixation
and stem internal changes are a frequent means of spoken language pluralization but
not (or only rarely) found in sign language pluralization. Another similarity between
both modalities is that the choice of strategy may depend on phonological properties
of the underlying noun. Moreover, in both modalities, noun phrase internal number
agreement may be blocked. However, while in sign languages number agreement
within the noun phrase seems to be the exception, number agreement is quite common
in many spoken languages. And finally, while under- and overdetermination can be
found in both modalities, simple determination is attested only in spoken languages
and hyperdetermination only in sign languages.
Of course, much more research on the typology of pluralization in sign languages
is necessary in order to document and understand the extent of phonological, morpho-
logical, and syntactic variation across different sign languages and across spoken and
sign languages.

7. Literature

Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81, 301⫺344.
Baker-Shenk, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver
Spring, MD: T.J. Publishers.
Benedicto, Elena/Brentari, Diane
2004 Where Did All the Arguments Go? Argument-changing Properties of Classifiers in
ASL. In: Natural Language & Linguistic Theory 22, 743⫺810.
Blust, Robert
2001 Thao Triplication. In: Oceanic Linguistics 40, 324⫺335.
Corbett, Greville G.
2000 Number. Cambridge: Cambridge University Press.
Cormier, Kearsy
2007 Do All Pronouns Point? Indexicality of First Person Plural Pronouns in BSL and ASL.
In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Compara-
tive Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 63⫺101.
Fernández-Viader, María del Pilar/Fuentes, Mariana
2008 The Systems of Numerals in Catalan Sign Language (LSC) and Spanish Sign Language
(LSE): A Comparative Study. In: Quadros, Ronice M. de (ed.), Sign Languages: Spin-
ning and Unraveling the Past, Present, and Future. Forty-five Papers and Three Posters
from the 9th Theoretical Issues in Sign Language Research Conference, Florianopolis,
Brazil, December 2006. Petrópolis: Editora Arara Azul. [Available at: www.editora-
arara-azul.com.br/EstudosSurdos.php].
Fuentes, Mariana/Massone, María Ignacia/Fernández-Viader, María del Pilar/Makotrinsky, Ale-
jandro/Pulgarín, Francisca
2010 Numeral-incorporating Roots in Numeral Systems: A Comparative Analysis of Two
Sign Languages. In: Sign Language Studies 11, 55⫺75.
Harder, Rita
2003 Meervoud in de NGT. Manuscript, Nederlands Gebarencentrum.
Hayes, Bruce/Abad, May
1989 Reduplication and Syllabification in Ilokano. In: Lingua 77, 331⫺374.
Heyerick, Isabelle/Braeckevelt, Mieke van
2008 Rapport Onderzoeksmethodologie Meervoudsvorming in Vlaamse Gebarentaal.
Vlaamse GebaarentaalCentrum (vgtC), Antwerpen.
Heyerick, Isabelle/Braeckevelt, Mieke van/Weerdt, Danny de/Van der Herreweghe, Mieke/Ver-
meerbergen, Myriam
2009 Plural Formation in Flemish Sign Language. Paper Presented at Current Research in
Sign Linguistics (CILS), Namur.
Iversen, Wiebke
2008 Keine Zahl ohne Zeichen. Der Einfluss der medialen Eigenschaften der DGS-Zahlzei-
chen auf deren mentale Verarbeitung. PhD Dissertation, University of Aachen.
Iversen, Wiebke/Nuerk, Hans-Christoph/Jäger, Ludwig/Willmes, Klaus
2006 The Influence of an External Symbol System on Number Parity Representation, or
What’s Odd About 6? In: Psychonomic Bulletin & Review 13, 730⫺736.
What’s Odd About 6? In: Psychonomic Bulletin & Review 13, 730⫺736.
Iversen, Wiebke/Nuerk, Hans-Christoph/Willmes, Klaus
2004 Do Signers Think Differently? The Processing of Number Parity in Deaf Participants.
In: Cortex 40, 176⫺178.
Jones, M./Mohr, K.
1975 A Working Paper on Plurals in ASL. Manuscript, University of California, Berkeley.
Kobuchi-Philip, Mana
2003 Syntax and Semantics of the Japanese Floating Numeral Quantifier. Paper Presented
at Incontro di Grammatica Generativa XXIX, Urbino.
Köpcke, Klaus-Michael
1993 Schemata bei der Pluralbildung im Deutschen: Versuch einer kognitiven Morphologie.
Tübingen: Narr.
Kubuş, Okan
2008 An Analysis of Turkish Sign Language (TİD) Phonology and Morphology. MA Thesis,
The Middle East Technical University, Ankara.
Leybaert, Jacqueline/Cutsem, Marie-Noelle van
2002 Counting in Sign Language. In: Journal of Experimental Child Psychology 81, 482⫺501.
McBurney, Susan Lloyd
2002 Pronominal Reference in Signed and Spoken Language: Are Grammatical Categories
Modality-dependent? In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.),
Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge Uni-
versity Press, 329⫺369.
McKee, David/McKee, Rachel/Major, George
2008 Sociolinguistic Variation in NZSL Numerals. In: Quadros, Ronice M. de (ed.), Sign
Languages: Spinning and Unraveling the Past, Present, and Future. Forty-five Papers
and Three Posters from the 9th Theoretical Issues in Sign Language Research Conference,
Florianopolis, Brazil, December 2006. Petrópolis: Editora Arara Azul. [Available at:
www.editora-arara-azul.com.br/EstudosSurdos.php].
Mathur, Gaurav/Rathmann, Christian
2001 Why not ‘give-us’: An Articulatory Constraint in Sign Languages. In: Dively, Valerie/
Metzger, Melanie/Taub, Sarah/Baer, Anne-Marie (eds.), Signed Languages: Discoveries
from International Research. Washington, DC: Gallaudet University Press, 1⫺26.
Mathur, Gaurav/Rathmann, Christian
2010 Verb Agreement in Sign Language Morphology. In: Brentari, Diane (ed.), Sign Lan-
guages (Cambridge Language Surveys). Cambridge: Cambridge University Press,
173⫺196.
Mathur, Gaurav/Rathmann, Christian
2011 Two Types of Nonconcatenative Morphology in Signed Language. In: Mathur, Gaurav/
Napoli, Donna Jo (eds.), Deaf Around the World. Oxford: Oxford University Press,
54⫺82.
Meier, Richard
2002 Why Different, Why the Same? Explaining Effects and Non-effects of Modality Upon
Linguistic Structure in Sign and Speech. In: Meier, Richard/Cormier, Kearsy/Quinto-
Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cam-
bridge: Cambridge University Press, 1⫺25.
Neef, Martin
1998 The Reduced Syllable Plural in German. In: Fabri, Ray/Ortmann, Albert/Parodi, Teresa
(eds.), Models of Inflection. Tübingen: Niemeyer, 244⫺265.
Neef, Martin
2000 Morphologische und syntaktische Konditionierung. In: Booij, Geert et al. (ed.), Mor-
phologie: Ein internationales Handbuch zur Flexion und Wortbildung. Berlin: de Gruy-
ter, 473⫺484.
Neville, Helen J./Lawson, Donald S.
1987 Attention to Central and Peripheral Visual Space in a Movement Detection Task: An
Event-related Potential and Behavioral Study (Parts I, II, III). In: Brain Research 405,
253⫺294.
Event-related Potential and Behavioral Study (Parts I, II, III). In: Brain Research 405,
253⫺294.
Nijhof, Sibylla/Zwitserlood, Inge
1999 Pluralization in Sign Language of the Netherlands (NGT). In: Don, Jan/Sanders, Ted
(eds.), OTS Yearbook 1998⫺1999. Utrecht: Utrechts Instituut voor Linguistiek OTS,
58⫺78.
Ortmann, Albert
2000 Where Plural Refuses to Agree: Feature Unification and Morphological Economy. In:
Acta Linguistica Hungarica 47, 249⫺288.
Ortmann, Albert
2004 A Factorial Typology of Number Marking in Noun Phrases: The Tension of Economy
and Faithfulness. In: Müller, Gereon/Gunkel, Lutz/Zifonun, Gisela (eds.), Explorations
in Nominal Inflection. Berlin: Mouton de Gruyter, 229⫺267.
Perniss, Pamela
2001 Numerus und Quantifikation in der Deutschen Gebärdensprache. MA Thesis, University
of Cologne.
Perry, Deborah
2004 The Use of Reduplication in ASL Plurals. MA Thesis, Boston University.
Pfau, Roland/Steinbach, Markus
2003 Optimal Reciprocals in German Sign Language. In: Sign Language & Linguistics 6,
3⫺42.
Pfau, Roland/Steinbach, Markus
2005a Backward and Sideward Reduplication in German Sign Language. In: Hurch, Bernhard
(ed.), Studies on Reduplication. Berlin: Mouton de Gruyter, 569⫺594.
Pfau, Roland/Steinbach, Markus
2005b Plural Formation in German Sign Language: Constraints and Strategies. In: Leuninger,
Helen/Happ, Daniela (eds.), Gebärdensprache. Struktur, Erwerb, Verwendung (Linguis-
tische Berichte Special Issue 13). Opladen: Westdeutscher Verlag, 111⫺144.
Pfau, Roland/Steinbach, Markus
2006a Modality-independent and Modality-specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 5⫺94.
Pfau, Roland/Steinbach, Markus
2006b Pluralization in Sign and in Speech: A Cross-modal Typological Study. In: Linguistic
Typology 10, 135⫺182.
Pfau, Roland/Steinbach, Markus
2011 Grammaticalization in Sign Languages. In: Heine, Bernd/Narrog, Heiko (eds.), Hand-
book of Grammaticalization. Oxford: Oxford University Press, 681⫺693.
Pizzuto, Elena/Corazza, Serena
1996 Noun Morphology in Italian Sign Language. In: Lingua 98, 169⫺196.
Sandler, Wendy
1999 The Medium and the Message: Prosodic Interpretation of Linguistic Content in Israeli
Sign Language. In: Sign Language & Linguistics 2(2), 187⫺215.
Schmaling, Constanze
2000 Maganar Hannu, Language of the Hands: A Descriptive Analysis of Hausa Sign Lan-
guage. Hamburg: Signum.
Siple, Patricia
1978 Visual Constraints for Sign Language Communication. In: Sign Language Studies 19,
97⫺112.
Skant, Andrea/Dotter, Franz/Bergmeister, Elisabeth/Hilzensauer, Marlene/Hobel, Manuela/
Krammer, Klaudia/Okorn, Ingeborg/Orasche, Christian/Orter, Reinhold/Unterberger, Natalie
2002 Grammatik der Österreichischen Gebärdensprache. Klagenfurt: Forschungszentrum für
Gebärdensprache und Hörgeschädigtenkommunikation.
Stavans, Anat
1996 One, Two, or More: The Expression of Number in Israeli Sign Language. In: Interna-
tional Review of Sign Linguistics 1, 95⫺114.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Valli, Clayton/Lucas, Ceil
1992 Linguistics of American Sign Language: An Introduction. Washington, DC: Gallaudet
University Press.
Wilbur, Ronnie
1987 American Sign Language: Linguistic and Applied Dimensions. Boston: Little,
Brown & Co.
Wilbur, Ronnie
2010 The Semantics-Phonology Interface. In: Brentari, Diane (ed.), Sign Languages (Cam-
bridge Language Surveys). Cambridge: Cambridge University Press, 357⫺382.
Zeshan, Ulrike
2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Ben-
jamins.
Zwitserlood, Inge
2003 Classifying Hand Configurations in Nederlandse Gebarentaal. Utrecht: LOT.
Zwitserlood, Inge/Perniss, Pamela/Özyürek, Aslı
2011 An Empirical Investigation of Plural Expression in Turkish Sign Language (TİD): Are
There Modality Effects? Manuscript, Radboud University Nijmegen and Max Planck
Institute for Psycholinguistics, Nijmegen.

Markus Steinbach, Göttingen (Germany)

7. Verb agreement
1. Introduction
2. Background on agreement
3. Realization of agreement
4. Candidacy for agreement
5. Conclusion: agreement in sign and spoken languages
6. Literature

Abstract

This chapter compares several theoretical approaches to the phenomenon often labeled
‘verb agreement’ in sign languages. The overall picture that emerges is that cross-modally,
there are both similarities and differences with respect to agreement. Sign languages seem
to be similar to spoken languages in that they realize the person and number features of
the arguments of the verbs through agreement, suggesting an agreement process that is
available to both modalities. However, there are two important cross-modal differences.
First, the agreement process in sign languages is restricted to a smaller set of verbs than
seen in many spoken languages. This difference may be resolved if this restriction is
taken to be parallel to other restrictions that have been noted in many spoken languages.
Second, the properties of agreement are more uniform across many sign languages than
across spoken languages. This peculiarity can be derived from yet another cross-modal
difference: certain agreement forms in sign languages require interaction with gestural
space. Thus, while the cross-modal differences are rooted in the visual-manual modality
of sign languages, sign and spoken languages are ultimately similar in that they both
draw on the agreement process.

1. Introduction

Fig. 7.1: Forms of ask in ASL. The form on the left corresponds to 'I ask you' while the form on
the right corresponds to 'you ask me'.

Figure 7.1 shows two forms of the verb ask in American Sign Language (ASL). The
form on the left means ‘I ask you’ while the form on the right means ‘you ask me’.
Both forms have similar handshape (crooking index finger) and similar shape of the
path of movement (straight), which constitutes the basic, lexical form for ask. The only
difference between these two forms lies in the orientation of the hand and the direction
of movement: the form on the left is oriented and moves towards an area to the signer’s
left, while the form on the right is oriented and moves towards the signer’s chest.
The phenomenon illustrated in Figure 7.1 is well documented in many sign lan-
guages, including, but not limited to, ASL (Padden 1983), Argentine Sign Language
(Massone/Curiel 2004), Australian Sign Language (Johnston/Schembri 2007), Brazilian
Sign Language (Quadros 1999), British Sign Language (Sutton-Spence/Woll 1999),
Catalan Sign Language (Quer/Frigola 2006), German Sign Language (Rathmann 2000),
Greek Sign Language (Sapountzaki 2005), Indo-Pakistani Sign Language (Zeshan
2000), Israeli Sign Language (Meir 1998), Japanese Sign Language (Fischer 1996),
Korean Sign Language (Hong 2008), Sign Language of the Netherlands (Bos 1994;
Zwitserlood/Van Gijn 2006), and Taiwanese Sign Language (Smith 1990).
Some researchers have considered the change in orientation and direction of move-
ment to mark verb agreement, since the difference between the two forms corresponds
to a difference in meaning that is often marked in spoken languages by person agree-
ment with the subject and object. However, such an analysis remains controversial and
has occupied a significant portion of the field of sign linguistics.
Labeling this phenomenon as ‘verb agreement’ comes with many theoretical as-
sumptions. In exploring these theoretical assumptions, this chapter addresses the core
issue of whether this phenomenon can indeed qualify as ‘verb agreement’ by taking
a particular angle: whether the phenomenon can be explained as the morphological
realization of verb agreement in sign languages.
For the purpose of this chapter, the following working definition of agreement is
adopted from Steele (1978): “The term agreement commonly refers to some systematic
covariance between a semantic or formal property of one element and a formal prop-
erty of another.” Corbett (2006) expands on this definition by specifying that there are
four main components to the systematic covariance, as listed in (1).

(1) Main components of agreement (Corbett 2006, 1)
    (i) controller (an element which determines agreement)
    (ii) target (an element whose form is determined by agreement)
    (iii) domain (the syntactic environment within which agreement occurs)
    (iv) features (the properties in which the target agrees with the controller)
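
Corbett's four components can be given a concrete, if toy, rendering. The following
Python sketch is purely illustrative (the names AgreementConfiguration and covaries
are invented here, not drawn from Corbett 2006): it records a controller, a target, a
domain, and a feature set, and checks the systematic covariance between controller
and target.

from dataclasses import dataclass

@dataclass
class AgreementConfiguration:
    controller: dict   # the element determining agreement, with its feature values
    target: dict       # the element whose form is determined, with realized values
    domain: str        # the syntactic environment, e.g. "clause"
    features: tuple    # the agreement features, e.g. ("person", "number")

    def covaries(self) -> bool:
        # Systematic covariance: the target matches the controller
        # in every agreement feature.
        return all(self.controller[f] == self.target[f] for f in self.features)

# English 'she sings': 3rd person singular subject, verb form realizing the same values.
config = AgreementConfiguration(
    controller={"person": 3, "number": "sg"},
    target={"person": 3, "number": "sg"},
    domain="clause",
    features=("person", "number"),
)
assert config.covaries()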

To examine whether the phenomenon in sign languages can be analyzed as verb agree-
ment, the chapter first provides a brief background on the phenomenon depicted in
Figure 7.1. Then, the following section discusses whether this phenomenon can be
analyzed as the morphological realization of person and number features and compares
several theoretical approaches to this issue. Next, on the assumption that the phenom-
enon is indeed the realization of person and number features, the chapter considers
cases when the features are not completely realized and focuses on the issue of deter-
mining which verbs realize these features. Again, this section takes into account the
latest theoretical analyses of this issue. The phenomenon is ultimately used as a case
study to identify linguistic properties that are common to both spoken and sign lan-
guages and to understand the effects of language modality on these properties.

2. Background on agreement

This section provides a brief background on verb agreement in sign languages for
those unfamiliar with the phenomenon. There are many detailed descriptions of the
phenomenon available (see, for example, Lillo-Martin/Meier (2011) and Mathur/Rath-
mann (2010) for a comprehensive description). Due to space limitations, the description is neces-
sarily condensed here.
First, not all verbs undergo a change in orientation and/or direction of movement
to show a corresponding change in meaning. As Padden (1983) observes for ASL,
there are three classes of verbs which she labels ‘plain verbs’, ‘agreeing verbs’, and
‘spatial verbs’, respectively. The above example of ask falls into the class of agreeing
verbs, which undergo the above-described phonological changes to reflect a change in
meaning (specifically, who is doing the action to whom). Spatial verbs, like, for exam-
ple, move, put, and drive, change the path of movement to show the endpoints of the
motion (e.g. I moved the piece of paper from here to there). Plain verbs may be inflected
for aspect, but otherwise cannot be changed in the same way as agreeing and spatial
verbs. Two ASL examples are cook and buy. The same tri-partite classification of verbs
has been confirmed in many other documented sign languages.
Within the class of ‘agreeing verbs’, verbs manifest the phenomenon shown in Fig-
ure 7.1 in different ways depending on their phonological shape. Some verbs like tell
mark only the indirect/direct object (called ‘single agreement’), while others like give
mark both the subject and indirect/direct object (called ‘double agreement’) (Meier
1982). Some verbs mark the subject and indirect/direct object by changing the orienta-
tion of the hands only (e.g. pity in ASL), while others show the change in meaning by
changing only the direction of movement (e.g. help in ASL), and yet others show the
change through both orientation and direction of movement (e.g. ask shown in Fig-
ure 7.1) (Mathur 2000; Mathur/Rathmann 2006). The various ways of manifesting the
phenomenon in Figure 7.1 have sometimes been subsumed under the term ‘direction-
ality’.
In addition to marking the changes in meaning through a change in the orientation
and/or direction of movement (i.e. through manual changes), other researchers have
claimed that it can also be marked non-manually through a change in eye gaze and
head tilt co-occurring with the verb phrase (Aarons et al. 1992; Bahan 1996; Neidle et
al. 2000). They claim in particular that eye gaze and head tilt mark object and subject
agreement respectively, while noting that these non-manual forms of agreement are
optional. Thompson, Emmorey, and Kluender (2006) sought to evaluate the claims
made by Neidle et al. (2000) by conducting an eye-tracking study. They found that eye
gaze was directed toward the area associated with the object referent for 74% of
agreeing verbs and for 11% of plain verbs. Since eye gaze did not consistently co-occur
with plain verbs as predicted by Neidle et al. (2000), Thompson et al. were led to
conclude that eye gaze does not obligatorily mark object agreement.

3. Realization of agreement
One foundational issue concerning the phenomenon illustrated in Figure 7.1 is whether
it can be understood as the realization of verb agreement, and if so, what are the
relevant features in the realization. There have been three general approaches to this
issue: the R-locus analysis (as articulated by Lillo-Martin/Klima 1990), the indicating
analysis (Liddell 2003), and the featural analysis (Padden 1983; Rathmann/Mathur
2008). For each approach, the section considers how the approach understands the
mechanics behind the realization of agreement (e.g. if it is considered 'agreement',
which elements agree with which other elements, and in which features). The issue of how the
phenomenon interacts with signing space is also discussed, as well as the implications
of this interaction for cross-linguistic uniformity.

3.1. R-locus analysis


The R-locus analysis was originally inspired by Lacy (1974). It was further articulated
by Lillo-Martin and Klima (1990) in the case of pronouns, applied by Meir (1998, 2002)
and Aronoff, Meir, and Sandler (2005) to the phenomenon under discussion, and fur-
ther elaborated on by Lillo-Martin and Meier (2011). In this analysis, each noun phrase
is associated with an abstract referential index. The index is a variable in the linguistic
system which receives its value from discourse and functions to keep the referent of
the noun phrase distinct from referents of other noun phrases. The index is realized in
the form of a locus, a point in signing space that is associated with the referent of the
noun phrase. This locus is referred to as a ‘referential locus’ or R-locus for short.
There are theoretically an infinite number of R-loci in signing space. By separating the
referential index, an abstract variable, from the R-locus, the analysis avoids the listabil-
ity issue, that is, it avoids the issue of listing each R-locus as a potential form in the
lexicon. For further discussion of the distinction between the referential index and the
R-locus, see Sandler and Lillo-Martin (2006) and Lillo-Martin and Meier (2011).
Following Meir (1998, 2002), Aronoff, Meir, and Sandler (2005) have extended the
R-locus analysis to the phenomenon in Israeli Sign Language (Israeli SL) and ASL
and compare it to literal alliterative agreement in spoken languages like Bainouk, a
Niger-Congo language, and Arapesh, a language spoken in Papua New Guinea. The
mechanics of alliterative agreement is one of a copying mechanism. As an example, an
initial consonant-vowel syllable of the noun phrase is copied onto an adjective or a
verb as an expression of agreement. Similarly, in Israeli SL and ASL, the R-loci of the
noun phrases are copied onto the verb as an expression of agreement. The ASL exam-
ple constructed below illustrates how the copying mechanism works.

(2) [ s-u-e ix_a ] [ b-o-b ix_b ] a_ask_b [ASL]
    (subscript a = R-locus for Sue; subscript b = R-locus for Bob)
    'Sue asked Bob a question.'

Under this analysis, the phenomenon is understood as 'agreement' between a noun
phrase and a verb in the sense that they share a referential index, which is realized
overtly as an R-locus. At the same time, Aronoff, Meir, and Sandler (2005, 321) con-
cede one difference from literal alliterative agreement in spoken languages: the R-loci
that (the referents of) nouns are associated with “are not part of their phonological
representations and are not lexical properties of the nouns in any way. Rather, they
are assigned to nouns anew in every discourse.”
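
The division of labour between referential indices and R-loci can be sketched in code.
The following Python fragment is a hypothetical model (the class Discourse and the
function agree are invented for illustration, not part of any cited analysis): loci are
assigned anew in each discourse, the lexicon never lists them, and the verb form is
derived by copying the loci associated with the indices of its arguments.

import itertools

class Discourse:
    """Assigns a fresh R-locus to each newly introduced referent; the
    linguistic system itself only manipulates the abstract indices."""
    def __init__(self):
        self._fresh = itertools.count()   # infinitely many loci, none stored in the lexicon
        self.locus_of = {}                # referential index -> R-locus

    def introduce(self, index: str) -> str:
        self.locus_of[index] = f"locus_{next(self._fresh)}"
        return index

def agree(verb: str, subj_index: str, obj_index: str, d: Discourse) -> str:
    # Copying mechanism: the verb spells out the R-loci of its arguments,
    # analogous to copying a syllable in alliterative agreement.
    return f"{d.locus_of[subj_index]}-{verb}-{d.locus_of[obj_index]}"

d = Discourse()
sue, bob = d.introduce("sue"), d.introduce("bob")
print(agree("ASK", sue, bob, d))   # 'locus_0-ASK-locus_1'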
While Aronoff et al. (2005) do not explicitly relate the mechanism to the realization
of person and number features, the analysis is compatible with the use of such features
if the features are assumed to be encoded as part of the referential index. One question
is how the analysis would handle the realization of the number feature if it has a plural
value (for plural see also chapter 6). The plural feature can be realized in many sign
languages in two ways: (i) reduplication along an arc (called the ‘exhaustive’ form by
Klima/Bellugi (1979)), which results in a distributive meaning, and (ii) movement along
an arc without reduplication (labeled as the ‘multiple’ form by Klima/Bellugi), which
results in a collective meaning. For the first type of realization, the analysis would need
to posit separate R-loci for each of the referents associated with the plural noun phrase,
while for the second type of realization, the entities referred to by the noun phrase
would be associated as a group with a single R-locus.
Cormier, Wechsler, and Meier (1998) use the theoretical framework of Head-driven
Phrase Structure Grammar (HPSG, Pollard/Sag 1994), a lexical-based approach, to
provide an explicit analysis of agreement as index-sharing. In this framework, the noun
phrase (NP) has a lexical entry which specifies the value of its index. The index is
defined with respect to the locus (a location in signing space), and the locus can be
one of three: the location directly in front of the signer’s chest (S), the location associ-
ated with the addressee (A), or ‘other’. This last category is further divided into distinct
locations in neutral space that are labeled as i, j, k, and so forth. Thus, they view the
locus as a phi-feature in ASL, which is a value of the index. The listability issue is
resolved if it is assumed that the index allows an infinite number of values. The possible
values for the index are summarized in (3).

(3) Index values in sign languages in HPSG framework (Cormier et al. 1998)
index: [LOCUS locus]
Partition of locus: S, A, other
Partition of other: i, j, k, ...

According to Cormier, Wechsler, and Meier (1998), a verb has a lexical entry that is
sorted according to single or double agreement and that includes specifications for
phonology (PHON) and syntax and semantics (SYNSEM). The SYNSEM component
contains the verb’s argument structure (ARG-ST) and co-indexes the noun phrases
with their respective semantic roles in CONTENT. For example, the verb see has an
argument structure of <NP1, NP2> and the content of [SEER1 and SEEN2]. NP1 is co-
indexed with SEER, and NP2 with SEEN by virtue of the underlined indexes. This
lexical entry is then expanded by a declaration specific to the verb’s sort (single- or
double-agreement), which specifies the phonological form according to the values of
the loci associated with the noun phrases in the argument structure (see Hahm (2006)
for a more recent discussion of person and number features within the HPSG frame-
work and Steinbach (2011) for a recent HPSG analysis of sign language agreement).
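
The shape of such an entry can be approximated as follows. This Python sketch is a
simplified stand-in for the HPSG formalization (the dictionary layout and the function
spell_out are invented here, not Cormier, Wechsler, and Meier's own notation): the
entry pairs an argument structure with co-indexed content roles, and a sort-specific
declaration derives the phonological form from the arguments' locus values.

# Locus values follow (3): "S" (signer), "A" (addressee), or "other:i", "other:j", ...
see_entry = {
    "PHON": "SEE",
    "SYNSEM": {
        "ARG-ST": ["NP1", "NP2"],                    # subject, object
        "CONTENT": {"SEER": "NP1", "SEEN": "NP2"},   # roles co-indexed with the NPs
    },
    "SORT": "double-agreement",
}

def spell_out(entry: dict, loci: dict) -> str:
    """Sort-specific declaration: realize the form according to the loci
    of the argument-structure NPs."""
    arg1, arg2 = entry["SYNSEM"]["ARG-ST"]
    if entry["SORT"] == "double-agreement":
        return f"{loci[arg1]}-{entry['PHON']}-{loci[arg2]}"
    return f"{entry['PHON']}-{loci[arg2]}"           # single agreement: object only

print(spell_out(see_entry, {"NP1": "S", "NP2": "other:i"}))   # 'S-SEE-other:i'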
Neidle et al. (2000) have similarly suggested that phi-features are the relevant fea-
tures for agreement, and that phi-features are realized by such loci. They envision
agreement as a feature-checking process as opposed to an index-copying or -sharing
process in the sense of Aronoff, Meir, and Sandler (2005) or Cormier, Wechsler, and
Meier (1998). McBurney (2002) describes the phenomenon for pronouns in a similar
way, although she reaches a different conclusion regarding the status of the phenom-
enon (see chapter 11 for discussion of pronouns).
A more recent perspective on the R-locus analysis comes from Lillo-Martin and
Meier (2011, 122), who argue “that directionality is a grammatical phenomenon for
person marking” and refer to “index-sharing analyses of it. The index which is shared
by the verb and its argument is realized through a kind of pointing to locations which
are determined on the surface by connection to para-linguistic gesture.”

3.2. Indicating analysis

Liddell (1990, 1995, 2000, 2003) challenges the R-locus analysis, arguing that verbs
which display the phenomenon illustrated in Figure 7.1 are best understood as being
directed to entities in mental spaces. Since these entities do not belong to the linguistic
system proper, Liddell does not consider the phenomenon to be an instance of verb
agreement. Rather, he calls such verbs ‘indicating verbs’, because the verbs ‘indicate’
or point to referents just as one might gesture toward an item when saying “I would
like to buy this”. Other sign language researchers such as Johnston and Schembri
(2007) have adopted Liddell’s analysis in their treatment of similar phenomena in Aus-
tralian Sign Language (Auslan).
Two key points have inspired Liddell to develop the ‘indicating analysis’. First, it is
not possible to list an infinite number of loci as agreement morphemes in the lexicon.
Second, an ASL sign like ‘give-to-tall person’ is directed higher in the signing space,
while ‘give-to-child’ is directed lower, as first noted by Fischer and Gough (1978).
The indicating analysis draws on mental space theory (Fauconnier 1985, 1997) to
generate connections between linguistic elements and mental entities. To illustrate the
mechanics of the indicating analysis, an example provided by Liddell and Metzger
(1998, 669) is given in Figure 7.2 and is reviewed here. Three mental spaces are re-
quired to account for one instance of look-at in ASL: a ‘cartoon space’ where the
interaction between the seated cat Garfield and his owner takes place; a Real space
containing mental representations of oneself and other entities in the immediate physi-
cal environment; and a grounded blend, which blends elements of the two spaces. In
this blended space, the ‘owner’ and ‘Garfield’ are mapped respectively from the
‘owner’ and ‘Garfield’ in the cartoon space. From Real space, the ‘signer’ is mapped
onto 'Garfield' in the blended space.

Fig. 7.2: Liddell and Metzger's (1998, 669) illustration of the mappings between three mental
spaces (cartoon space, Real space, and grounded blend). Copyright © 1998 by Elsevier.
Reprinted with permission.
Liddell (2003) assumes that verbs are lexically marked for whether they indicate a
single entity corresponding to the object (notated as VERB/y) or two entities corre-
sponding to the subject and the object, respectively (notated as VERBx/y). He pro-
poses a similar notation for other forms involving plurality, as well as for spatial verbs
(VERB/L, where L stands for location). Similarly, constraints on the process of agree-
ment, such as the restriction of the multiple form to the object, would have to be
encoded in the lexicon. The indicating analysis could account for the uniformity of the
properties surrounding the phenomenon across various sign languages by tying the
phenomenon to the act of gesturing toward entities, which is universally available to
every signer.
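
On this view, directionality is lexically listed rather than computed. A rough Python
sketch of the diacritic marking follows (hypothetical; the dictionary LEXICON and the
function indicate are invented here and are not Liddell's notation system):

# 'y' = indicate the entity mapped to the object; 'x/y' = indicate the
# entities mapped to subject and object; 'L' = indicate a location.
LEXICON = {
    "TELL": {"indicates": ["y"]},        # VERB/y: single indication
    "GIVE": {"indicates": ["x", "y"]},   # VERBx/y: double indication
    "MOVE": {"indicates": ["L"]},        # VERB/L: spatial verb
}

def indicate(verb: str, blend: dict) -> list:
    """Direct the verb at the grounded-blend entities its entry names."""
    return [blend[slot] for slot in LEXICON[verb]["indicates"]]

# A grounded blend mapping the slots onto entities in the immediate environment:
print(indicate("GIVE", {"x": "signer", "y": "addressee"}))   # ['signer', 'addressee']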
The indicating analysis does not assume a morphemic analysis of the phenomenon
in Figure 7.1 in terms of person and number features, yet lexicalizes them on some
verb entries, e.g. those involving plurality. If a large number of verbs display such
forms, the indicating analysis would need to explain why it is necessary to lexicalize
the forms rather than treating the realization of the plural feature as a morphological
process.

3.3. Featural analysis

Rathmann and Mathur (2002, 2008) provide another kind of analysis that is something
of a hybrid of the R-locus and indicating analyses. In a sense, the featural analysis harks
back to the original analysis of Padden (1983) and suggests that verbs agree with the
subject and the object in the morphosyntactic features of person and number (cf. Nei-
dle et al. (2000) for a similar view). Rathmann and Mathur (2008) propose that the
features are realized as follows.

(4) Morphosyntactic features (Rathmann/Mathur 2008)
    a. Person
       First: [+1] → on/near chest (marked)
       Non-first: [−1] → Ø
    b. Number
       i. Features
          Plural (collective): [+pl] → horizontal arc (marked)
          Singular: [−pl] → Ø
       ii. Reduplication: exhaustive (distributive), dual

The features for the category of person follow Meier (1990). First person is realized as
a location on or near the chest, while non-first person is realized as a zero form.
Following Rathmann and Mathur (2002), the zero morpheme for non-first person may
be matched with a deictic gesture within an interface between spatio-temporal concep-
tual structure and the articulatory-phonetic system in the architecture of grammar as
articulated by Jackendoff (2002). This interface is manifested through signing space or
gestural space (as it is called by Rathmann and Mathur). The realization of person
features takes place through a process called ‘alignment’ (Mathur 2000), which is an
instance of a readjustment process (Rathmann/Mathur 2002).
With respect to the category of number, two features are assumed. The plural fea-
ture, which is marked and encodes the collective reading, is realized as the multiple
form. The possibility that the other plural forms are reduced to reduplication of the
singular form is left for further investigation. The singular feature is unmarked and
realized as a zero form. We suggest that the realization of the multiple form occurs
through affixal insertion, as evidenced by the fact that the morphological realization
of number features is necessarily ordered after the realization of person features (Mat-
hur 2002). See chapter 6, Plurality, for further discussion of plurality as it is marked
on verbs.
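
The ordering of the two realization steps can be made explicit in a short sketch. The
Python fragment below is hypothetical (the function realize and its string outputs are
invented for illustration; only the feature values follow (4)): person is realized first,
through alignment, and the plural feature is realized afterwards, through affixal
insertion of the multiple form.

def realize(verb: str, person: str, number: str) -> str:
    """Person: [+1] -> location on/near the chest; [-1] -> zero form,
    matched with a deictic gesture in gestural space.
    Number: [+pl] -> horizontal arc (the multiple form); [-pl] -> zero form."""
    form = verb
    # Step 1: alignment realizes the person feature.
    if person == "+1":
        form += "@chest"                 # [-1] contributes no overt material
    # Step 2: affixal insertion of the number feature, necessarily
    # ordered after the realization of person.
    if number == "+pl":
        form += "+horizontal-arc"
    return form

print(realize("GIVE", "-1", "+pl"))   # 'GIVE+horizontal-arc' (multiple form)
print(realize("ASK", "+1", "-pl"))    # 'ASK@chest' (first person form)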

3.4. Interface with gesture

The three approaches are now compared on the basis of how they account for the
interaction of verb agreement with gestural space. As mentioned earlier, the linguistic
system cannot directly refer to areas within gestural space (Lillo-Martin/Klima 1990;
Liddell 1995). Otherwise, one runs into the trouble of listing an infinite number of
areas in gestural space in the lexicon, an issue which Liddell (2000) raises and which
Rathmann and Mathur (2002) describe in greater detail and call the listability issue. For
example, the claim that certain verbs ‘agree’ with areas in gestural space is problematic,
because that would require the impossible task of listing each area in gestural space
as a possible agreement morpheme in the lexicon (Liddell 2000).
The above analyses have approached the issue of listability in different ways. The
R-locus analysis avoids the listability issue by separating the R-locus from the R-index
(Lillo-Martin/Klima 1990; Meir 1998, 2002). The linguistic system refers to the R-index
and not to the R-locus. The connection between the R-index and the R-locus is medi-
ated by discourse: the R-index receives its value from discourse and links to a referent,
which is in turn associated with an R-locus. While the R-locus approach is clear about
how non-first person is realized, the analysis leaves open the point at which the phono-
logical content of the R-locus enters the linguistic system. On this question, Lillo-
Martin and Meier (2011, 122) clarify that phonological specification of the R-index is
not necessary; the specification is “determined on the surface by connection to paralin-
guistic gesture”.
The indicating analysis (Liddell 2003) takes the listability issue as a signal to avoid
an analysis in terms of agreement. Through mental space theory, Liddell maintains a
formal separation between linguistic elements and gestural elements but permits them
to interact through the blending of mental space entities. At the same time, he proposes
that one must memorize which verbs are mapped with mental entities for first person
forms, for non-first person forms, and for plural forms. One implication is that singular
forms are mapped with mental entities to the same extent as plural forms. On the
other hand, Cormier (2002) has found multiple forms to be less indexic than singular
forms, suggesting that plural forms are not always mapped with mental entities in the
way expected by Liddell.
Another approach is the featural analysis of Rathmann and Mathur (2008), which
agrees with the R-locus analysis in that the phenomenon constitutes agreement. The
featural analysis sees agreement as being mediated through the features of the noun
phrase instead of index-sharing or -copying. The set of features is finite ⫺ consisting
just of person and number ⫺ and each feature has a finite number of values as well.
Importantly, the non-first person feature is realized as a zero morpheme. Neidle et al.
(2000) also recognize the importance of features in the process of agreement. They
separate person from number and offer some contrasts under the feature of number.
Whereas they assume many person distinctions under the value of non-first person,
the featural analysis assumes only one, namely a zero morpheme. The use of a zero
morpheme is the featural analysis’s solution to the listability issue.
The different approaches are compatible in several ways. First, while the R-locus
analysis emphasizes the referential index and the featural analysis emphasizes features,
they can be made compatible by connecting the index directly to features as has been
done in spoken languages (cf. Cormier, Wechsler, and Meier 1998). Then the process
of agreement can refer to these indices and features in syntax and morphology. The
indicating analysis, on the other hand, rejects any process of agreement and places any
person and number distinctions in the lexicon. The lexicon is one place where the
indicating analysis and the featural analysis could be compatible: in the featural analy-
sis, features are realized as morphemes which are stored in a ‘vocabulary list’ which is
similar to the lexicon; if one assumes that verbs are combined with inflectional mor-
phemes in the lexicon before syntax (and before they are combined with a gesture),
the featural analysis and the indicating analysis would converge. However, the featural
analysis as it stands does not assume that the lexicon generates fully inflected verbs;
rather, verbs are inflected as part of syntax and spelled out through a post-lexical
morphological component.
Otherwise, all approaches agree that linguistic elements must be allowed to inter-
face with gestural elements. Whereas the R-locus analysis sees the interface as occur-
ring in discourse (the R-index is linked to a discourse referent which is associated with
an R-locus), and whereas the indicating analysis sees the interface as a blending of
mental space entities with linguistic elements, the featural analysis sees the interface as
linking spatio-temporal conceptual structure and articulatory-phonetic systems through
gestural space.
There are then different ways to understand how the process of verb agreement
interacts with gestural space. By investigating the different contexts in which verb
agreement interfaces with gestural space, and by identifying constraints on this inter-
face, we can begin to distinguish among predictions made by the various approaches
to the issue of listability.

3.5. Cross-linguistic uniformity and variation

As mentioned in section 2, there is a tri-partite classification of verbs depending on
whether they show agreement. Moreover, verbs that show agreement vary between
single and double agreement, mark agreement through a change in orientation and/or
direction of movement, and interact with gestural space. It turns out that all of these
properties, along with other properties, are
attested in many of the sign languages documented to date, as observed by Newport
and Supalla (2000) (see also the references provided in section 1).
To explain the uniformity of these properties across sign languages, the featural
analysis looks to the development of the agreement process. Rathmann and Mathur
(2008) suggest that the process of verb agreement emerges in many sign languages as
a linguistic innovation, meaning that the process takes on the ability to interface with
gestural space and then remains tied to this interface. Consequently, the process does
not become lexicalized, unlike the affixation of segmental morphemes which have po-
tential to diverge in form across languages.
While mature sign languages are relatively uniform with respect to the properties
discussed above, there is also some cross-linguistic variation. For instance, sign lan-
guages vary in whether they use an auxiliary-like element to mark agreement whenever
the main verb is incapable of doing so due to phonological or pragmatic reasons (Rath-
mann 2000; see chapter 10 for discussion of agreement auxiliaries). Then, some sign
languages, e.g. those in East Asia such as Japanese Sign Language (NS), use a kind of
buoy (in the sense of Liddell 2003) toward which the agreement form is directed. The
buoy is realized by the non-dominant hand: instead of being oriented/directed to an
area within gestural space, the dominant hand is oriented/directed to the buoy. The
buoy can represent an argument and takes a particular handshape (in NS, distinct
handshapes mark male and female referents, respectively). Finally, there are sign lan-
guages which have been claimed not to show the range of agreement patterns discussed
above, such as Al-Sayyid Bedouin Sign Language, a sign language used in a village in
the Negev desert in Israel (Aronoff et al. 2004), and Kata Kolok, a village sign language
of Bali (Marsaja 2008) (see chapter 24, Shared Sign Languages, for further discussion
of these sign languages).
The cross-linguistic variation across sign languages can again be accounted for by
the diachronic development of the agreement process. Meier (2002) and Rathmann
and Mathur (2008) discuss several studies (e.g. Engberg-Pedersen 1993; Supalla 1997;
Senghas/Coppola 2001) which suggest that verb agreement becomes more sophisti-
cated over time, in the sense that a language starts out by marking no or few person
and number features and then progresses to marking more person and number fea-
tures. That is, the grammaticalization of verb agreement seems to run in the direction
of increasing complexity. Pfau and Steinbach (2006) have likewise outlined a path of
grammaticalization for agreement, in which agreement marking and auxiliaries emerge
only at the end of the path. The cross-linguistic variation across sign languages with
respect to certain properties of verb agreement then can be explained by positing that
the sign languages are at different points along the path of grammaticalization.

4. Candidacy for agreement

Even when morphological realization of person and number features is predicted, it
does not always occur. This section seeks to explain why such realization does not
always occur. Rathmann and Mathur (2005) demonstrate that phonetic/phonological
constraints are not the only reason that morphological realization fails to occur.
Another reason is that it takes time for some verbs to become grammaticalized to
realize the features of agreement (for further discussion of grammaticalization, see
chapter 34).
If a feature is not overtly realized on the verb, a sign language may use one of
several strategies to encode the featural information. One way is to use overt pronouns
(see chapter 11 on pronouns). Another way is to use word order (Fischer 1975). Yet
another strategy is the insertion of an auxiliary-like element, a Person Agreement
Marker (Rathmann 2000) (see chapter 10 on agreement auxiliaries).
This section focuses on the issue of how to determine which verbs participate in the
process of agreement, since across sign languages only a small set of verbs participate
in this process. Several approaches to this issue are considered: Padden (1983), Janis
(1992, 1995), Meir (1998, 2002), Rathmann and Mathur (2002), and Quadros and
Quer (2008).

4.1. Padden (1983)

Padden (1983) takes a lexical approach to determining which verbs participate in
agreement: verbs are marked in the lexicon as agreeing, spatial, or plain, and only
those verbs that are marked as agreeing participate in the process of agreement. Corm-
ier, Wechsler, and Meier (1998) follow this approach within the HPSG framework:
each verb is sorted by its lexical entry as plain, spatial, or agreeing. If the verb is
agreeing, it is further sorted as single agreement or double agreement. Liddell (2003)
takes a similar approach in relegating class membership to the lexicon, as verbs are
marked with diacritic symbols indicating whether they require blending with a men-
tal entity.
Such a lexical approach faces several problems. First, some verbs change their status
over time. Some verbs start as plain and become agreeing over time (e.g. ASL test).
Other verbs start as spatial and become agreeing (e.g. move-a-piece-of-paper becomes
give in ASL). The lexical approach misses the generalization that the boundaries be-
tween the classes are not fixed, and that verbs can migrate from one class to another
in principled ways.
A second issue is that some verbs have dual status. That is, a verb can be agreeing
in one context (cf. teach friend) and plain in another context (cf. teach linguistics).
(All examples in this paragraph are from ASL.) Likewise, a verb can be agreeing in
some contexts (e.g. look-at friend) or spatial in other contexts (look-at (across) ban-
ner). There are also verbs which seem spatial sometimes (drive-to school) and plain
other times (drive-to everyday). Under a lexical approach, a verb would either receive
two specifications or a verb would have to be listed twice, each with a unique specifica-
tion. The lexical approach then raises the issue of learnability, placing the burden on
the child to learn both specifications (for the acquisition of agreement, see chapter 28).
The lexical approach also leaves open the issue of when to use one of these verbs in
a given context.
Since the lexical approach assumes that the class membership of each verb is unpre-
dictable, it allows the possibility that each sign language assigns different verbs to each
class. In fact, sign languages are largely similar with respect to the verbs that belong
in each class. Thus, the lexical approach does not capture the generalization that verbs
in each class share certain properties in common.

4.2. Janis (1992, 1995)

Recognizing the issues that a lexical approach to the class membership of verbs is faced
with, Janis (1992, 1995) has developed an account that seeks to relate the conditions
on verb agreement to the case properties of the controller using the agreement hier-
archy in (5).

(5) Agreement Hierarchy (Janis 1995)
    case: direct case < locative case
            |
    GR: subject < direct object < indirect object
    SR: agent, experiencer*, patient**, recipient
    * only with a verb that is not body-anchored
    ** only if animate

Janis (1992, 1995) distinguishes between agreement in ‘agreeing’ verbs and that in
spatial verbs. She links the distinction to the case of the nominal controlling agreement.
A nominal receives locative case “if it can be perceived either as a location or as being
at a location that affects how the action or state expressed by the verb is characterized”
(Janis 1995, 219). Otherwise, it receives direct case. If a nominal receives direct case,
it controls agreement only if it has a feature from the list of grammatical roles (GR)
as well as a feature from the list of semantic roles (SR). This requirement is indicated
by a line connecting direct case to the two lists. In contrast, a nominal with locative case
does not have to meet this requirement and can control agreement in any condition.
If a verb has only one agreement slot (i.e. if there is single agreement), and there
are two competing controllers, the higher ranked nominal controls agreement. For
example, in a sentence with a subject and a direct object, the direct object will control
agreement because it is ranked above the subject in the agreement hierarchy. To ac-
count for optional subject agreement (as in double agreement), Janis (1995, 219) stipu-
lates another condition as follows: “The lowest ranked controller cannot be the sole
controller of agreement.” The lowest ranked controller in the above hierarchy is the
subject. Thus, the effect of this condition is that whenever the subject controls an
agreement slot, another nominal (e.g. the direct object) must control another agree-
ment slot.
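
The interaction of the hierarchy with single and double agreement can be sketched as
follows (an illustrative Python rendering under the simplifying assumption that only
grammatical roles compete; the case and semantic-role conditions of (5) are omitted,
and the function controllers is invented here):

# Ranking from (5): the subject is the lowest-ranked controller.
RANK = {"subject": 0, "direct object": 1, "indirect object": 2}

def controllers(nominals: list, slots: int) -> list:
    """Fill the available agreement slots with the highest-ranked nominals;
    the lowest-ranked controller may never be the sole controller."""
    ranked = sorted(nominals, key=RANK.get, reverse=True)
    chosen = ranked[:slots]
    if chosen == ["subject"]:
        raise ValueError("the subject cannot be the sole controller of agreement")
    return chosen

# Single agreement: the direct object outranks the subject.
print(controllers(["subject", "direct object"], slots=1))   # ['direct object']
# Double agreement: the subject controls a slot only alongside the object.
print(controllers(["subject", "direct object"], slots=2))   # ['direct object', 'subject']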
The agreement hierarchy proposed by Janis (1992, 1995) comes closer to capturing
the conditions under which agreement occurs. At the same time, there are at least two
issues facing this approach. First, the hierarchy is complex and contains a number of
conditions that are difficult to motivate. For example, the hierarchy references not
only case but also grammatical roles and semantic roles. Then, the hierarchy includes
stipulations like “an experiencer controls agreement only with a verb that is not body-
anchored” or “a patient controls agreement only if it is animate”. A second issue is
that agreement has similar properties across many sign languages. To account for this
fact, the hierarchy can be claimed to be universal for the family of sign languages. Yet
it remains unexplained how the hierarchy has come into being for each sign language
and why it is universal for this particular family.

4.3. Meir (1998, 2002)

To simplify the constraints on the process of verb agreement in sign languages, Meir
(1998, 2002) developed another account that is inspired by the distinction between
regular and backwards verbs. As mentioned earlier in the chapter, the direction of
movement in many verbs displaying agreement is from the area associated with the
subject referent to the area associated with the object referent. However, there is a
small set of verbs that show the opposite pattern. That is, the direction of movement
is from the area associated with the object referent to the area associated with the
subject referent.
There have been several attempts to account for the distinction between regular
and backwards verbs, starting with Friedman (1976) and continuing with Padden
(1983), Shepard-Kegl (1985), Brentari (1988), Janis (1992, 1995), Meir (1998, 2002),
Mathur (2000), Rathmann and Mathur (2002), and Quadros and Quer (2008), among
others. In short, Friedman (1976) and Shepard-Kegl (1985) propose a semantic analysis
unifying regular and backwards verbs: both sets of verbs agree with the argument
bearing the semantic role of source and the argument bearing the role of goal.
Padden (1983) points out that such verbs do not always agree with the goal, as in
ASL friend 1invite3 party (‘My friend invited me to a party’), where party is the goal
yet the verb agrees with the implicit object me. This led Padden (1983) to argue for a
syntactic analysis on which the verb generally agrees with the subject and the object
and the backwards verbs are lexically marked for showing the agreement in a different
way than regular verbs.
Brentari (1988, 1998) and Janis (1992) hypothesize that a hybrid of semantic and
syntactic factors is necessary to explain the distinction. Brentari (1988), for example,
proposes a Direction of Transfer Rule which states that the path movement of the verb
is away from the locus associated with the referent of the subject (syntactic) if the
theme is transferred away from the subject (semantic), or else the movement is toward
the locus of the subject referent.
Meir (1998, 2002) expands on the hybrid view and proposes the two Principles of
Sign Language Agreement Morphology given in (6).

(6) Principles of Sign Language Agreement Morphology (Meir 2002, 425)
(i) The direction of the path movement of agreement verbs is determined by
the thematic roles of the arguments: it is from the R-locus of the source
argument to the R-locus of the goal argument.
(ii) The facing of the hand(s) is determined by the syntactic role of the argu-
ments: the facing is towards the object of the verb (indirect object in the
case of ditransitive agreement verbs).

According to Meir, the direction of movement realizes the morpheme DIR which also
appears with spatial verbs like ASL move, put, and drive-to and reflects the semantic
analysis. This element unifies regular and backwards verbs, since they both move from
the R-locus of the source to the R-locus of the goal (in most cases). The facing of the
hand(s) realizes a case-assigning morpheme and represents the syntactic analysis. The
case assigner also unifies regular and backwards verbs, since both face the R-locus of
the object. The difference between regular and backwards verbs lies in the alignment
between the thematic and syntactic roles: in regular verbs, the source and goal are
aligned with the subject and the object respectively, while it is the other way around
for backwards verbs.
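
The two principles can be rendered schematically. The Python sketch below is
hypothetical (the function agreement_form and its inputs are invented; only the role
alignments follow the text): the path reads off thematic roles, the facing reads off
syntactic roles, and regular and backwards verbs differ only in how the two align.

def agreement_form(verb: str, roles: dict, loci: dict) -> dict:
    """roles maps 'source'/'goal' (thematic) and 'subject'/'object'
    (syntactic) to arguments; loci maps arguments to R-loci."""
    return {
        "verb": verb,
        # Principle (i): path movement from source R-locus to goal R-locus.
        "path": (loci[roles["source"]], loci[roles["goal"]]),
        # Principle (ii): facing toward the R-locus of the (indirect) object.
        "facing": loci[roles["object"]],
    }

loci = {"sue": "a", "bob": "b"}
# Regular verb (give): subject = source, object = goal -> path from a to b.
print(agreement_form("GIVE",
      {"source": "sue", "goal": "bob", "subject": "sue", "object": "bob"}, loci))
# Backwards verb (take): subject = goal, object = source -> path from b to a,
# but the facing still targets the object's locus b.
print(agreement_form("TAKE",
      {"source": "bob", "goal": "sue", "subject": "sue", "object": "bob"}, loci))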
The analysis using the DIR morpheme and the case-assigning morpheme provides
a straightforward way to categorize verbs with respect to whether they display agree-
ment. Plain verbs are those that do not have DIR or the case-assigning morpheme,
while spatial verbs have only DIR and agreeing verbs have both. Since the case as-
signer is related to the notion of affectedness in the sense of Jackendoff (1987, 1990),
it is predicted that only those verbs which select for an affected possessor show agree-
ment. The analysis accounts for the uniformity of the properties of verb agreement
across sign languages by attributing iconic roots to the morpheme DIR, which uses
gestural space to show spatial relations, whether concrete or abstract. Presumably, the
case-assigning morpheme has iconic roots such that the patterns of agreeing verbs
(along with spatial verbs) are also universal.

4.4. Rathmann and Mathur (2002)

To predict which verbs participate in agreement, Rathmann and Mathur (2002) pro-
pose an animacy analysis, inspired by Janis (1992, 1995), that imposes a condition on
the process of verb agreement: only those verbs which select for two animate argu-
ments may participate in the process. The featural analysis refers to Rathmann and
Mathur (2008), which focuses on the features that are involved in agreement and the
emergence of agreement as a process, while the animacy analysis refers to Rathmann
and Mathur (2002), which seeks to characterize the set of verbs that participate in
agreement and the modality differences between sign and spoken languages with re-
spect to agreement. To support the animacy analysis, they offer a number of diagnostic
tests independent of argument structure to determine whether a verb participates in
the process of agreement: the ability to display the first person object form (reversibil-
ity), the ability to display the multiple form, and the ability to co-occur with pam (Per-
son Agreement Marker, an auxiliary-like element) in sign languages that use such
an element.
The animacy analysis predicts that regular verbs like ASL ask and help and back-
wards verbs like take and copy participate in agreement. It also predicts that verbs
like ASL buy or think which select for only one animate argument do not participate
in agreement. It also correctly predicts that a verb like ASL teach or look-at can
participate in agreement only if the two arguments are animate. This suggests that
agreement is not tied to specific classes of lexical items but relates to their use in
particular sentences. Thus it is possible to use the multiple form with these verbs only
in a sentence like I taught many students or I looked at many students but not in a
sentence like I taught many subjects or I looked across a banner. While the latter
sentences look similar to the agreement forms in that the orientation and direction of
movement in the verbs reflect areas associated with a referent (as in I looked at a
book), they are claimed to involve a different process than agreement, since they do
not take the multiple form or co-occur with pam.
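
Because candidacy is computed per sentence rather than per lexical entry, the condition
reduces to a simple check. The Python fragment below is a hypothetical sketch (the
function participates_in_agreement is invented for illustration); the teach examples
are those discussed above.

def participates_in_agreement(verb: str, subject_animate: bool,
                              object_animate: bool) -> bool:
    """Animacy condition: only a verb whose two arguments are animate
    in the sentence at hand participates in the agreement process."""
    return subject_animate and object_animate

# 'I taught many students': two animate arguments, so the multiple form is possible.
assert participates_in_agreement("TEACH", True, True)
# 'I taught many subjects': inanimate object, so no agreement (and no multiple form).
assert not participates_in_agreement("TEACH", True, False)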
To account for backwards verbs, the animacy analysis assumes that the backwards
movement in those verbs is lexically fixed, which may be motivated by an account like
Meir (1998) or Taub (2001). When the process of agreement applies to this lexically
fixed movement, the resulting form yields the correct direction of movement and orien-
tation. Further factors such as discourse considerations, phonetic and phonological con-
straints, and historical circumstances determine whether the agreement form in both
regular and backwards verbs is ultimately realized.
Whereas the thematic analysis of Meir (1998, 2002) takes the DIR morpheme and
the case-assigning morpheme to participate in the process of agreement, the animacy
analysis assumes that verbs themselves participate in the process of agreement and
that they do not require a complex morphological structure to do so.

4.5. Quadros and Quer (2008)

Quadros and Quer (2008) revisit the conditions on verb agreement by considering the
properties of backwards verbs and auxiliaries in Brazilian Sign Language (LSB) and
Catalan Sign Language (LSC). They argue against a thematic account of agreement in
light of examples from LSB and LSC that share the same lexical conceptual structure
but have lexicalized movements that run in the opposite direction: for instance, ask is
regular in LSB but backwards in LSC, and ask-for is backwards in LSB but regular
in LSC. In addition, they note that the same lexical conceptual structure in the same
language can show both agreeing and non-agreeing forms, e.g. borrow in LSC. More-
over, they claim that Rathmann and Mathur’s (2008) diagnostics for distinguishing
between agreeing and spatial verbs do not work in LSB and LSC, leading Quadros
and Quer to question whether it is necessary to distinguish between agreeing and
spatial verbs. This question will need to be addressed by carefully re-examining the
diagnostic criteria for agreeing and spatial verbs across sign languages.
Quadros and Quer (2008) offer an alternative view in which two classes of verbs
can be distinguished according to their syntactic properties: agreeing and non-agreeing.
Their class of agreeing verbs includes what have been called agreeing and spatial verbs
in Padden’s (1983) typology. Semantic factors distinguish between agreeing and spatial
verbs; thus, agreeing verbs (in the sense of Padden 1983) agree with R-loci which
manifest person and number features, while spatial verbs agree with spatial features.
Otherwise, the agreement form in both types of verbs is realized as a path. To support
this view, they claim that it is possible for a verb to agree with both a nominal and a
locative. By unifying the process of agreement across agreeing and spatial verbs, they
remove the need for a special condition on the process of agreement.
Quadros and Quer provide two pieces of evidence that agreement with R-loci con-
stitutes syntactic agreement. First, along with Rathmann and Mathur (2008), they ob-
serve that when an auxiliary appears with a backwards verb, the direction of movement
in the auxiliary is from the area associated with the subject referent to the area associ-
ated with the object referent, even when the direction of movement in the backwards
verb is the opposite. Second, they note with Rathmann and Mathur (2008) that auxilia-
ries appear only with those backwards verbs that take animate objects and not with
backwards verbs that take inanimate objects.
Quadros and Quer (2008) take a view on backwards verbs that is different from
that of Meir (1998) and Rathmann and Mathur (2008): they treat backwards verbs as
handling verbs with a path that agrees with locations as opposed to syntactic argu-
ments; that is, they treat them as spatial verbs. Otherwise, backwards verbs are still
grouped together with regular verbs, because they adopt a broader view of verb agree-
ment in sign languages: it is not just restricted to person and number features but
also occurs with spatial features. While this broader view can explain cross-linguistic
similarities with respect to properties of verb agreement, it has yet to overcome the
issue of listability. It is possible to resolve the listability issue for person and number
features by positing a minimum of two contrastive values. It is, however, less clear
whether the same is possible for spatial features.

4.6. Discussion

Several approaches regarding the conditions on agreement have been presented. One
approach, exemplified by Padden (1983) and Liddell (2003), lets the lexicon predict
when a verb participates in agreement. Janis (1992) argues that an agreement hierarchy
based on case and other grammatical properties determines which verbs display agree-
ment. Meir (1998) seeks to simplify this mechanism through a thematic approach: verbs
that contain a DIR morpheme and a case-assigning morpheme qualify for agreement.
Rathmann and Mathur (2008) suggest doing away with the case-assigning morpheme
and restricting the process of agreement to those verbs that select for two animate
arguments. Quadros and Quer (2008), on the other hand, group backwards verbs with
spatial verbs and agreeing verbs with spatial verbs, thus removing the need for a special
condition. Another possibility, which has recently been proposed by Steinbach (2011),
is that verb agreement should be considered as part of a unified agreement process
along with role shift and classifier agreement.
The issue of whether verb agreement in sign languages needs a particular condition
awaits further empirical investigation of the argument structure of verbs that undergo
agreement and those that do not across a number of sign languages. If it turns out that
there is a condition (however it is formulated) on the process of agreement, as argued
by Janis (1992), Meir (1998), and Rathmann and Mathur (2008), this would be one
instance in which verb agreement in sign languages differs from that in spoken lan-
guages.

5. Conclusion: agreement in sign and spoken languages


We now go back to the questions raised at the beginning and consider how sign lan-
guages compare with one another and with spoken languages with respect to the reali-
zation of person and number features or the lack thereof. The preceding sections sug-
gest the following picture.
With regard to similarities across signed and spoken languages, the requirement
that the set of person and number features of the arguments be realized in some way
appears to be universal. The realization of person and number features can be ex-
plained through an agreement process that is common to both modalities. The agree-
ment process, as well as the features underlying the process, may be made available
by universal principles of grammar, so that it appears in both signed and spoken lan-
guages.
At the same time, there are important differences between signed and spoken lan-
guages with regard to agreement. First, the agreement process in sign languages is
restricted to a smaller set of verbs, whereas agreement in spoken languages, if it is
marked at all, is usually marked on the whole set of verbs (setting aside exceptions).
This cross-modal difference could be resolved if the agreement process in sign lan-
guages is understood to be one of several distinct agreement processes available to
sign languages, and that the choice of a particular agreement process depends on the
argument structure of the verb. If that is the case, and if one takes into account that
there are likewise restrictions on the agreement process in many spoken languages
(Corbett 2006), sign languages are no different to spoken languages in this regard.
Another cross-modal difference is that the properties of agreement are more uni-
form across sign languages than across spoken languages. This difference can be ex-
plained by yet another cross-modal difference: specific agreement forms in sign lan-
guages, in particular the non-first person singular form, require interaction with
gestural space, whereas this interaction is optional for spoken languages. Since gestural
space is universally available to all languages, and since it is involved in the realization
of certain person and number features in sign languages, these considerations would
explain why verb agreement looks remarkably similar across mature sign languages.
The cross-modal similarities can then be traced to universal principles of grammar,
while the cross-modal differences are rooted in the visual-manual modality of sign lan-
guages.

6. Literature
Aarons, Debra/Bahan, Ben/Kegl, Judy/Neidle, Carol
1992 Clausal Structure and a Tier for Grammatical Marking in American Sign Language. In:
Nordic Journal of Linguistics 15, 103⫺142.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81, 301⫺344.
Aronoff, Mark/Padden, Carol/Meir, Irit/Sandler, Wendy
2004 Morphological Universals and the Sign Language Type. In: Booij, Geert/Marle, Jaap
van (eds.), Yearbook of Morphology 2004. Dordrecht: Kluwer Academic Publishers,
19⫺40.
Bahan, Ben
1996 Non-manual Realization of Agreement in American Sign Language. PhD Dissertation,
Boston University.
Bos, Heleen
1994 An Auxiliary Verb in Sign Language of the Netherlands. In: Ahlgren, Inger/Bergman,
Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure: Papers from the
Fifth International Symposium on Sign Language Research, Vol. 1. Durham: Interna-
tional Sign Linguistic Association, 37⫺53.
Brentari, Diane
1988 Backwards Verbs in ASL: Agreement Re-opened. In: MacLeod, Lynn (ed.), Parasession
on Agreement in Grammatical Theory (CLS 24, Vol. 2). Chicago: Chicago Linguistic
Society, 16⫺27.
Cormier, Kearsy
2002 Grammaticization of Indexic Signs: How American Sign Language Expresses Numeros-
ity. PhD Dissertation, The University of Texas at Austin.
Cormier, Kearsy/Wechsler, Stephen/Meier, Richard
1998 Locus Agreement in American Sign Language. In: Webelhuth, Gert/Koenig, Jean-
Pierre/Kathol, Andreas (eds.), Lexical and Constructional Aspects of Linguistic Expla-
nation. Stanford, CA: CSLI, 215⫺229.
Engberg-Pedersen, Elisabeth
1993 Space in Danish Sign Language: The Semantics and Morphosyntax of the Use of Space
in a Visual Language. Hamburg: Signum.
Fauconnier, Gilles
1985 Mental Spaces: Aspects of Meaning Construction in Natural Language. Cambridge, MA:
MIT Press.
Fauconnier, Gilles
1997 Mappings in Thought and Language. Cambridge: Cambridge University Press.
Fischer, Susan
1975 Influences on Word Order Change in American Sign Language. In: Li, Charles (ed.),
Word Order and Word Order Change. Austin, TX: The University of Texas Press, 1⫺25.
Fischer, Susan
1996 The Role of Agreement and Auxiliaries in Sign Languages. In: Lingua 98, 103⫺120.
Fischer, Susan/Gough, Bonnie
1978 Verbs in American Sign Language. In: Sign Language Studies 18, 17⫺48.
Friedman, Lynn
1976 The Manifestation of Subject, Object, and Topic in the American Sign Language. In:
Li, Charles (ed.), Subject and Topic. New York, NY: Academic Press, 125⫺148.
Hahm, Hyun-Jong
2006 Person and Number Agreement in American Sign Language. In: Müller, Stefan (ed.),
Proceedings of the 13th International Conference on Head-Driven Phrase Structure
Grammar. Stanford, CA: CSLI, 195⫺211.
Hong, Sung-Eun
2008 Eine Empirische Untersuchung zu Kongruenzverben in der Koreanischen Gebärdenspra-
che. Hamburg: Signum.
Jackendoff, Ray
2002 Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford: Oxford Uni-
versity Press.
Janis, Wynne
1992 Morphosyntax of ASL Verb Phrase. PhD Dissertation, State University of New York,
Buffalo.
Janis, Wynne
1995 A Cross-linguistic Perspective on ASL Verb Agreement. In: Emmorey, Karen/Reilly,
Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 195⫺223.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge:
Cambridge University Press.
Klima, Edward/Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Lacy, Richard
1974 Putting Some of the Syntax Back Into Semantics. Paper Presented at the Annual Meet-
ing of the Linguistic Society of America, New York.
Liddell, Scott
1990 Four Functions of a Locus: Re-examining the Structure of Space in ASL. In: Lucas,
Ceil (ed.), Sign Language Research: Theoretical Issues. Washington, DC: Gallaudet Uni-
versity Press, 176⫺198.
Liddell, Scott
1995 Real, Surrogate, and Token Space: Grammatical Consequences in ASL. In: Emmorey,
Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erl-
baum, 19⫺42.
Liddell, Scott
2000 Indicating Verbs and Pronouns: Pointing Away from Agreement. In: Emmorey, Karen/
Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula
Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 303⫺320.
Liddell, Scott
2003 Grammar, Gesture and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Liddell, Scott/Metzger, Melanie
1998 Gesture in Sign Language Discourse. In: Journal of Pragmatics 30, 657⫺697.
Lillo-Martin, Diane/Klima, Edward
1990 Pointing out Differences: ASL Pronouns in Syntactic Theory. In: Fischer, Susan/Siple,
Patricia (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chi-
cago: University of Chicago Press, 191⫺210.
Lillo-Martin, Diane/Meier, Richard
2011 On the Linguistic Status of ‘Agreement’ in Sign Languages. In: Theoretical Linguistics
37, 95⫺141.
Marsaja, I Gede
2008 Desa Kolok ⫺ A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen:
Ishara Press.
Massone, Maria I./Curiel, Monica
2004 Sign Order in Argentine Sign Language. In: Sign Language Studies 5, 63⫺93.
Mathur, Gaurav
2000 Verb Agreement as Alignment in Signed Languages. PhD Dissertation, Massachusetts
Institute of Technology.
Mathur, Gaurav
2002 Number and Agreement in Signed Languages. Paper Presented at the Linguistics Asso-
ciation of Great Britain Spring Meeting, Liverpool.
Mathur, Gaurav/Rathmann, Christian
2006 Variability in Verb Agreement Forms Across Four Sign Languages. In: Goldstein,
Louis/Best, Catherine/Whalen, Douglas (eds.), Laboratory Phonology VIII: Varieties of
Phonological Competence. Berlin: Mouton de Gruyter, 285⫺314.
Mathur, Gaurav/Rathmann, Christian
2010 Verb Agreement in Sign Language Morphology. In: Brentari, Diane (ed.), Sign Lan-
guages: A Cambridge Language Survey. Cambridge: Cambridge University Press,
173⫺196.
McBurney, Susan
2002 Pronominal Reference in Signed and Spoken Language: Are Grammatical Categories
Modality-dependent? In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.),
Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge Uni-
versity Press, 329⫺369.
Meier, Richard
1982 Icons, Analogues, and Morphemes: The Acquisition of Verb Agreement in American
Sign Language. PhD Dissertation, University of California, San Diego.
Meier, Richard
1990 Person Deixis in American Sign Language. In: Fischer, Susan/Siple, Patricia (eds.),
Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: University
of Chicago Press, 175⫺190.
Meir, Irit
1998 Thematic Structure and Verb Agreement in Israeli Sign Language. PhD Dissertation,
The Hebrew University of Jerusalem.
Meir, Irit
2002 A Cross-modality Perspective on Verb Agreement. In: Natural Language and Linguistic
Theory 20, 413⫺450.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Newport, Elissa/Supalla, Ted
2000 Sign Language Research at the Millennium. In: Emmorey, Karen/Lane, Harlan (eds.),
The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward
Klima. Mahwah, NJ: Lawrence Erlbaum, 103⫺114.
Padden, Carol
1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation,
University of California, San Diego [Published 1988 by Garland Outstanding Disserta-
tions in Linguistics, New York].
Pfau, Roland/Steinbach, Markus
2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 5⫺98.
Pollard, Carl/Sag, Ivan
1994 Head-Driven Phrase Structure Grammar. Chicago: University of Chicago Press.
Quadros, Ronice de
1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifíca Universidade
Católica do Rio Grande do Sul, Porto Alegre.
Quadros, Ronice de/Quer, Josep
2008 Back to Back(wards) and Moving on: On Agreement, Auxiliaries and Verb Classes. In:
Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past,
Present, and Future. Forty-five Papers and Three Posters from the 9th Theoretical Issues
in Sign Language Research Conference, Florianopolis, Brazil, December 2006. Petrópo-
lis: Editora Arara Azul.
[Available at: www.editora-arara-azul.com.br/EstudosSurdos.php].
Quer, Josep/Frigola, Santiago
2006 Cross-linguistic Research and Particular Grammars: A Case Study on Auxiliary Predi-
cates in Catalan Sign Language (LSC). Paper Presented at Workshop on Cross-linguis-
tic Sign Language Research, Max Planck Institute for Psycholinguistics, Nijmegen.
Rathmann, Christian
2000 The Optionality of Agreement Phrase: Evidence from Signed Languages. MA Thesis,
The University of Texas at Austin.
Rathmann, Christian/Mathur, Gaurav
2002 Is Verb Agreement the Same Cross-modally? In: Meier, Richard/Cormier, Kearsy/
Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages.
Cambridge: Cambridge University Press, 370⫺404.
Rathmann, Christian/Mathur, Gaurav
2005 Unexpressed Features of Verb Agreement in Signed Languages. In: Booij, Geert/Gue-
vara, Emiliano/Ralli, Angela/Sgroi, Salvatore/Scalise, Sergio (eds.), Morphology and
Linguistic Typology. On-line Proceedings of the 4th Mediterranean Morphology Meeting
(MMM4). Università degli Studi di Bologna, 235⫺250.
Rathmann, Christian/Mathur, Gaurav
2008 Verb Agreement as a Linguistic Innovation in Signed Languages. In: Quer, Josep (ed.),
Signs of the Time: Selected Papers from TISLR 2004. Hamburg: Signum, 191⫺216.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Sapountzaki, Galini
2005 Free Functional Elements of Tense, Aspect, Modality and Agreement as Possible Auxilia-
ries in Greek Sign Language. PhD Dissertation, University of Bristol.
Senghas, Ann/Coppola, Marie
2001 Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial
Grammar. In: Psychological Science 12, 323⫺328.
Shepard-Kegl, Judy
1985 Locative Relations in American Sign Language: Word Formation, Syntax, and Discourse.
PhD Dissertation, Massachusetts Institute of Technology.
Smith, Wayne
1990 Evidence for Auxiliaries in Taiwan Sign Language. In: Fischer, Susan/Siple, Patricia
(eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: Uni-
versity of Chicago Press, 211⫺228.
Steinbach, Markus
2011 Dimensions of Sign Language Agreement: From Phonology to Semantics. Invited Lec-
ture at Formal and Experimental Advances in Sign Language Theory (FEAST), Venice.
Supalla, Ted
1997 An Implicational Hierarchy in Verb Agreement in American Sign Language. Manu-
script, University of Rochester.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Taub, Sarah
2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cam-
bridge: Cambridge University Press.
Thompson, Robin/Emmorey, Karen/Kluender, Robert
2006 The Relationship Between Eye Gaze and Agreement in American Sign Language: An
Eye-tracking Study. In: Natural Language and Linguistic Theory 24, 571⫺604.
Zeshan, Ulrike
2000 Sign Language in Indopakistan: A Description of a Signed Language. Amsterdam: Ben-
jamins.
Zwitserlood, Inge/Gijn, Ingeborg van
2006 Agreement Phenomena in Sign Language of the Netherlands. In: Ackema, Peter/
Brandt, Patrick/Schoorlemmer, Maaike/Weerman, Fred (eds.), Arguments and Agree-
ment. Oxford: Oxford University Press, 195⫺229.

Gaurav Mathur, Washington, DC (USA)


Christian Rathmann, Hamburg (Germany)
8. Classifiers
1. Introduction
2. Classifiers and classifier categories
3. Classifier verbs
4. Classifiers in signs other than classifier verbs
5. The acquisition of classifiers in sign languages
6. Classifiers in spoken and sign languages: a comparison
7. Conclusion
8. Literature

Abstract

Classifiers (currently also called ‘depicting handshapes’) are observed in almost all sign
languages studied to date and form a well-researched topic in sign language linguistics.
Yet, these elements are still subject to much debate with respect to a variety of matters.
Several different categories of classifiers have been posited on the basis of their semantics
and the linguistic context in which they occur. The function(s) of classifiers are not fully
clear yet. Similarly, there are differing opinions regarding their structure and the structure
of the signs in which they appear. Partly as a result of comparison to classifiers in spoken
languages, the term ‘classifier’ itself is under debate. In contrast to these disagreements,
most studies on the acquisition of classifier constructions agree that these are
difficult for Deaf children to master. This article presents and discusses all these issues
from the viewpoint that classifiers are linguistic elements.

1. Introduction

This chapter is about classifiers in sign languages and the structures in which they
occur. Classifiers are reported to occur in almost all sign languages researched to date
(a notable exception is Adamorobe Sign Language (AdaSL) as reported by Nyst
(2007)). Classifiers are generally considered to be morphemes with a non-specific
meaning, which are expressed by particular configurations of the manual articulator
(or: hands) and which represent entities by denoting salient characteristics. Some ex-
amples of classifier constructions from different sign languages are shown in (1): Jorda-
nian Sign Language (LiU; Hendriks 2008, 142); Turkish Sign Language (TİD); Hong-
Kong Sign Language (HKSL; Tang 2003, 153); Sign Language of the Netherlands
(NGT); Kata Kolok (KK); German Sign Language (DGS); American Sign Language
(ASL; Brentari 1999, 21); and French Sign Language (LSF; Cuxac/Sallandre 2007, 18).
Although little cross-linguistic work has been undertaken so far, the descriptions
and examples of classifiers in various sign languages appear quite similar (except for
the classifier inventories, although there, too, many similarities exist). Therefore, in this
chapter, the phenomenon of classifiers will be described as comparable in all sign
languages for which they have been reported. The future will show to what extent
cross-linguistic differences exist.
Initially, classifier structures were considered mime-like and pantomimic, and their
first descriptions were as visual imageries (e.g., DeMatteo 1977; Mandel 1977). Soon after
that, however, these structures came to be analyzed as linguistic, morphologically
complex signs. Notable is Supalla’s (1982, 1986) seminal work on classifiers in ASL. Nu-
merous studies of classifiers in various sign languages have been undertaken since.
Currently, classifiers are generally considered to be meaningful elements in morpho-
logically complex structures, even though the complexity of these structures is not yet
clear, and there is much controversy about the way in which they should be analyzed.
The controversy is partly due to the fact that different studies use varying and some-
times unclear assumptions about the kinds of linguistic elements that classifiers in sign
languages are, as well as about their function, and the types of constructions in which
they occur. Space limitations do not allow extensive discussion of the various views.
The main points in the literature will be explained and, where possible, related to the
different views in order to obtain as much clarity as possible.
This chapter is structured as follows. The next section focuses on categories of classi-
fiers in sign languages. This is followed by a section on classifier verbs. Section 4 dis-
cusses signs in which the classifiers can be recognized but differ in various respects
from the classifier verbs that are the topic of section 3. Two sections follow with an
overview of acquisition of classifiers in sign languages (section 5) and a comparison of
classifiers in spoken and sign languages (section 6), respectively. Finally, section 7 con-
tains some further considerations and conclusions.

2. Classifiers and classifier categories

The start of the study of classifiers in sign languages coincided with (renewed) interest
in classifiers in spoken languages. Research on the latter traditionally focused on the
semantics of classifiers, i.e. on the assignment of nouns to particular
classes, in order to understand the ways in which humans categorize the world around
them. On the basis of these assignments, various categories were suggested according
to which nouns are classified in different languages. In addition, different types of
classifier languages (or systems) were suggested. An overview article of the characteris-
tics, typology, and classification in 50 different classifier languages (Allan 1977) has
had a large influence on research on sign language classifiers. First (as will be further
exemplified in section 6), sign languages seemed to fall into one of the four types of
classifier languages suggested by Allan, viz. predicate classifier languages, where classi-
fiers occur with verbs (in contrast to appearing with numerals, nouns, or in locative
constructions as in Allan’s other three types of classifier languages). Second, in the
spoken language literature, several semantic dimensions were distinguished according
to which nouns were classified, such as material (including animacy), shape, consist-
ency, size, location, arrangement, and quanta (see Allan 1977; but also Denny 1979;
Denny/Creider 1986; Adams 1986). Similarly, much of the initial work on sign language
classifiers has focused on semantic classification.

2.1. Classifier categories

Supalla (1982, 1986) considers ASL a predicate classifier language in Allan’s categori-
zation and categorizes the classifiers of ASL into five main types, some of which are
divided into subtypes:

1. Semantic classifiers, which represent nouns by some semantic characteristic of their
referents (e.g., belonging to the class of humans, animals, or vehicles);
2. Size and Shape Specifiers (SASSes), which denote nouns according to the visual-
geometric features of their referents. SASSes come in two subtypes:
⫺ static SASSes, which consist of a handshape (or combination of two hands) that
indicates the size/shape of an entity;
⫺ tracing SASSes, which have a movement of the hand(s) that outlines an entity’s
size/shape, and in which the shape of the manual articulator denotes the dimen-
sionality of that entity;
3. Instrumental classifiers, which also come in two types:
⫺ instrumental hand classifiers, in which the hand represents a hand that holds
and/or manipulates another entity; and
⫺ tool classifiers, in which the hand represents a tool that is being manipulated;
4. Bodypart classifiers: parts of the body represent themselves (e.g., hands, eyes) or
limbs (e.g., hands, feet); and
5. A Body classifier: the body of the signer represents an animate entity.

This categorization is not only based on semantics (as in spoken language classifica-
tions), but also on different characteristics of the classifiers within each type (in con-
trast to studies on spoken language classifiers). Basically, SASSes classify referents
with respect to their shape, Instrumental classifiers on the basis of their function as
instruments/tools, and the Body classifier represents animate entities. In addition,
SASSes and Instrumental classifiers are claimed to be morphologically complex, in
contrast to Semantic classifiers, and Body classifiers are a special category because
they cannot be combined with motion or location verbs, in contrast to classifiers of
other types (e.g., Supalla 1982, 1986; Newport 1982; Schick 1990a).
Since then, similar as well as new categorizations have been suggested for ASL and
a number of other sign languages (see, amongst others, McDonald (1982), Liddell/
Johnson (1987), and Benedicto/Brentari (2004) for ASL; Johnston (1989) and Schembri
(2001, 2003) for Australian Sign Language (Auslan); Corazza (1990) for Italian Sign
Language (LIS); Brennan (1990a,b) for British Sign Language (BSL); Hilzensauer/
Skant (2001) for Austrian Sign Language (ÖGS); and Fischer (2000) for Japanese Sign
Language (NS)), and the categories have received various different terms. There is
some overlap between them, which shows that the categorizations are problematic.
This is important because the suggested categories have a large impact on the interpre-
tation of classifiers and the structures in which they occur.
Currently two main categories of classifiers are distinguished, called ‘Whole Entity
classifiers’ and ‘Handling classifiers’. The first category contains classifiers that directly
represent referents, by denoting particular semantic and/or shape features. By and
large, this category comprises Supalla’s Semantic classifiers, static SASSes, some Body-
part classifiers, and Tool classifiers. In the category of Handling classifiers we find
classifiers that represent entities that are being held and/or moved; often (but not
exclusively) by a human agent. This category contains classifiers that were previously
categorized as Instrumental classifiers and some Bodypart classifiers.
Examples of Whole Entity classifiers (WECL) and Handling classifiers (HCL) from
TİD and DGS, are shown in (2) and (3), where the manual articulator represents a
flattish entity (a book) and a cylindrical entity (a mug), respectively. In (2a) and (3a),
Whole Entity classifiers are used for these entities ⫺ the hands directly represent the
entities; Handling classifiers are used for the same entities in (2b) and (3b), the hands
indicating that the entities are held in the hand.
The Body classifier category proposed by Supalla (1982, 1986), which consists of only
one element (the only classifier that is not represented phonologically by a configuration
of the manual articulator but by the signer’s body), is currently no longer considered a
classifier by most researchers but a means for referential shift (e.g., Engberg-Pedersen
1995; Morgan/Woll 2003; see also chapter 17 on utterance reports and constructed action).
Although some researchers still count the category of tracing SASSes (viz. the sub-
set of elements that consist of a tracing movement and a manual articulator, see (4))
among the classifiers, these differ in various aspects from all other classifiers. In con-
trast to other classifiers, tracing SASSes (i) are not expressed by a mere hand configu-
ration but also need the tracing movement to indicate the shape of the referent; (ii)
they cannot be combined with verbs of motion; (iii) they denote specific shape informa-
tion (in fact all kinds of shapes can be outlined, from square to star-shaped to Italy-
shaped); and, most importantly, (iv) they can be used in a variety of syntactic contexts:
they appear as nouns, adjectives, and (ad)verbs, and do not seem to be used anaphori-
cally (as will be exemplified in the next section). For these reasons, tracing SASSes are
better placed outside the domain of classifiers.
Thus, ASL and most other sign languages researched to date can be argued to have
two main categories of classifiers: Whole Entity classifiers and Handling classifiers.
This categorization is not exactly based on the semantics of the units, but rather on
their function in the grammar, which will be discussed in more detail in section 4.
Evidence from syntax and discourse will be given to sustain the necessity to distinguish
these two types.
2.2. Classifiers: forms, denotation, and variation

Entities are categorized according to semantic dimensions, as in spoken languages.
Material (including animacy) and shape appear to be salient in all the sign languages
that have classifiers. As for Whole Entity classifiers, most sign languages appear to
have separate classifiers for animate entities, although the forms of the classifiers may
differ. There is a @ -form (e.g., in ASL, NGT, DGS, DSL, and Auslan), and a %-form
has been reported in e.g., HKSL, Taiwan Sign Language, and Thai Sign Language.
Some languages also have a 0 -form for animate entities (e.g. DSL). Many sign lan-
guages have a classifier for legged entities (including humans and animals), represented
by a -form (a variant is the form with bent fingers , mostly used for animals). Some
languages have a special classifier for vehicles, e.g. ASL ( ), LiU ( ). However, some
of the classifiers mentioned here may not be restricted to a particular class, for example
vehicles, but may also include other types of entities, e.g. the vehicle classifier reported
in some languages (*) may also include wide, flattish entities. Many sign languages
have a special classifier for airplanes ( or ) and trees (< or plus lower arm).
Most sign languages have rather extensive sets of classifiers denoting shapes: long and
thin, solid, round (of various sizes), flat, cylindrical, bulky, tiny ⫺ and some even have
a classifier for square entities (e.g., TİD; see (1b)). All these shape-denoting classifiers
are formed by varied numbers of extended, spread and/or bent fingers. Some research-
ers (such as Supalla 1982, 1986; Newport 1982; Schick 1990a,b) assume that these classi-
fiers are themselves morphologically complex; each finger forms a separate morpheme.
Some sign languages are reported to have default or general classifiers (e.g., a form
where the tip of the index finger is important) that do not denote any characteristic of
an entity, or a flat form (*) (e.g., NGT, ASL, and HKSL). Examples of classifiers
from various sign languages were shown in (1)⫺(3). Few classifier inventories are avail-
able; many available classifier studies focus on explanations of the denotations and
properties of the classifiers and use a subset of the classifier forms to illustrate these.
It is therefore hardly possible to indicate the variety and the extent of the sets of
classifiers in various sign languages.
What becomes clear from the literature is that signers in most sign languages can
use more than one classifier to represent a particular entity, in order to focus on a
particular (different) characteristic of that entity (or to defocus it). For instance, a
person can be represented with a classifier for animate entities, but a legs classifier will
be used when the focus is on a person standing, or on the manner of locomotion
(walking, sliding). A plate or a CD can be represented by a flat form (*), but also by
a round form (J). A car can be represented by a specific vehicle classifier in some
sign languages, but signers may also choose to use a flat form (*), for example when
indicating that there is something on top of the car (by placing another classifier on
top of the classifier representing the car).
The sets of Handling classifiers in the various languages seem so far to be quite
similar, although full inventories of these classifiers are not often provided. The form
of these classifiers indicates the shape of an entity by the way in which it is held, e.g.
thin or tiny entities are often represented by a M -form, long and thin entities as well
as entities that are held by a kind of handle use a -form. Cylindrical entities are held
with a :-form, flattish entities are held with a -form, thicker ones with a -form,
and bulkier entities with one or two -forms. A signer can choose to use a special
form when the entity is held in a different way than normal, e.g. because handling
needs (more) force or the signer indicates that an entity requires controlled or delicate
handling, as when it is fragile or filthy. Although the manual articulator usually repre-
sents the hand of a human agent holding an entity, in some cases the manipulator is
not a human agent, but, for example, a hook or a grabber. It is possible to indicate the
shape of such manipulators, too (in this instance by a - and a = -form, respectively).
Thus, many sign languages share sets of classifier forms, but there are also language-
specific forms. In Whole Entity classifiers these forms often denote material and shape
characteristics. In both classifier categories, some variation in the choice of a classifier
is possible, which serves to focus on particular aspects of the referent.

3. Classifier verbs

For a good understanding, linguistic elements need to be investigated in linguistic con-
texts. Classifiers in sign languages often occur in combination with verbs, specifically
verbs that indicate (i) a referent’s motion through space, a change of posture, and its
location or existence somewhere in space, and (ii) the handling of referents (Supalla
1982, 1986; Schembri 2001; Engberg-Pedersen 1993; Wallin 1996, 2000; Tang 2003; and
many others). These, and particularly the first type of verbs, have been the focus of
most of the research on classifiers in sign languages. Verb-classifier combinations bear
a variety of terms in the literature (such as spatial-locative predicates, polymorphemic
predicates/verbs, productive signs, highly iconic structures, i.e. transfers of situation, to
mention a few). The terms used often reflect a particular view on the structure of these
combinations. In this chapter, they will be referred to as ‘classifier verbs’. Studies vary
with respect to what they consider as classifier verbs. For example, verbs of geometrical
description (or tracing SASSes) that are made at particular locations in space are some-
times counted among the classifier verbs; sometimes verbs expressing the manner of
locomotion are included, and some studies do not restrict the occurrence of classifiers
to motion verbs but also include other verbs in which the manual articulator is mean-
ingful. Different analyses of classifiers and classifier verbs result. We will focus here on
verbs that express a directed motion of a referent through space, a change of posture of
a referent, the localization of a referent in sign space, and the existence of a referent
at a location in sign space, for both Whole Entity and Handling classifiers.
Let us look at a typical example of classifier verbs in context from ASL in (5) (from
Emmorey 2002, 87): In such structures, a referent is initially introduced by a noun,
then followed by a verb with a classifier representing the referent of the noun (signs 1
and 3 introduce a referent and signs 2 and 4 contain classifier verbs). If more than one
referent is represented in space, the bigger/backgrounded entity is introduced first (the
‘Ground’ in the literature on language and space, e.g., Talmy 1985), and then the
smaller entity, which is in the focus of attention (the ‘Figure’). The simultaneous repre-
sentation of the referents in a classifier construction, the particular positioning of which
expresses the spatial relation between the referents, is reported to be obligatory in
some sign languages (see Supalla 1982; Perniss 2007; Morgan/Woll 2008; Chang/Su/Tai
2005; and Tang/Sze/Lam 2007). In the following sections, we will focus on the structure
of classifier verbs.
3.1. The matter of morphological complexity of classifier verbs

The morphological structure of classifier verbs is rather underinvestigated, which is
surprising in view of the fact that sign languages are generally claimed to have complex
morphology, and classifier verb formation is considered a very productive process. Su-
palla’s (1982, 1986) work gives an extensive morphological analysis of classifier verbs.
A classifier verb, in his view, is one (or a combination) of a small subset of verb roots,
which can be combined with large numbers of affixes. The most prominent of these
affixes is the classifier, which he considers an agreement marker for a noun argument of
the verb root. Some classifiers are morphologically complex. They can be combined
with orientation affixes as well as affixes indicating how the referent is affected (e.g.,
‘wrecked’ or ‘broken’). The verb root can, furthermore, be combined with various
manner and placement affixes. In Supalla’s analysis (and in others to follow), sign
parameters that in other signs are considered mere phoneme values are morphemic as
well as phonemic. Unfortunately, no complex signs with complete morphological analy-
sis are provided in Supalla’s work, nor are considerations given as to why particular
parts of signs have a particular morphological status rather than another (or are not
morphemic at all).
Supalla’s analysis has been criticized as being too complex, since he considers every
aspect of the signs under discussion that might contribute meaning to the whole as
morphemic. As a result, the suggested morphological structure is huge in view of the
fact that classifier verbs encode multiple aspects of motion and location events, espe-
cially in comparison to spoken languages (even spoken languages that are renowned
for their morphological complexity). Liddell (2003, 204⫺206) attempts to give a mor-
phological analysis of a two-handed classifier construction (glossed as person1-walk-
to-person2) based on the morphemes suggested by Supalla and counts four roots and
minimally 14 and maximally 24 affixes in this sign. This shows that Supalla’s morpho-
logical analysis of these verbs is indeed extremely complex, but also that it is not
detailed enough since the morpheme status of ten aspects in this particular sign is not
clear. One can, therefore, wonder whether too much morphology was assumed and
whether some aspects of these structures can be accounted for without necessarily
assigning them morphological value. Nevertheless, at least parts of Supalla’s analysis
hold valid for many researchers: it is generally assumed that at least the movements/
locations and the manual articulator are meaningful. The analyses of the morphological
structure of such verbs differ, however. Liddell (2003), for example, presents the view
that although the articulator and movement may be morphemes in such verbs, the
process by which the verbs are formed is not very productive, and in many verbs that,
at first sight, contain meaningful manual articulators and meaningful movements, these
sign parts behave idiosyncratically and are not productively combined with other sign
parts to form new structures. McDonald (1982) and Engberg-Pedersen (1993) observe
that the interpretation of classifier verbs seems to be in part dependent on the classifier
that is used. Engberg-Pedersen (1993) furthermore points out that particular move-
ments do not combine well with particular classifiers and suggests that the classifier is
the core element in these structures rather than the movement (although no further
claims are made with respect to the morphological status or structure of the verbs).
Slobin et al. (2003) suggest that classifier verbs may be similar to bipartite verb stems
in spoken languages (e.g., Klamath; DeLancey 1999), in which the contribution of classi-
fier and movement (and other) components is of equal importance in the complex
verb. Many studies, however, merely indicate that the classifier and the movement are
morphemes, although it is generally assumed that other aspects of the classifier verb
that convey information about the event (such as manner of locomotion and locations)
are (or at least can be) expressed by morphemes. More detailed discussion of the
structure of the sign is usually not given. Still, all studies agree that these constructions
are verbs, referring to an event or state in the real world.
It is recognized in most investigations that there is an anaphoric relation between
the classifier and the referent that is involved in the event. As stated in the previous
section, the referent is usually introduced before the classifier verb is signed, although in
some cases the referent is clear from the (previous or physical) context and need not
be mentioned. After introduction of the referent, it can be left unexpressed in the
further discourse (e.g. in narratives) since the classifier on the verb suffices to track
the referent involved. The relation is deemed systematic. Supalla (1982) and some of
the subsequent researchers (e.g., Benedicto/Brentari 2004; Chang/Su/Tai 2005; Cuxac
2003; Glück/Pfau 1998, 1999; Zwitserlood 2003, 2008) consider the classifier an agree-
ment marker or a proform for the referent on the verb. In these accounts, the move-
ment (or localization) in the sign is considered a verb root or stem, and the classifier
as well as the locus in space as functional elements (i.e. inflectional affixes). These
views will be discussed in more detail in the next section.

3.2. Verb roots, (in)transitivity, and the classifier category

As was stated in section 2, researchers generally distinguish two main categories of
classifiers: Whole Entity classifiers and Handling classifiers. The first are seen in verbs
that express a motion of a referent, its localization in space, or its existence in space.
In these verbs, the classifiers represent the referent directly. Handling classifiers, in
contrast, occur with verbs that show the manipulated motion or the holding of a refer-
ent. The contrast between the two has already been shown in (2) and (3), and is further
illustrated in (6), from DGS.
The signer uses two verbs with Whole Entity classifiers ( , in signs 13 and 15) and
two verbs with Handling classifiers ( , in signs 8 and 14), each classifier representing
the old woman. When he uses the verbs with Whole Entity classifiers, he describes an
independent motion of the woman, who wants to move up, onto the bus, and the
Handling classifiers are used for a manipulated motion of the old woman by a human
agent (the man). There is a close connection between the category of classifier and the
transitivity of the verb: Whole Entity classifiers occur with intransitive verbs, whereas
Handling classifiers are used with transitive verbs (in chapter 19, the use of classifier
types is discussed in connection with signer’s perspective; see also Perniss 2007). Fol-
lowing Supalla (1982), Glück and Pfau (1998, 1999), Zwitserlood (2003), and Benedicto
and Brentari (2004) consider the classifier in these verbs as a functional element: an
agreement marker, which functions in addition to agreement by use of loci in sign
space (see chapters 7 and 10 for details on agreement marking by loci in sign space).
Benedicto and Brentari (2004) furthermore claim that the classifier that is attached to
the verb is also responsible for its (in)transitivity: a Handling Classifier turns a (basi-
cally intransitive) verb into a transitive verb.
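The correlation just described lends itself to a compact illustration. The following sketch (again purely illustrative, not Benedicto and Brentari's actual formal analysis; the labels are invented for exposition) treats the category of the classifier as determining the valence of the classifier verb it combines with:

    from enum import Enum

    class Classifier(Enum):
        WHOLE_ENTITY = 'whole entity'  # referent moves/is located by itself
        HANDLING = 'handling'          # referent is held/moved by an agent

    def valence(cl):
        # Whole Entity classifiers occur with intransitive verbs (one
        # argument); Handling classifiers make the verb transitive (two).
        return 1 if cl is Classifier.WHOLE_ENTITY else 2

    # Cf. example (6): the woman's own motion onto the bus (intransitive)
    # versus the woman being moved by the man (transitive).
    assert valence(Classifier.WHOLE_ENTITY) == 1
    assert valence(Classifier.HANDLING) == 2
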
The analysis of classifiers as agreement markers is not uncontroversial. Counter-
arguments include the observations that classifiers are not obligatory (as they should
be if they were agreement markers), and that there is variability in the choice of a
classifier (as discussed in section 2.2), which should not be possible if classifiers were
agreement markers. These arguments, however, are not valid. First, marking of agree-
ment is not obligatory in many languages in the world that can have agreement mark-
ing (Corbett 2006). Second, and connected to the first point, the fact that classifiers do
not occur with verbs other than verbs of motion and location may have phono-
logical/articulatory reasons: it is not possible to add a morpheme expressed by a partic-
ular configuration of the manual articulator to a verb that already has phonological
features for that articulator. This is only possible with verbs that have no phonological
specification for the manual articulator, i.e. motion and location verbs (in the same
vein it is argued that many plain verbs cannot show agreement by loci in sign space
because they are body anchored (i.e. phonologically specified for a location); see also
chapter 7 on agreement).
Finally, variability in the choice of a classifier is, in part, the result of the verb’s
valence: a different classifier will be combined with an intransitive and a transitive
verb: Whole Entity classifiers appear on intransitive verbs, and transitive ones will be
combined with Handling classifiers. Moreover, some variability in the choice of agreement
markers is also observed in other (spoken) languages. This issue, however, is still un-
der debate.

3.3. The phonological representation of the morphemes in classifier verbs

Classifiers in sign languages are often described as bound morphemes, i.e. affixes (see,
among others, Supalla 1982; Meir 2001; Tang 2003; Zwitserlood 2003). They are gener-
ally considered to be expressed by a particular shape of the manual articulator, possibly
combined with orientation features. Classifiers thus lack phonological features for
place of articulation and/or movement. It may be partly for this reason that they are
bound. Researchers differ with respect to their phonological analysis of the verbs with
which classifiers occur. In some accounts (e.g., Meir 2001; Zwitserlood 2003, 2008),
classifier verbs contain a root that only has phonological specifications for movement
(or location) features, not for the manual articulator. Classifier verb roots and classifi-
ers, then, complement each other in phonological specification, and for this reason
simultaneous combination of a root and a classifier is always possible. In other accounts
(e.g., Glück/Pfau 1998, 1999), verbs are assumed to be phonologically specified for
movement and handshape features. The affixation of a classifier triggers a phonological
readjustment rule for handshape features, which results in a modification of the ver-
bal stem.
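The difference between the two analyses can be made concrete with a small sketch (illustrative only; the feature labels are invented placeholders, not a worked-out phonological model). Under the complementary-specification view, root and classifier specify disjoint feature sets, so their unification can never fail:

    def unify(root, classifier):
        # Unify two partial phonological specifications; a clash can only
        # arise if both specify a different value for the same feature.
        clash = {f for f in root.keys() & classifier.keys()
                 if root[f] != classifier[f]}
        if clash:
            raise ValueError('conflicting features: %s' % clash)
        return {**root, **classifier}

    # The root specifies only movement/location features, the classifier
    # only articulator features, so unification always succeeds:
    root = {'movement': 'path'}  # placeholder values throughout
    whole_entity_cl = {'handshape': 'flat', 'orientation': 'palm down'}
    print(unify(root, whole_entity_cl))

On the readjustment analysis of Glück and Pfau, by contrast, the verb stem is already specified for handshape features, and affixation of the classifier overwrites them rather than filling a gap.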
Some attention has been given to the apparent violations of well-formedness con-
straints that classifier verbs can give rise to (e.g., Aronoff et al. 2003, 70f). It has
also been observed that classifier verbs are mostly monosyllabic. However, apart from
Benedicto and Brentari (2004), there have been no accounts of phonological feature
specifications of classifiers and classifier verbs; in general classifiers are referred to as
‘handshapes’. Recent phonological models (e.g., Brentari 1998; van der Kooij 2002) as
well as new work on phonology may be extended to include classifier verbs.
To sum up, only a few studies offer reasoned proposals for a (partial) morpho-
logical structure of classifier verbs. In general, these signs are considered as verb roots
or verb stems that are combined with other material; classifiers are argued to be sepa-
rate morphemes, although the status of these morphemes is still debated: they are either
left unspecified or claimed to be roots or affixes (e.g., agreement markers). Handling
classifiers occur in transitive classifier verbs, where the classifier represents a referent
that is being held/manipulated (as well as a referent that holds/manipulates the other
referent); Whole Entity classifiers, in contrast, occur in intransitive verbs and represent
referents that move independently of manipulation or simply exist at particular loca-
tions in sign space. Phonological representation of classifier verbs in sign languages has
received little attention to date.

4. Classifiers in signs other than classifier verbs

Meaningful manual articulators occur not only in classifier verbs; they are also en-
countered in other signs. Some examples from NGT are shown in (7), in which we
recognize the hand configuration representing long and thin entities, i.e. knitting nee-
dles, legs, rockets, and thermometers (@), and a hand configuration often used in NGT
for manipulation of long and/or thin entities (with control), such as keys, fishing rods,
toothbrushes, and curtains ( ):

There are different views of the structure of such signs, as explained below: some
researchers consider them monomorphemic, while others claim that they are morpho-
logically complex. These views are discussed in the next section.

4.1. Complex or monomorphemic signs?

Traditionally, signs in which the manual articulator (and other parameters) are mean-
ingful, but which are not classifier verbs, are called ‘frozen’ signs. This term can be
interpreted widely, for example as ‘signs that are monomorphemic’, ‘signs that one
may find in a dictionary’, and ‘signs that may be morphologically complex but are
idiosyncratic in meaning and structure’. Most researchers adhere to the view that these
signs originate from classifier verbs that have been formed according to productive
sign formation processes, and that have undergone a process of lexicalization (e.g.,
Supalla 1980; Engberg-Pedersen 1993; Aronoff et al. 2003), i.e. the interpretation of
the sign has become more general than the classifier verb, and the hand configuration,
location, and movement parts no longer have distinct meanings, and therefore can no
longer be interchanged with other parts without radically changing the meaning of the
whole sign (in contrast to classifier verbs). Often the signs do not express (motion or
location) events any more, in contrast to classifier verbs (e.g., Supalla 1980; Newport
1982), they obey particular phonological restrictions that can be violated by classifier
verbs, and they can undergo various morphological processes that are not applicable
to classifier verbs, such as affixation of aspectual markers (Sandler/Lillo-Martin 2006;
Wilbur 2008) and noun derivation affixes (Brentari/Padden 2001).
There are also studies claiming that many such signs are not (fully) ‘frozen’, but,
on the contrary, morphologically complex. In some studies it is implied that sign lan-
guage users are aware of the meaningfulness of parts of such signs, such as the hand-
shape (Brentari/Goldsmith 1993; Cuxac 2003; Grote/Linz 2004; Tang/Sze/Lam 2007;
Sandler/Lillo-Martin 2006). Some researchers suggest that such signs are actually the
result of productive processes of sign formation (e.g., Kegl/Schley 1986; Brennan
1990a,b; Johnston/Schembri 1999; Zeshan 2003; Zwitserlood 2003, 2008). Signers of
various sign languages are reported to coin new signs on the spot when they need
them, for instance when the language does not have a conventional sign for the concept
they want to express or when they cannot remember the sign for a particular concept,
and these signs are usually readily understood by their discourse partners. Some of
these newly coined signs are accepted in the language community and become conven-
tionalized. This does not necessarily mean that they started out as productively formed
classifier constructions that are lexicalized in the conventionalization process (lexicali-
zation in this context meaning: undergoing (severe) phonological, morphological, and
semantic bleaching). Even though lexicalization as well as grammaticalization proc-
esses take place in all languages and sign languages are no exception, sign languages
are relatively young (see chapter 34 on lexicalization and grammaticalization). In addi-
tion to the fact that there may be other sign formation processes besides classifier verb
formation involved, it is not very plausible that diachronic lexicalization processes have
taken place on such a large scale as to result in the large numbers of signs in which
meaningful hand configurations occur (as well as other meaningful components) in
many sign languages, especially in the younger ones. Besides this, it has not been pos-
sible to systematically verify the claim of diachronic lexicalization of signs for most
sign languages because of a lack of well-documented historic sources.
Some phonological studies have recognized that the ‘frozen’ lexicon of sign lan-
guages contains many signs that may be morphologically complex. These studies recog-
nize relations between form and meaning of signs and sign parts, but lack morphologi-
cal accounts to which their phonological descriptions may be connected (Boyes Braem
1981; Taub 2001; van der Kooij 2002; see also chapter 18 for discussion of iconicity).
4.2. The structure of ‘frozen’ signs

A few studies discuss the structure of ‘frozen’ signs; these are briefly sketched below
(see chapter 5 for a variety of other morphological processes in sign languages). Bren-
nan’s (1990a,b) work on sign formation in BSL is comprehensive and aims at the
denotation of productively formed signs, i.e. the characteristic(s) of an entity or event
that are denoted in such signs and the way in which this is done, especially focusing
on the relation of form and movement of the manual articulator on the one hand and
aspects of entities and events on the other. Although Brennan indicates that sign parts
such as (changes of) hand configurations, movements, and locations are morphemes,
she does not provide morphological analyses of the signs in which they appear. She
roughly states that they are kinds of compounds, and distinguishes two types: simulta-
neous compounds and ‘mix ‘n’ match’ signs. Brennan argues that simultaneous com-
pounds are blends of two individual signs (many of which contain classifiers), each of
which necessarily drops one or more of its phonological features in the compounding
process, in order for the compound to be pronounceable. Mix ‘n’ match signs are
combinations of classifiers, symbolic locations, and meaningful non-manual compo-
nents. According to Brennan, the meaning of both types of sign is not always fully
decomposable.
Meir (2001) argues that Israeli Sign Language (Israeli SL) has a group of noun
roots (also called ‘Instrumental classifiers’) ⫺ free morphemes that are fully specified
for phonological features, and that can undergo a lexical process of Noun Incorpora-
tion into verbs. This process is subject to the restriction that the phonological features
of noun root and verb do not conflict. The output of this process is a compound.
Examples of such compounds are the signs glossed as spoon-feed, fork-eat, needle-
sew, and scissors-cut. According to Meir, the differences between the processes and
outputs of Noun Incorporation and classifier verb formation are the following: (i) the
former are combinations of free morphemes (verb and noun roots) whereas the latter
are combinations of verbs and affixes; (ii) combinations of classifier verbs and classifi-
ers are always possible because their phonological features never conflict, whereas
Noun Incorporation is blocked if the phonological features of the verb and noun root
conflict; (iii) in the compounding process, the incorporated Noun root constitutes a
syntactic argument, which cannot be expressed with a separate noun phrase in the
sentence after incorporation, whereas after classifier verb formation, both the classifier
representing a referent and the noun referring to that referent can be present in the
sentence.
An analysis that is reminiscent of Brennan’s (1990a,b) and Meir’s (2001) work is
provided in Zwitserlood (2003, 2008) for NGT. There it is argued that all manual sign
parameters (handshape, orientation, movement, and location) can be morphemic (as in
Brennan 1990a,b). All these morphemes are considered roots that are phonologically
underspecified (in contrast to Meir’s (2001) view) and that can combine into complex
signs called ‘root compounds’. Zwitserlood argues that the roots in these compounds
do not have a grammatical category. The signs resulting from combinations of these
roots are morphologically headless and have no grammatical category in the first instance.
The grammatical category is added in syntax, after the sign has been formed.
In this view, the differences between root compounds and classifier verbs, and the
processes by which they are formed are the following: (i) the former is a lexical (com-
pounding) process; the latter a grammatical (inflectional) process; (ii) classifier verbs
consist of only one root that is phonologically specified for a movement. This root is
assigned the grammatical category of verb in syntax, after which various affixes, such
as the classifier (which is considered an agreement marker), are added. Root com-
pounds, in contrast, contain more than one root, one of which may be a classifier, and
they can be assigned different grammatical categories; (iii) the classifier in a classifier
verb is always related to a syntactic argument of the verb, i.e. the Theme (moving)
argument; the classifier in root compounds is not systematically related to a syntactic
argument (in case the root compound is a verb); and (iv) whereas intransitive classifier
verbs combine with Whole Entity classifiers and transitive ones with Handling classifi-
ers, a classifier in a verbal root compound is not connected with the
verb’s valence. Zwitserlood’s account shows similarities to Brennan’s work and shares
some ideas with Meir’s analysis. It is also somewhat reminiscent of the idea of bipartite
(or rather, multipartite) stems suggested by Slobin et al. (2003), with the difference
that the root compounding process is not restricted to verbs.
To summarize, although in most sign languages classifiers are recognized in many
signs that are not classifier verbs, the morphological structure of these signs has been
investigated only rarely to date. This is largely due to the fact that these signs are
reminiscent of classifier verbs while they do not show the patterns and characteristics
observed in constructions with classifier verbs. As a result, the signs in question are
generally taken to be lexicalized forms without internal morphology. The literature
contains a few studies that recognize the fact that classifiers as well as other sign param-
eters are used systematically and productively in new sign formation in many sign
languages and that some of the signs thus formed enter the established lexicon (see
also Johnston/Schembri 1999). Signers also appear to be sensitive to the meaningful
elements within the signs. The general assumption that these signs are monomorphemic
may be partly due to the gloss tradition in sign language research, where signs are
labeled with a word or word combination from the local spoken language and/or Eng-
lish that often does not match the internal structure of the signs. Unintentionally, re-
searchers may be influenced by the gloss and overlook sign-internal structure (see
Hoiting/Slobin 2002; Zwitserlood 2003). There are several accounts of sign-internal
morphology (e.g., Padden/Perlmutter 1987; Fernald/Napoli 2000; Frishberg/Gough
2000; Wilbur 2008; as well as others mentioned in this section) along the lines of which
more morphological studies of signs and new sign coinage can be done. Also, psycholin-
guistic studies of sign processing are important in showing awareness of morphological
structure in users of sign languages.

5. The acquisition of classifiers in sign languages


Chapter 28 of this volume gives a general overview of sign language acquisition. In
addition, this section will focus particularly on research into the acquisition of classifier
structures by Deaf children. Many of these studies concentrate on production of classi-
fiers by Deaf children, i.e. on the age and the order in which they acquire the different
classifiers in their target language. Mostly elicitation tasks are used (e.g., Supalla 1982;
Kantor 1980; Schick 1990b; Fish et al. 2003). In a few studies, the movements within the
classifier verbs are also taken into account (e.g., Newport 1988; Tang/Sze/Lam 2007).
The children in these studies are generally aged three years and older, and the tasks
are often designed to elicit Whole Entity classifiers (including SASSes), although stud-
ies by Schick (1990b) and Slobin et al. (2003) also look at Handling classifiers. All
studies are cross-sectional.

5.1. Production studies

The general results of the production studies are that the youngest children initially
use different strategies in expressing the events presented in the stimuli. They use
lexical verbs of motion as well as classifier verbs, and sometimes they do not use a
verb at all. Older children use more classifier verbs than younger children. Although
the classifiers used by these children are often quite iconic, children initially do not
seem to make use of the possibility of iconic mapping that most sign languages offer
between motion events and spatial situations in real life on the one hand, and the use
of space and iconic classifier forms on the other (but see Slobin et al. (2003) for argu-
ments for iconic mapping in spontaneous (possibly gestural) utterances by children
between one and four years of age). As for the movements within the verbs, children
seem to represent complex path movements sequentially rather than simultaneously,
unlike adults (Supalla 1982; Newport 1988). Young children often use a general classi-
fier instead of a more specific one or a classifier that is easier to articulate than the
target classifier (e.g., < instead of the -form representing vehicles in ASL). Never-
theless, target classifiers that are considered motorically simple are not always acquired
earlier than those that are more complex (note that it is not always clear which hand-
shapes are simple and which are complex). In many cases where the spatial scene to
be described contains a Figure and a Ground object, children do not represent the
Ground referent simultaneously with the Figure referent, while in some cases in which
the Ground referent is present, it is not appropriate (e.g., the scale between the Ground
and the Figure referents is not felicitous). The correct use of classifiers is not mastered
before eight to nine years of age.
The conclusions of the studies are not unequivocal. In some studies (even studies
of acquisition of the same target language) the children appear to have acquired a
particular classifier earlier than in others, or a particular classifier category has been
acquired earlier than stated in another study (e.g., Tang/Sze/Lam 2003). Many research-
ers indicate that young children rarely use complex classifier constructions, i.e. con-
structions in which each hand represents a different entity. Studies that discuss the
errors made by the children provide interesting insights into their development,
for example apparent overgeneralization of morphological structure in lexical signs
(e.g., Bernardino 2006; Tang/Sze/Lam 2007).

5.2. Comprehension studies

Few comprehension studies of acquisition of classifier constructions in sign languages
have been undertaken to date. The existing studies focus on comprehension of the
motions and (relative) locations of referents in (intransitive) classifier verbs, rather
than on the classifier handshapes. For BSL, Morgan et al. (2008) conclude that verbs
containing path movements are understood better and earlier than those containing
localizations, and that neither movements nor localizations are mastered by five
years of age. Martin and Sera (2006) report that comprehension of locative relations
between referents (both static and dynamic) is still not fully acquired by children learn-
ing ASL at nine years of age.

5.3. Interpretation of the results

Because of the different approaches, the studies cannot easily be compared, and inter-
pretation of the results of the available acquisition studies is rather difficult. More
importantly, the results are somewhat obscured by the different assumptions about the
structures under research which underlie the designs and scorings. For example, al-
though the term ‘SASS’ is used in several studies, what the term covers is not described
in detail; therefore its interpretation may differ in these studies. Also, from descriptions
of test items it appears that these may involve classifier verbs as well as verbs that do
not express a motion or location of a referent (such as signs for looking and cutting).
One of the most important issues in this respect is the fact that in most studies vital
information is missing about the targets of the test items. Thus, it is often unclear how
these were set and how the children’s data were scored with respect to them. Since
adult language is the target for the children acquiring the language, language use and
comprehension of adults should be the target in acquisition tests. It can be seen in a
few studies (e.g., Fish et al. 2003) that the children’s classifier choices for referents
show variation, some of which indicates a particular focus on the referent. However,
it is not clear how this is related to adult variation on these test items. For instance,
Martin and Sera (2006) compared comprehension of spatial relations by children
acquiring ASL and children acquiring English; the children’s scores were also
compared to adult scores on the same test items (in ASL and English). As expected,
the English-speaking adults scored 99 % correct. However, the ASL-using adults had
a mean score of only 78 % correct. Apparently, in this case the test targets were not the
adult patterns, and it is unclear, therefore, what patterns were selected as targets. This
also holds for most other classifier acquisition studies.

5.4. Summary

Research shows that the acquisition of classifier constructions in sign languages is a
very complex task, in which the child makes little use of the iconic mapping between
event and linguistic representation. Correct use of classifier verbs is not fully acquired
until children are in their early teens. Further research with broader scope, taking
context, different strategies, and variation in the choice of classifier into account and
clearly relating the results to adult comprehension and performance is necessary to
shed more light on the acquisition of these constructions.

6. Classifiers in spoken and sign languages: a comparison

6.1. Overview of recent research on spoken language classifiers

Research into classifiers in spoken languages was well under way in the 1970s. It became clear
that there are different classifier systems in the world’s languages. As stated in sec-
tion 2, early study of sign language classifiers was much influenced by the then avail-
able literature on spoken language classifiers. In an overview article, Allan (1977)
distinguished four types of classifier languages, one of which is the ‘predicate
classifier language’ (e.g., Navajo). Classifiers in sign languages seemed to match
this type, and similar structures in Navajo and ASL were used to exemplify this. How-
ever, the comparison does not hold on two points: first, Navajo is a language with
classificatory verbs rather than classifier verbs, the difference being that in classifier
verbs a separate verb stem and classifier can be distinguished, while in classificatory
verbs the verb stem itself is responsible for classification of the referent involved in
the event and no separate classifying morpheme can be discerned (Young/Morgan
1987; Aikhenvald 2000; Grinevald 2000). Second, and related to the previous point,
the early comparisons between structures in Navajo and ASL were based on misinter-
pretation of the Navajo classificatory verbs (Engberg-Pedersen 1993; Schembri 2001;
Zwitserlood 1996, 2003).
Recent studies, particularly work by Aikhenvald (2000) and Grinevald (2000), provide
much more detailed and more recent information about classifiers in a variety of spoken languages,
covering their semantics, pragmatics, function, and morphological realization. If we
take as a premise that a classifier is a distinct morpheme, four major categories of
classifiers can be distinguished (which are not quite the same as those suggested by
Allan (1977)). These have the following characteristics:

1) Noun classifiers are free morphemes that occur within a noun phrase (more than
one classifier may occur within the noun phrase). The noun classifiers’ semantics
are often based on animacy and physical properties of the referent. The choice of
a noun classifier is based on semantics and can vary, when a speaker focuses on
different characteristics of the noun referent. Not all nouns in a language take a
classifier. The sets of noun classifiers in different languages vary from small (as
few as two, e.g. in Emmi, a language of Australia) to very large (several hundred in some
Asian languages). These classifiers function as determiners but can also be used pronomi-
nally (in which case the NP does not contain a noun).
2) Numeral classifiers are free or bound morphemes that are obligatory in numeral
and quantified noun phrases. They also occur occasionally with adjectives and de-
monstratives. The semantics of these classifiers includes animacy, social status, di-
rectionality, and physical and functional properties. The choice of a numeral classi-
fier is predominantly semantic and some nouns have alternative choices of
classifiers, depending on the property of the noun that is in focus. Every noun with
a countable referent has a classifier, although there may be some abstract nouns
that are not classified. The number of classifiers may range from few (e.g., 14 in
the Tashkent dialect of Uzbek) to large (e.g., an estimated 200 in Thai and Burmese).
Their main function is to individuate nouns (typically ‘concept’ or mass nouns in
the languages with this classifier system) in a quantificational environment, but
they can also have an anaphoric function.
3) Genitive (or: possessive or relational) classifiers are free or bound morphemes that
occur in possessive constructions within noun phrases. They generally refer to the semantic
class of the possessed nouns. Not all nouns are categorized by a classifier; nouns
that are classified often belong to a particular semantic group. The semantics con-
cerns physical and functional properties, nature, and sometimes animacy. Some
languages with a system of genitive classifiers have a ‘generic’ or ‘default’ classifier
that can be used instead of more specific ones. This type of classifier can consist
of independent words or affixes. The choice of a classifier is strictly semantic and
the size of the classifier inventories is variable. The function of this type of classifier
is the expression of possession.
4) Verbal classifiers are bound morphemes that are affixed to verbs and are linked to
verb arguments (usually subjects or objects, but sometimes even peripheral argu-
ments), in terms of their inherent properties. The semantics of these classifiers has
a wide range, usually based on physical and functional properties, nature, direction-
ality/orientation, quanta, and sometimes animacy. The number of classifiers ranges
from several dozen (e.g., Terena, a language spoken in Brazil) to over one hundred
(e.g., Mundurukú, a Tupi language of north central Brazil). Usually only a subset
of verbs in a language takes a classifier. Not all nouns are classified, but a noun
can have more than one classifier. The main function of this type of classifier is
referent tracking.

A note of caution is needed here: the characteristics of the classifier systems outlined
above are generalizations, based on descriptions of (large) sets of data from languages
that employ one or more of these classifier systems. There is, however, much variation
within the systems. Also, some classifier systems (such as numeral classifiers) have
been well studied, whereas others, particularly verbal classifier systems, remain
under-researched, which considerably complicates a comparison between classifier
systems in spoken and sign languages.

6.2. A comparison between (verbal) classifiers in spoken and sign languages

As stated in section 3, classifiers in sign languages typically occur on verbs. Thus, a
comparison between sign and spoken languages should focus primarily on verbal classi-
fiers. Classifiers in sign languages share a number of characteristics with verbal classifi-
ers in spoken languages. In some characteristics, however, they differ. We will now
focus on the main characteristics of classifiers in both modalities and discuss their
similarities and differences.
First, verbal classifiers are affixes attached to a verb stem (Aikhenvald 2000, 428;
Grinevald 2000, 67). For example, in the Australian language Gunwinggu the classifier
bo: (for liquid referents) is bound to the verb stem mangan (‘fall’) (Oates 1964, in
Mithun 1986, 389):

(8) gugu ga- bo:- mangan [Gunwinggu]
water it- cl:liquid- fall
‘Water is falling.’

Classifiers in sign languages are also considered affixes by many researchers (e.g.,
Supalla 1982, 24; Sandler/Lillo-Martin 2006, 77), while others do not specify their mor-
phological status.
Second, verbal classifiers in spoken languages are linked to the subject or object
argument of the verb to which they are affixed, and they are used to maintain reference
to the referent throughout a discourse (Aikhenvald 2000, 149). The verb determines
which argument the classifier represents: the classifiers represent the subject in intran-
sitive verbs and the object in transitive verbs. This is illustrated with the classifier n-
for round entities in the Northern Athabaskan language Koyukon, here representing a
rope. The rope is the subject of the intransitive verb in (9a) and the object of the
transitive verb in (9b) (Thompson 1993, in Aikhenvald 2000, 168):

(9) a. tl’ool n- aal’onh [Koyukon]
rope cl:round.thing- be.there
‘A rope is there.’
b. tl’ool n- aan- s- ’onh
rope cl:round.thing- pref- 1sg- arrive.carrying
‘I arrived carrying a rope.’

As we have seen in examples (5) and (6) in section 3, a signer can use a classifier after
its referent has been introduced (or when it is clear from the context), to relate the
referent’s motions through space, a change in its posture, or its existence and/or loca-
tion in sign space. The classifier suffices to maintain the reference through long
stretches of discourse, and thus no overt nouns are necessary (though they may still
occur, e.g. to re-establish reference). Thus, similarly to verbal classifiers in spoken
languages, classifiers in sign languages function as referent tracking devices. Some re-
searchers claim that classifiers represent verb arguments and function as agreement
markers of the arguments on the verbs. A difference between the two modalities is
that there are generally no separate classifiers for transitive and intransitive verbs in
spoken languages, whereas such a difference is found in sign languages: Whole Entity
classifiers appear on intransitive verbs, while Handling classifiers appear on
transitive verbs.
Third, although verbal classifiers in spoken languages have an anaphoric function,
their use is not obligatory. They typically occur on a subset of a language’s verbs, and
are sometimes used for special effects (e.g., stressing that a referent is completely in-
volved in the event in Palikur (an Arawak language used at the mouth of the Amazon
river), as stated by Aikhenvald (2000, 165)). This characteristic is rather difficult to
compare with classifiers in sign languages. Apparently classifiers in sign languages only
occur on a subset of verbs, but this may be a result of the articulatory possibilities of
the manual-visual modality as described above in sections 3.3 and 4.2. Classifiers in sign
languages can only occur on verbs that do not have phonological specifications for
the manual articulator (usually verbs of motion and location), not on verbs that have
inherent phonological specifications for the hand. It is interesting, though, that verbs
that take classifiers in spoken languages are also often motion verbs, positional verbs,
verbs expressing the handling of an object, as well as verbs that describe physical
properties of the referent. Whether or not sign language classifiers are obligatory on
the subset of motion/location verbs is still a matter of debate. For example, the fingertip
that is sometimes used for localization of referents in space or for tracing the motion
of a referent through space is regarded by some as a kind of ‘default’ classifier, used
when a signer does not focus on any particular characteristic of the referent (see also
section 2.2). On this view, it can be argued that verbs of motion appearing with this
shape of the articulator do indeed have a classifier, and thus that classifiers are obligato-
rily attached to these verbs. In other views, the finger(tip) is considered a (default)
phonetic articulation, spelled out simply because the expression of the location or
movement needs an articulator, or the finger(tip) handshape is considered one of
the phonological features of the verb, which undergoes a change when a classifier
morpheme is added (e.g., Glück/Pfau 1998, 1999). More research is needed to
determine which of these views is correct.
Fourth, verbal classifier systems (as well as other classifier systems) in spoken lan-
guages allow variability in the choice of a classifier. Thus a noun can be categorized
with more than one classifier (this is sometimes called ‘reclassification’). The variability
range is to some extent dependent on the size of the inventory of classifiers, and on
the semantic range of the categorization. An example of this variability from Miraña
(also called Bora; a Witotoan language spoken in Brazil, Peru, and Colombia) is shown
below. In this instance, a more general classifier appears on the verb in (10a) and a
classifier that focuses on the shape in (10b) (Seifart 2005, 80):

(10) a. kátɨ:βí -ni i: -ni pihhɨ́ -ko [Miraña]
fall -cl:inanimate dist -cl:inanimate fish.nmz -cl:pointed
‘It (inanimate) fell, that (pointed) fishing rod.’
b. kátɨ:βí -ko i: -ko pihhɨ́ -ko
fall -cl:pointed dist -cl:pointed fish.nmz -cl:pointed
‘It (pointed) fell, that (pointed) fishing rod.’

As discussed in section 2, classifier variation is also possible in sign languages, both for
Whole Entity and Handling classifiers. This variability has been one of the reasons for
proposing other, and different, terms for these elements. Slobin et al. (2003) state that
the term ‘classifier’ is in fact a misnomer, because choosing a particular form of the
manual articulator is an act of indicating some property of the referent rather than of
classifying the referent. This holds true not only for classifiers in sign languages but
also for those in spoken languages. Traditionally, the main function of these elements
was considered to be categorization. However, recent work by, among others, Croft (1994),
Aikhenvald (2000), and Grinevald (2000) shows that categorization is not the main
function but rather a prerequisite for the various primary functions of each classifier
category (e.g., individuation for numeral classifiers, reference tracking for verbal
classifiers). In this respect, then, classifiers in sign and spoken languages are rather
similar, despite the by now infelicitous term.
Example (10) also shows that the classifiers in Miraña occur not only on the
verb but also on nouns and determiners. This is a frequent observation in spoken
languages; languages with verbal classifiers often have multiple classifier systems. This
is in contrast to sign languages, which only have verbal classifiers.
A further characteristic of spoken languages with verbal classifier systems is that
not all nouns are classified. Even though it is becoming clear that classification does
not so much concern nouns but rather entities, it can still be stated that not all entities
are represented by a classifier in spoken languages. As for sign languages, it has not
been mentioned in the literature that there are entities that are not classified by a
particular hand configuration. This does not imply that all entities in sign languages
can be represented by a classifier. Studies so far have used narrative data, often elicited
by pictures, stories, and movies, that feature (restricted sets of) concrete entities. It is
possible that other, particularly more abstract, entities might not take classifiers, or
that they may not be represented by classifiers since they are less likely to move
through space or to enter spatial relations. On the other hand, it is just as plausible
that abstract entities can be assigned particular characteristics, such as shape or ani-
macy, and enter metaphoric spatial relations. For the moment the issue remains unre-
solved.
Finally, we have seen that sign language classifiers occur not only with motion
and location verbs but are also used in lexicogenesis (section 4), even though
this issue still needs extensive research. It has been claimed (e.g., Engberg-Pedersen
1993; Schembri 2003) that this is not the case in spoken languages and that this is a
point where sign and spoken language classifiers differ. However, classifiers in spoken
languages can be used in word formation, too. This has not been focused on in the
overview literature on classifiers, but is discussed in studies of particular spoken lan-
guages with classifier systems (e.g., Senft 2000; Seifart 2005; van der Voort 2004). The
following examples from Miraña (Seifart 2005, 114) show that a noun root (ɨ́hɨ ‘ba-
nana’) can be combined with one or more classifiers. Seifart states that such combina-
tions are compounds.

(11) a. ɨ́hɨ́ [Miraña]
banana (general: fruit, plant, bunch, …)
b. ɨ́hɨ -kó
banana -cl:pointed
‘banana plant’
c. ɨ́hɨ -kó -ʔámı̀
banana -cl:pointed -cl:leaf
‘leaf of a banana plant’
d. ɨ́hɨ -ʔó
banana -cl:oblong
‘banana (fruit)’
e. ɨ́hɨ -ʔó -βí:ɨ́
banana -cl:oblong -cl:chunk
‘chunk of a banana’

Seifart (2005, 121) indicates that the meaning of the resulting compounds is not always
compositional and may even differ substantially from the combined meanings of the
component parts. This has also been reported for signs that contain classifiers (e.g.,
Brennan 1990a,b; Johnston/Schembri 1999) and may be one of the grounds for the
assumption that such signs are ‘frozen’. Apparently, verbal classifiers in sign and spo-
ken languages are similar in this respect.
To summarize, recent findings in the spoken language literature on classifiers reveal
that there are a number of similarities between verbal classifiers in spoken and
sign languages, contrary to what has been claimed previously in the literature (e.g.,
Engberg-Pedersen 1993; Slobin et al. 2003; Schembri 2003). These similarities concern
the main functions of classifiers: the lexical function of word/sign formation and the
grammatical function of reference-tracking. Also, in both spoken and sign languages it
is possible to choose a particular classifier in order to focus on a particular characteris-
tic of an entity, although the entity may have a preferred classifier. A difference lies
in the observation that sign languages only have verbal classifiers, whereas there are
at least four different classifier systems in spoken languages and spoken languages may
combine two or more of these systems (especially languages with a system of verbal
classifiers). Some characteristics of verbal classifiers in the two modalities cannot yet
be compared, e.g. whether there are referents that are not classified in sign languages,
and whether the use of a classifier is optional, as it is in spoken language verbal
classifier systems.

7. Conclusion

Various aspects of classifiers in sign languages have been discussed in this chapter, and
compared with classifiers in spoken languages. Although classifiers have been the focus
of much attention in sign language research (much more than verbal classifiers in
spoken languages), many unresolved issues remain. Also, because of this focus, the
phenomenon of classifiers may have been attributed a larger role in sign languages than
it deserves. There seem to be particular expectations with respect to classifier verbs: since
the process of classifier verb formation is considered productive, many more forms
and greater use of these signs are expected than may actually occur (whereas another
productive process of sign formation involving classifiers, described in section 4, is
rather neglected). Like speakers, signers have several means to express spatial relations
between entities and the movements of entities through space; classifier verbs are only
a subset of these. Users of sign languages have a range of devices at their disposal for
the expression of existence, location, motion, and locomotion, as well as the shape and
orientation of entities. These devices can be combined, but signers may also use only
one of these devices, focusing on or defocusing a particular aspect of an event. Finally,
most work on classifiers in sign languages is based on narrative data, much of which
has been elicited by pictures, comics, and movies. The use of particular stimuli ensured
the presence of classifiers in the data and is convenient for cross-linguistic comparison,
but it also biases the resulting generalizations and, consequently, the studies built on
them, such as acquisition studies and comparisons with similar phenomena in spoken
languages.
Although many generalizations and claims have been made about classifiers and
classifier constructions in sign languages, and theories have been formed on the basis
of these generalizations (and vice versa), there is still much controversy in this field.
It is necessary that these observations be verified against data of different genres, especially
natural discourse, obtained from large numbers of users of (various) sign languages.
Also, recent developments in other linguistic domains need to be taken into account.
The results of such studies will give us a clearer view of the phenomenon and provide
a solid basis for further research.

Acknowledgements: I am indebted to my colleagues Aslı Özyürek, Pamela Perniss, and
Connie de Vos for providing me with data from their TİD, DGS, and Kata Kolok
corpora, and to them as well as to Adam Schembri and two reviewers for comments
on earlier versions of this chapter. I am also grateful to my colleagues Yassine Nauta
and Johan Ros for signing the NGT examples in (7). The construction of the DGS/
TİD corpora was made possible through a VIDI grant from the Dutch Science Founda-
tion NWO. The construction of the Corpus NGT was funded by an investment grant
from the same foundation.

8. Literature

Adams, Karen
1986 Numeral Classifiers in Austroasiatic. In: Craig, Colette (ed.), Noun Classes and Catego-
rization. Amsterdam: Benjamins, 241⫺262.
Aikhenvald, Alexandra Y.
2000 Classifiers: A Typology of Noun Categorization Devices. Oxford: Oxford University
Press.
Allan, Keith
1977 Classifiers. In: Language 53, 285⫺311.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy
2003 Classifier Constructions and Morphology in Two Sign Languages. In: Emmorey, Karen
(ed.), Perspectives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum,
53⫺84.
Benedicto, Elena/Brentari, Diane
2004 Where Did All the Arguments Go? Argument-changing Properties of Classifiers in
ASL. In: Natural Language and Linguistic Theory 22, 743⫺810.
Bernardino, Elidea L. A.
2006 What Do Deaf Children Do When Classifiers Are Not Available? The Acquisition of
Classifiers in Verbs of Motion and Verbs of Location in Brazilian Sign Language (LSB).
PhD Dissertation, Graduate School of Arts and Sciences, Boston University.
Boyes-Braem, Penny
1981 Features of the Handshape in American Sign Language. PhD Dissertation, University
of California, Berkeley.
Brennan, Mary
1990a Productive Morphology in British Sign Language. Focus on the Role of Metaphors. In:
Prillwitz, Siegmund/Vollhaber, Tomas (eds.), Current Trends in European Sign Lan-
guage Research. Proceedings of the 3rd European Congress on Sign Language Research.
Hamburg, July 26⫺29, 1989. Hamburg: Signum, 205⫺228.
Brennan, Mary
1990b Word Formation in British Sign Language. Stockholm: University of Stockholm.
Brentari, Diane/Goldsmith, John
1993 Secondary Licensing and the Non-dominant Hand in ASL Phonology. In: Coulter, Ge-
offrey R. (ed.), Current Issues in ASL Phonology. New York: Academic Press, 19⫺41.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brentari, Diane/Padden, Carol
2001 Native and Foreign Vocabulary in American Sign Language. In: Brentari, Diane (ed.),
Foreign Vocabulary in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 87⫺119.
Chang, Jung-hsing/Su, Shiou-fen/Tai, James H-Y.
2005 Classifier Predicates Reanalyzed, with Special Reference to Taiwan Sign Language. In:
Language and Linguistics 6(2), 247⫺278.
Corazza, Serena
1990 The Morphology of Classifier Handshapes in Italian Sign Language (LIS). In: Lucas,
Ceil (ed.), Sign Language Research: Theoretical Issues. Washington, DC: Gallaudet Uni-
versity Press, 71⫺82.
Croft, William
1994 Semantic Universals in Classifier Systems. In: Word 45, 145⫺171.
Cuxac, Christian
2000 La Langue des Signes Française: les Voies de l’Iconicité. Paris: Ophrys.
Cuxac, Christian
2003 Iconicité des Langues des Signes: Mode d’Emploi. In: Monneret, Philippe (ed.), Cahiers
de Linguistique Analogique 1. A.B.E.L.L. Université de Bourgogne, 239⫺263.
Cuxac, Christian/Sallandre, Marie-Anne
2007 Iconicity and Arbitrariness in French Sign Language ⫺ Highly Iconic Structures, De-
generated Iconicity and Diagrammatic Iconicity. In: Pizzuto, Elena/Pietrandrea, Paola/
Simone, Raffaele (eds.), Verbal and Signed Languages. Comparing Structures, Con-
structs and Methodologies. Berlin: Mouton de Gruyter, 13⫺33.
Delancey, Scott
1999 Lexical Prefixes and the Bipartite Stem Construction in Klamath. In: International Jour-
nal of American Linguistics 65, 56⫺83.
DeMatteo, Asa
1977 Visual Imagery and Visual Analogues in American Sign Language. In: Friedman, Lynn
A. (ed.), On the Other Hand. New Perspectives on American Sign Language. New York:
Academic Press, 109⫺137.
Denny, J. Peter
1979 The ‘Extendedness’ Variable in Classifier Semantics: Universal Features and Cultural
Variation. In: Mathiot, Madeleine (ed.), Ethnolinguistics: Boas, Sapir and Whorf Revis-
ited. The Hague: Mouton Publishers, 97⫺119.
Denny, J. Peter/Creider, Chet A.
1986 The Semantics of Noun Classes in Proto Bantu. In: Craig, Colette (ed.), Noun Classes
and Categorization. Amsterdam: Benjamins, 217⫺239.
Emmorey, Karen
2002 Language, Cognition, and the Brain. Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum.
Engberg-Pedersen, Elisabeth
1993 Space in Danish Sign Language: The Semantics and Morphosyntax of the Use of Space
in a Visual Language. Hamburg: Signum.
Engberg-Pedersen, Elisabeth
1995 Point of View Expressed through Shifters. In: Emmorey, Karen/Reilly, Judy (eds.), Lan-
guage, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 133⫺154.
Fernald, Theodore B./Napoli, Donna Jo
2000 Exploitation of Morphological Possibilities in Signed Languages: Comparison of
American Sign Language with English. In: Sign Language & Linguistics 3(1), 3⫺58.
Fischer, Susan D.
2000 Thumbs Up Versus Giving the Finger: Indexical Classifiers in NS and ASL. Paper
Presented at the 7th International Conference on Theoretical Issues in Sign Language
Research (TISLR), Amsterdam.
Fish, Sarah/Morén, Bruce/Hoffmeister, Robert/Schick, Brenda
2003 The Acquisition of Classifier Phonology in ASL by Deaf Children: Evidence from
Descriptions of Objects in Specific Spatial Arrangements. In: Beachley, Barbara et al.
(eds.), Proceedings of the Annual Boston University Conference on Language Develop-
ment 27(1). Somerville, MA: Cascadilla Press, 252⫺263.
Frishberg, Nancy/Gough, Bonnie
2000[1973] Morphology in American Sign Language. In: Sign Language & Linguistics 3(1),
103⫺131.
Glück, Susanne/Pfau, Roland
1998 On Classifying Classification as a Class of Inflection in German Sign Language. In:
Cambier-Langeveld, Tina/Lipták, Aniko/Redford, Michael (eds.), Proceedings of Con-
Sole VI. Leiden: SOLE, 59⫺74.
Glück, Susanne/Pfau, Roland
1999 A Distributed Morphology Account of Verbal Inflection in German Sign Language. In:
Cambier-Langeveld, Tina/Lipták, Aniko/Redford, Michael/van der Torre, Eric Jan
(eds.), Proceedings of ConSole VII. Leiden: SOLE, 65⫺80.
Grinevald, Colette
2000 A Morphosyntactic Typology of Classifiers. In: Senft, Günter (ed.), Systems of Nominal
Classification. Cambridge: Cambridge University Press, 50⫺92.
Grote, Klaudia/Linz, Erika
2004 The Influence of Sign Language Iconicity on Semantic Conceptualization. In: Müller,
Wolfgang G./Fischer, Olga (eds.), Iconicity in Language and Literature 3. Amsterdam:
Benjamins, 23⫺40.
Hendriks, Bernadette
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD
Dissertation, University of Amsterdam. Utrecht: LOT.
Hilzensauer, Marlene/Skant, Andrea
2001 Klassifikation in Gebärdensprachen. In: Leuninger, Helen/Wempe, Karin (eds.), Gebär-
densprachlinguistik 2000 ⫺ Theorie und Anwendung. Hamburg: Signum, 91⫺111.
Hoiting, Nini/Slobin, Dan
2002 Transcription as a Tool for Understanding: The Berkeley Transcription System for Sign
Language Research (BTS). In: Morgan, Gary/Woll, Bencie (eds.), Directions in Sign
Language Acquisition. Amsterdam: Benjamins, 55⫺76.
Johnston, Trevor
1989 Auslan: The Sign Language of the Australian Deaf Community. PhD Dissertation, Uni-
versity of Sydney.
Johnston, Trevor/Schembri, Adam
1999 On Defining Lexeme in a Signed Language. In: Sign Language & Linguistics 2(2),
115⫺185.
Kantor, Rebecca
1980 The Acquisition of Classifiers in American Sign Language. In: Sign Language Studies
28, 193⫺208.
Kegl, Judy A./Schley, Sarah
1986 When Is a Classifier No Longer a Classifier? In: Nikiforidou, V./Clay, M. Van/Niepokuj,
M./Feder, D. (eds.), Proceedings of the 12th Annual Meeting of the Berkeley Linguistic
Society. Berkeley, CA: Berkeley Linguistics Society, 425⫺441.
Kooij, Els van der
2002 Phonological Categories in Sign Language of the Netherlands. The Role of Phonetic
Implementation and Iconicity. PhD Dissertation, Utrecht University. Utrecht: LOT.
Liddell, Scott K./Johnson, Robert E.
1987 An Analysis of Spatial-Locative Predicates in American Sign Language. Paper Pre-
sented at the 4th International Symposium on Sign Language Research, Lappeenranta,
Finland.
Liddell, Scott K.
2003 Sources of Meaning in ASL Classifier Predicates. In: Emmorey, Karen (ed.), Perspec-
tives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 199⫺220.
Mandel, Mark Alan
1977 Iconic Devices in American Sign Language. In: Friedman, Lynn A. (ed.), On the Other
Hand. New Perspectives on American Sign Language. New York: Academic Press,
57⫺107.
Martin, Amber Joy/Sera, Maria D.
2006 The Acquisition of Spatial Constructions in American Sign Language and English. In:
Journal of Deaf Studies and Deaf Education 11(4), 391⫺402.
McDonald, Betsy Hicks
1982 Aspects of the American Sign Language Predicate System. PhD Dissertation, University
of Buffalo.
Meir, Irit
2001 Verb Classifiers as Noun Incorporation in Israeli Sign Language. In: Booij, Gerard/
Marle, Jacob van (eds.), Yearbook of Morphology 1999. Dordrecht: Kluwer, 299⫺319.
Mithun, Marianne
1986 The Convergence of Noun Classification Systems. In: Craig, Colette (ed.), Noun Classes
and Categorization. Amsterdam: Benjamins, 379⫺397.
Morgan, Gary/Woll, Bencie
2003 The Development of Reference Switching Encoded through Body Classifiers in British
Sign Language. In: Emmorey, Karen (ed.), Perspectives on Classifiers in Sign Languages.
Mahwah, NJ: Lawrence Erlbaum, 297⫺310.
Morgan, Gary/Herman, Rosalind/Barriere, Isabelle/Woll, Bencie
2008 The Onset and Mastery of Spatial Language in Children Acquiring British Sign Lan-
guage. In: Cognitive Development 23, 1⫺19.
Newport, Elissa
1982 Task Specificity in Language Learning? Evidence from Speech Perception and Ameri-
can Sign Language. In: Wanner, Eric/Gleitman, Lila (eds.), Language Acquisition: the
State of the Art. Cambridge: Cambridge University Press, 450⫺486.
Newport, Elissa
1988 Constraints on Learning and Their Role in Language Acquisition: Studies of the Acqui-
sition of American Sign Language. In: Language Sciences 10, 147⫺172.
Nyst, Victoria
2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, Uni-
versity of Amsterdam. Utrecht: LOT.
Padden, Carol A./Perlmutter, David M.
1987 American Sign Language and the Architecture of Phonological Theory. In: Natural
Language and Linguistic Theory 5, 335⫺375.
Perniss, Pamela
2007 Space and Iconicity in German Sign Language (DGS). PhD Dissertation, University of
Nijmegen. Nijmegen: MPI Series in Psycholinguistics.
Rosen, Sara Thomas
1989 Two Types of Noun Incorporation: A Lexical Analysis. In: Language 65, 294⫺317.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Schembri, Adam
2001 Issues in the Analysis of Polycomponential Verbs in Australian Sign Language (Auslan).
PhD Dissertation, University of Sydney.
Schembri, Adam
2003 Rethinking ‘Classifiers’ in Signed Languages. In: Emmorey, Karen (ed.), Perspectives
on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 3⫺34.
Schick, Brenda
1990a Classifier Predicates in American Sign Language. In: International Journal of Sign Lin-
guistics 1, 15⫺40.
Schick, Brenda
1990b The Effects of Morphosyntactic Structure on the Acquisition of Classifier Predicates in
ASL. In: Lucas, Ceil (ed.), Sign Language Research. Theoretical Issues. Washington,
DC: Gallaudet University Press, 358⫺374.
Seifart, Frank
2005 The Structure and Use of Shape-based Noun Classes in Miraña (North West Amazon).
PhD Dissertation, University of Nijmegen. Nijmegen: MPI Series in Psycholinguistics.
Senft, Günther
2000 What Do We Really Know About Nominal Classification Systems? In: Senft, Günther
(ed.), Nominal Classification Systems. Cambridge: Cambridge University Press, 11⫺49.
Slobin, Dan I./Hoiting, Nini/Kuntze, Marlon/Lindert, Reyna/Weinberg, Amy/Pyers, Jennie/An-
thony, Michelle/Biederman, Yael/Thumann, Helen
2003 A Cognitive/Functional Perspective on the Acquisition of ‘Classifiers’. In: Emmorey,
Karen (ed.), Perspectives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erl-
baum, 271⫺298.
Supalla, Ted
1980 Morphology of Verbs of Motion and Location in American Sign Language. In: Caccam-
ise, Frank/Hicks, Don (eds.), Proceedings of the 2nd National Symposium of Sign Lan-
guage Research and Teaching, 1978. Silver Spring, MD: National Association of the
Deaf, 27⫺45.
Supalla, Ted
1982 Structure and Acquisition of Verbs of Motion and Location in American Sign Language.
PhD Dissertation, University of California, San Diego.
Supalla, Ted
1986 The Classifier System in American Sign Language. In: Craig, Colette (ed.), Noun
Classes and Categorization. Amsterdam: Benjamins, 181⫺214.
Talmy, Leonard
1985 Lexicalization Patterns: Semantic Structure in Lexical Forms. In: Shopen, Timothy
(ed.), Language Typology and Syntactic Description. Grammatical Categories and the
Lexicon. Cambridge: Cambridge University Press, 57⫺149.
Tang, Gladys
2003 Verbs of Motion and Location in Hong Kong Sign Language: Conflation and Lexicali-
zation. In: Emmorey, Karen (ed.), Perspectives on Classifiers in Sign Languages. Mah-
wah, NJ: Lawrence Erlbaum, 143⫺165.
Tang, Gladys/Sze, Felix Y. B./Lam, Scholastica
2007 Acquisition of Simultaneous Constructions by Deaf Children of Hong Kong Sign Lan-
guage. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno A. (eds.), Simul-
taneity in Signed Languages. Form and Function. Amsterdam: Benjamins, 283⫺316.
Taub, Sarah F.
2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cam-
bridge: Cambridge University Press.
Voort, Hein van der
2004 A Grammar of Kwaza. Berlin: Mouton de Gruyter.
Wallin, Lars
1996 Polysynthetic Signs in Swedish Sign Language. PhD Dissertation, University of Stock-
holm.
Wallin, Lars
2000 Two Kinds of Productive Signs in Swedish Sign Language: Polysynthetic Signs and Size
and Shape Specifying Signs. In: Sign Language & Linguistics 3, 237⫺256.
Wilbur, Ronnie B.
2008 Complex Predicates Involving Events, Time and Aspect: Is This Why Sign Languages
Look so Similar? In: Quer, Josep (ed.), Signs of the Time. Selected Papers from TISLR
2004. Hamburg: Signum, 217⫺250.
Young, Robert/Morgan, William
1987 The Navajo Language ⫺ A Grammar and Colloquial Dictionary. Albuquerque, NM:
University of New Mexico Press.
Zeshan, Ulrike
2003 ‘Classificatory’ Constructions in Indo-Pakistani Sign Language: Grammaticalization
and Lexicalization Processes. In: Emmorey, Karen (ed.), Perspectives on Classifiers in
Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 113⫺141.
Zwitserlood, Inge
1996 Who’ll HANDLE the OBJECT? An Investigation of the NGT-classifier. MA Thesis,
Utrecht University.
Zwitserlood, Inge
2003 Classifying Hand Configurations in Nederlandse Gebarentaal (Sign Language of the
Netherlands). PhD Dissertation, Utrecht University. Utrecht: LOT.
Zwitserlood, Inge
2008 Morphology Below the Level of the Sign ⫺ Frozen Forms and Classifier Predicates. In:
Quer, Josep (ed.), Signs of the Time: Selected Papers from TISLR 2004. Hamburg:
Signum, 251⫺272.

Inge Zwitserlood, Nijmegen (The Netherlands)

9. Tense, aspect, and modality


1. Introduction
2. Tense
3. Aspect
4. Modality
5. Conclusions
6. Literature

Abstract
Cross-linguistically, the grammatical categories tense, aspect, and modality ⫺ when they
are overtly expressed ⫺ are generally realized by free morphemes (such as adverbials
and auxiliaries) or by bound inflectional markers. The discussion in this chapter will
make clear that this generalization also holds true for sign languages. It will be shown
that tense is generally encoded by time adverbials and only occasionally (and only in a
few sign languages) by verbal inflection. In contrast, various aspect types are realized on
the lexical verb, in particular, by characteristic movement modulations. Only completive/
perfective aspect is commonly realized by free morphemes across sign languages. Finally,
deontic and epistemic modality is usually encoded by dedicated modal verbs. In relation
to all three grammatical categories, the possibility of (additional) non-manual marking
and the issue of grammaticalization will also be addressed.

1. Introduction

Generally, in natural languages, every sentence that is uttered must receive a temporal
and aspectual interpretation as well as an interpretation with respect to modality, for
instance, the possibility or necessity of occurrence of the event denoted by the main
verb. Frequently, these interpretational nuances are not overtly specified. In this case,
the required interpretation is either inferred from the context or the sentence receives
a default interpretation. When information concerning tense, aspect, and/or modality
(TAM) is overtly marked, this is usually done by means of verbal inflection or free
morphemes such as adverbials or auxiliaries. Languages also show considerable varia-
tion with respect to what categories they mark.
Sign languages are no exception in this respect. Interestingly, TAM-marking in a
certain sign language is usually quite different from the patterns attested in the respec-
tive surrounding spoken language. In addition, a certain amount of variation notwith-
standing, sign languages generally display strikingly similar patterns in the domain of
TAM-marking (e.g. lack of tense inflection, rich systems of aspectual inflection, etc.),
as will become clear in the following discussion.
In this chapter, we will address the three categories subsumed under the label
‘TAM’ in turn. We first turn to tense marking (section 2), where we discuss common
adverbial and less common inflectional strategies and also introduce the concept of
‘time lines’, which plays an important role in most sign languages studied to date.
Section 3 on aspect provides an overview of the most common free and bound aspec-
tual morphemes, their meaning, and phonological realization. Finally, in section 4, we
turn to the encoding of modality, focusing on selected deontic and epistemic modal
expressions. In all three sections, manual and non-manual strategies will be considered
and grammaticalization issues will be briefly discussed. Also, wherever appropriate, an
attempt is made to compare sign languages to each other.

2. Tense

It has long been noted (Friedman 1975; Cogen 1977) that sign language verbs, just like
verbs in many spoken languages, generally do not inflect for tense. Rather, tense is ex-
pressed by means of adverbials (section 2.1), which frequently make use of so-called
‘time lines’ (section 2.2). Still, it has been suggested for American Sign Language (ASL)
and Italian Sign Language (LIS) that at least some verbs may inflect for tense ⫺ be it by
means of manual or non-manual modulations; these proposals will be discussed in sec-
tion 2.3.

2.1. Adverbials and lexical tense markers

Across sign languages, the most common strategy for locating an event on a time line
with respect to the time of utterance is by means of adverbials. A sentence that con-
tains no time reference is either interpreted within the time-frame previously estab-
lished in the discourse or ⫺ by default ⫺ as present tense. Still, sign languages usually
have a lexical sign meaning ‘now’, which may be used emphatically or for contrast to
indicate present tense (Friedman 1975).
Across sign languages, time adverbials commonly appear sentence-initially, as in the
Spanish Sign Language (LSE) example in (1a) (Cabeza Pereiro/Fernández Soneira 2004,
69). They may either indicate a (more or less) specific point in time (e.g. past week (1a),
yesterday, in-two-days) or more broadly locate the event in the future or past, as, for
instance, the adverbial past in the German Sign Language (DGS) example in (1b).

(1) a. past week meeting start ten end quarter to three [LSE]
‘Last week the meeting started at ten and ended at a quarter to three.’
b. past peter index3 book write [DGS]
‘Peter wrote a book.’

According to Aarons et al. (1995, 238), in ASL time adverbials may occur in sentence-
initial (2a) or sentence-final position (2b), or between the subject and the (modal)
verb (2c).

(2) a. tomorrow j-o-h-n buy car [ASL]
‘John will buy a car tomorrow.’
b. j-o-h-n buy car tomorrow
‘John will buy a car tomorrow.’
c. j-o-h-n tomorrow can buy car
‘John can buy a car tomorrow.’

Note again that the lack of tense inflection is by no means a peculiarity of the visual
modality. Some East Asian languages (e.g. Chinese) display the same property and
thus also resort to the use of adverbials to set up a time-frame in discourse.
Aarons et al. (1995) further argue that besides time adverbials, ASL also makes use
of ‘lexical tense markers’ (LTMs). Superficially, at least some of these LTMs look very
similar to time adverbials, but Aarons et al. show that they can be distinguished from
adverbials on the basis of their syntactic distribution and articulatory properties. In the
following, we only consider the LTM future-tns (other LTMs include past-tns, for-
merly-tns, and #ex-tns; also see Neidle et al. (2000, 77) for an overview). As for the
syntactic distribution, lexical tense markers behave like modals; that is, they occur
between the subject and the verb, they precede sentential negation (3a), and they
cannot appear in infinitival complements. Crucially, a modal verb and a lexical tense
marker cannot co-occur (3b) ⫺ irrespective of order (Aarons et al. 1995, 241f.). The
authors therefore conclude that LTMs, just like modals, occupy the head of the Tense
Phrase, a position different from that of adverbials (for a modal interpretation of fu-
ture see section 4 below).

neg
(3) a. j-o-h-n future-tns not buy house [ASL]
‘John will not buy the house.’
b. * j-o-h-n future-tns can buy house
‘John will be able to buy a house.’

With respect to articulatory properties, Aarons et al. show that the path movement of
time adverbials such as future-adv can be modulated to express a greater or lesser
distance in time. In contrast, this variation in path length is excluded with LTMs, which
have a fixed articulation. Taken together, this shows that LTMs are more restricted
than time adverbials in both their articulatory properties and their syntactic distribu-
tion.

2.2. Time lines

Concerning the articulation of time adverbials (and LTMs), it has been observed that
almost all sign languages investigated to date make use of ‘time lines’ (see the previ-
ously mentioned references for ASL and LSE; see Brennan (1983) and Sutton-Spence/
Woll (1999) for British Sign Language (BSL), Schermer/Koolhof (1990) for Sign Lan-
guage of the Netherlands (NGT), Massone (1994) for Argentine Sign Language (LSA),
Schmaling (2000) for Hausa Sign Language, among many others). Time lines are based
on culture-specific orientational metaphors (Lakoff/Johnson 1980). In many cultures,
time is imagined as proceeding linearly and past and future events are conceptualized
as lying either behind or before us. This conceptual basis is linguistically encoded, as,
for instance, in the English metaphors ‘looking forward to something’ and ‘something
lies behind me’.
In sign languages, space can be used metaphorically in time expressions. Various
time lines have been described, but here we will focus on the line that runs “parallel
to the floor from behind the body, across the shoulder to ahead up to an arm’s length,
on the signer’s dominant side” (Sutton-Spence/Woll 1999, 183; see Figure 9.1). Thus, in
adverbials referring to the past (e.g. past, before, yesterday), path movement proceeds
backwards (towards or over the shoulder, depending on distance in time), while in
adverbials referring to the future (e.g. future, later, tomorrow), we observe forward
movement from the body into neutral signing space ⫺ again, length of the path move-
ment indicates distance in time. Present tense (as in now or today) is expressed by a
downward movement next or in front of the body. Other time lines that have been
described are located in front of the body, either horizontally (e.g. for duration in time)
or vertically (e.g. for growth) (see, for instance, Schermer/Koolhof (1990) and Massone
(1994) for illustrations).
Interestingly, in some cultures, the flow of time is conceptualized differently, namely
such that past events are located in the front (i.e. before our eyes) while future events
lie behind the body (because one cannot see what has not yet happened). As before,
this conceptual image is reflected in language (e.g. Malagasy; Dahl 1995), and it is
expected that it will also be reflected in the sign language used in such a culture.
An example of a sign language that does not make use of the time line illustrated
in Figure 9.1 is Kata Kolok, a sign language used in a village in Bali (see chapter 24,
Shared Sign Languages, for discussion).

Fig. 9.1: Time line, showing points of reference for past, present, and future

Still, signers do refer to spatial positions in
temporal expressions in a different way. For instance, given that the village lies close
to the equator, pointing approximately 90° upwards signifies noon while pointing 180°
to the west means six-o-clock(pm) (or more generally ‘late afternoon time’), based on
the approximate position of the sun at the respective time (Marsaja 2008, 166).

2.3. Tense marking on the verb

Sutton-Spence and Woll (1999, 116) note that in some dialects of BSL, certain verbs
differ depending on whether the event is in the past or present (e.g. win/won, see/saw,
go/went). These verb pairs, however, do not exemplify systematic inflection; rather,
the past tense forms should be treated as lexicalized exceptions.
Jacobowitz and Stokoe (1988) claim to have found systematic manual indications
of tense marking in more than two dozen ASL verbs. For verbs like come and go,
which involve path movement in their base form, they state that “extension (of the
hand) at the wrist, (of the forearm) at the elbow, or (of the upper arm) at the shoul-
der” ⫺ or a combination thereof ⫺ will denote future tense. Similarly, “flexion at the
wrist, elbow, or shoulder with no other change in the performance of an ASL verb”
will denote past tense (Jacobowitz/Stokoe 1988, 337). The authors stress that the time
line cannot be held responsible for these inflections as the direction of movement
remains unchanged. Rather, the changes result in a slight movement or displacement
on the vertical plane (extension of joints: upward; flexion of joints: downward). For
instance, in order to express the meaning ‘will go’, the signer’s upper arm is extended
at the shoulder. It is worth pointing out that the vertical scale has also been found to
play a role in spoken language metaphor, at least in referring to future events (e.g.
‘What is coming up next week?’ (Lakoff/Johnson 1980, 16)).
A systematic change that does involve the time line depicted in Figure 9.1 has been
described for LIS by Zucchi (2009). Zucchi observes that in LIS, temporal information
can be conveyed by means of certain non-manual (that is, suprasegmental) features that
co-occur with the verb. The relevant feature is shoulder position: if the shoulder is tilted
backward (‘sb’), then the action took place before the time of utterance (past tense (4a));
if the shoulder is tilted forward (‘sf’), then the action is assumed to take place after the
time of utterance (future tense (4b)). A neutral shoulder position (i.e. shoulder aligned
with the rest of the body) would indicate present tense (Zucchi 2009, 101).

sb
(4) a. gianni house buy [LIS]
‘Gianni bought a house.’
sf
b. gianni house buy
‘Gianni will buy a house.’
c. tomorrow gianni house buy
‘Tomorrow Gianni will buy a house.’

Zucchi concludes that LIS is unlike Chinese and more like Italian and English in that
grammatical tense is marked on verbs by means of shoulder position. He further shows
that non-manual tense inflection is absent in sentences containing past or future time
adverbs (4c), a pattern that is clearly different from the one attested in Italian and
English (Zucchi 2009, 103). In fact, the combination of a time adverb and non-manual
inflection leads to ungrammaticality.

3. Aspect

While tense marking appears to be absent in most sign languages, many of the sign
languages studied to date have rich systems of aspectual marking. Aspectual systems
are commonly assumed to consist of two components, namely situation aspect and
viewpoint aspect (Smith 1997). Situation aspect is concerned with intrinsic temporal
properties of a situation (e.g. duration, repetition over time) while viewpoint aspect
has to do with how a situation is presented (e.g. as closed or open). Another notion
often subsumed under the term aspect is Aktionsart or lexical aspect, which describes
the internal temporal structure of events. This category is discussed in detail in chap-
ter 20 on lexical semantics (see also Wilbur 2008, 2011).
Across sign languages, aspect is either marked by free functional elements (sec-
tion 3.1) or by modulations of the verb sign (section 3.2), most importantly, by charac-
teristic changes in the manner and frequency of movement, as first described in detail
by Klima and Bellugi (1979). It is important to note that Klima and Bellugi interpreted
the term ‘aspect’ fairly broadly and also included in their survey alterations that do
not have an impact on the temporal structure of the event denoted by the verb, namely
adverbial modifications such as manner (e.g. ‘slowly’) and degree (e.g. ‘intensively’)
and distributional quantification (e.g. exhaustive marking; see chapter 7, Agreement,
for discussion). We follow Rathmann (2005) in excluding these alterations from the
following discussion.

3.1. Free aspectual markers


For numerous (unrelated) sign languages, free grammatical markers have been de-
scribed that convey completive and/or perfective aspect (i.e. viewpoint aspect). Com-
monly, these aspectual markers are grammaticalized from verbs (mostly finish) or
adverbs (e.g. already) ⫺ a developmental path that is also frequently attested in spo-
ken languages (Heine/Kuteva 2002; see also chapter 34 for grammaticalization in sign
languages). In LIS, for instance, the lexical verb done (meaning ‘finish’, (5a)) can also
convey aspectual meanings, such as perfective aspect in (5b) (Zucchi 2009, 123 f.). Note
that the syntactic position differs: when used as a main verb, done appears in preverbal
position, while in its use as an aspectual marker, it follows the main verb (a similar
observation has been made for the ASL element finish by Fischer/Gough (1999
[1972]); for ASL, also see Janzen (1995); for a comparison of ASL and LIS, see Zucchi
et al. (2010)).

(5) a. gianni cake done eat [LIS]
‘Gianni has finished eating the cake.’
b. gianni house buy done
‘Gianni has bought a house.’

Meir (1999) provides a detailed analysis of the Israeli Sign Language (Israeli SL) per-
fect marker already. First, she shows that, despite the fact that this marker frequently
occurs in past tense contexts, it is not a past tense marker, but rather an aspectual
marker denoting perfect constructions; as such, it can, for instance, also co-occur with
time adverbials denoting future tense. Following Comrie (1976), she argues that “con-
structions with already convey the viewpoint of ‘a present state [which] is referred to
as being the result of some past situation’ (Comrie 1976, 56)” (Meir 1999, 50). Among
the manifestations of that use of already are the ‘experiential’ perfect (6a) and the
perfect denoting a terminated (but not necessarily completed) situation (6b) (adapted
from Meir 1999, 50 f.).

(6) a. index2 already eat chinese? [Israeli SL]
‘Have you (ever) eaten Chinese food?’
b. index1 already write letter sister poss1
‘I have written a letter to my sister.’

Meir (1999) also compares already to its ASL counterpart finish and shows that the
functions and uses of already are more restricted. She hypothesizes that this might
result from the fact that Israeli SL is a much younger language than ASL and that
therefore, already has not yet grammaticalized to the same extent as finish. Alterna-
tively, the differences might be due to the fact that the two functional elements have
different lexical sources: a verb in ASL, but an adverb in Israeli SL.
For Greek Sign Language (GSL), Sapountzaki (2005) describes three different signs
in the set of perfective markers: been, for ‘done, accomplished, experienced’ (7a); its
negative counterpart not-been, for ‘not done, accomplished, experienced’ (7b); and
not-yet for ‘not yet done, accomplished, experienced’ (also see chapter 15, Negation,
for discussion of negative aspectual markers).

(7) a. yesterday ctella month ago letter send been [GSL]


‘Yesterday she told me that she had sent the letter a month ago.’
b. granddad lesson not-been
‘Grandpa had not gone to school.’
The use of similar completive/perfective markers has also been described for BSL
(Brennan 1983), DGS (Rathmann 2005), Swedish Sign Language (SSL, Bergman/Dahl
1994), and Turkish Sign Language (TİD, Zeshan 2003). Zeshan (2003, 49) further
points out that Indopakistani Sign Language (IPSL) has a free completive aspect
marker “that is different from and independent of two signs for finish and is used as
an aspect marker only”.
For NGT, Hoiting and Slobin (2001) describe a free marker of continuous/habitual
aspect, which they gloss as through. This marker is used when the lexical verb cannot
inflect for aspect by means of reduplication (see section 3.2) due to one of the following
phonological constraints: (i) it has internal movement or (ii) it includes body contact.
The sign try, in which the R-hand makes contact with the nose, exemplifies constraint
(ii); see example (8) (adapted from Hoiting/Slobin 2001, 129). Note that the elliptical
reduplication characteristic of continuous/habitual inflection is still present; however,
it accompanies through rather than the main verb. Hoiting and Slobin argue that use
of through is an example of borrowing from spoken Dutch, where the corresponding
element door (‘through’) can be used with some verbs to express the same aspectual
meanings.

(8) index3 try through++ [NGT]


‘He tried continuously / He tried and tried and tried.’
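
The division of labour just described is rule-governed and lends itself to a schematic
statement. The following sketch is purely expository (the feature labels and the three-verb
lexicon are invented for illustration, not drawn from NGT lexicography): a verb reduplicates
directly unless internal movement or body contact blocks reduplication, in which case
through is added and itself carries the reduplication, as in (8).

    # Expository sketch of the NGT continuous/habitual marking rule
    # described by Hoiting and Slobin (2001). The feature labels and
    # the three-verb lexicon are invented for illustration.

    LEXICON = {
        "wait": set(),                    # no blocking feature
        "try":  {"body-contact"},         # R-hand contacts the nose
        "swim": {"internal-movement"},
    }

    BLOCKING = {"internal-movement", "body-contact"}

    def continuous(verb):
        """Gloss the continuous/habitual form; '++' marks reduplication."""
        if LEXICON[verb] & BLOCKING:
            # The verb cannot reduplicate, so the free marker is added
            # and carries the reduplication itself, as in (8).
            return verb + " through++"
        return verb + "++"

    print(continuous("wait"))   # wait++
    print(continuous("try"))    # try through++

On this view, through serves as a host of last resort for an inflection that the verb itself
cannot phonologically bear.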

3.2. Aspectual inflection on verbs

Building on earlier work on ASL verbal reduplication by Fischer (1973), Klima and
Bellugi (1979) provide a list of aspectual distinctions that can be marked on ASL verbs,
which includes no less than 15 different aspect types. They point out that the attested
modulations are characterized by “dynamic qualities and manners of movement” such
as reduplication, rate of signing, tension, and pauses between cycles of reduplication,
and they also provide evidence for the morphemic status of these modulations. Given
considerable overlap in meaning and form of some of the proposed aspect types, later
studies attempted to re-group the proposed modulations and to reduce their number
(e.g. Anderson 1982; Wilbur 1987). More recently, Rathmann (2005) suggested that in
ASL six aspectual morphemes have to be distinguished: the free aspectual marker
finish (discussed in the previous section) as well as the bound inflectional morphemes
continuative, iterative, habitual, hold, and conative. Only the first three of these mor-
phemes ⫺ all of which belong to the class of situation aspect ⫺ will be discussed in
some detail below.
Before turning to the discussion of aspectual morphemes, however, we wish to point
out that not all scholars are in agreement about the inflectional nature of these mor-
phemes. Based on a discussion of aspectual reduplication in SSL, Bergman and Dahl
(1994), for instance, argue that the morphological process involved is ideophonic rather
than inflectional. According to Bergman and Dahl (1994, 412 f.), “ideophones are usu-
ally a class of words with peculiar phonological, grammatical, and semantic properties.
Many ideophones are onomatopoetic [...]. A typical ideophone can be seen as a global
characterization of a situation”. In particular, they compare the system of aspectual
reduplication in SSL to a system of ideophones (‘expressives’) found in Kammu, a
language spoken in Laos. These ideophones are characterized by “their iconic and
connotative rather than symbolic and denotative meaning” (Svantesson 1983; cited in
Bergman/Dahl 1994, 411). We cannot go into detail here concerning the parallels be-
tween Kammu ideophones and SSL aspectual reduplication, but it is important to note
that both systems involve a certain degree of iconicity and that Bergman and Dahl
(1994, 418) conclude “that the gestural-visual character of signed languages favors the
development of iconic or quasi-iconic processes like reduplication” to express ideo-
phonic meanings similar to those of Kammu expressives.

3.2.1. Continuative

The label ‘continuative’, as used by Rathmann (2005), also includes the aspectual mod-
ulations ‘durative’ and ‘protractive’ suggested by Klima and Bellugi. According to
Rathmann (2005, 36), the semantic contribution of the continuative morpheme is that
“the temporal interval over which the eventuality unfolds is longer than usual and
uninterrupted”. For instance, combination of the morpheme with the verb study yields
the meaning ‘to study for a long time’. There are strong similarities across sign lan-
guages in how continuative is marked. Most frequently, ‘slow reduplication’ is men-
tioned as an integral component of this aspect type. In more detailed descriptions, the
modulation is described as involving slow arcing movements. According to Aronoff,
Meir, and Sandler (2005, 311), for instance, ASL durative aspect is marked by “super-
imposing an arc-shaped morpheme on the movement of the LML sign, and then redu-
plicating, to create a circular movement” (LML = location-movement-location). Sut-
ton-Spence and Woll (1999) note that in BSL verbs that do not have path movement,
such as look and hold, continuative is marked by an extended hold. Hoiting and
Slobin (2001, 127) describe continuative aspect in NGT as involving “three repetitions
of an elliptical modulation accompanied by pursed lips and a slight blowing gesture”
(see section 3.3 for further discussion of non-manual components).

3.2.2. Iterative

Rathmann (2005) subsumes three of the aspect types distinguished by Klima and Bel-
lugi (1979) under the label ‘iterative’: the ‘incessant’, ‘frequentative’, and ‘iterative’
(note that Wilbur (1987) groups the incessant, which implies the rapid recurrence of a
characteristic, together with the habitual). The meaning contributed by the iterative
morpheme can be paraphrased as ‘over and over again’ or ‘repeatedly’, that is, multiple
instances of an eventuality. Phonologically, the morpheme is realized by reduplication
of the movement of the verb root. Several sign languages have forms that look similar
to the iterative morpheme in ASL. Bergman and Dahl (1994), for instance, describe
fast reduplication in SSL, with repeated short movements. Sutton-Spence and Woll
(1999) find similar patterns in BSL. Similarly, Zeshan (2000) for IPSL and Senghas
(1995) for Nicaraguan Sign Language (ISN) describe repeated movements executed in
the same location as being characteristic of iterative aspect.

3.2.3. Habitual

The ‘habitual’ is similar to the ‘iterative’ in that it also describes the repetition of
an eventuality. The habitual, however, expresses the notion of a pattern of events or
behaviours rather than the quality of a specific event. Thus, the semantic contribution
of the habitual morpheme can be paraphrased as ‘regularly’ or ‘usually’. Also, in con-
trast to the iterative morpheme, the habitual morpheme does not imply that there is
an end to the repetition of the eventualities. Just like the iterative, the habitual is
phonologically realized by reduplication. Klima and Bellugi (1979) and Rathmann
(2005), however, point out that the habitual morpheme involves smaller and faster
movement than the iterative morpheme. Again, similar marking has been attested in
other sign languages. Cabeza Pereiro and Fernández Soneira (2004, 76), for instance,
also mention that LSE uses repetition of movement to indicate habitualness. Interest-
ingly, Hoiting and Slobin (2001, 127) describe a somewhat different pattern for NGT;
they observe that in this sign language, the habitual is characterized by “slower ellipti-
cal modulation accompanied by gaze aversion, lax lips with protruding tongue, and
slowly circling head movement”.
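
For ease of comparison, the formal side of the three inflections can be summarized
schematically. The labels in the following sketch merely condense the prose descriptions
above (for ASL, following Klima and Bellugi (1979) and Rathmann (2005)) and have no
theoretical status:

    # Informal summary of three ASL aspectual modulations; the value
    # labels condense the prose descriptions above and are not a
    # formal phonological analysis.

    ASPECTS = {
        "continuative": ("slow reduplication, arcing/circular path",
                         "longer than usual and uninterrupted"),
        "iterative":    ("reduplication of the verb root's movement",
                         "over and over again (a specific event)"),
        "habitual":     ("smaller and faster reduplicated movement",
                         "regularly/usually (a pattern, no assumed endpoint)"),
    }

    for name, (form, meaning) in ASPECTS.items():
        print(f"{name:12} {form:48} -> {meaning}")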

3.2.4. Other aspectual morphemes

The aspect types introduced in the previous sections are the ones most commonly
discussed in the sign language literature. We want to briefly mention some further
aspect types that have been suggested, without going into details of their phonological
realization (see Rathmann (2005) for details). First, there is the ‘unrealized inceptive’
(Liddell 1984), the meaning of which can be paraphrased as ‘was about to … but’.
Second, Brentari (1998) describes the ‘delayed completive’, which adds the meaning
of ‘at last’ to the verb. Thirdly, Jones (1978) identifies an aspectual morpheme which
he labels ‘unaccomplished’ and which expresses that an event is unfinished in the present
(‘to attempt to’, ‘to be in the process of’). Despite semantic differences, Rathmann
(2005) suggests subsuming these three aspect types under a single label ‘conative’, an
attempt that has been criticized by other scholars. He argues that what these aspect
types have in common is that “there is an attempt for the eventuality to be carried
out” (Rathmann 2005, 47). Rathmann further describes a ‘hold’ morpheme, which adds
a final endpoint to an event, thereby signalling that the event is interrupted or termi-
nated (without necessarily being completed).
Zeshan (2003) claims that TİD, besides two free completive aspect markers compa-
rable to the ones discussed in section 3.1, has a simultaneous morpheme for completive
aspect which may combine with some verbs ⫺ a strategy which appears to be
unique cross-linguistically. The phonological reflex of this morpheme consists of “a
single accentuated movement, which may have a longer movement path than its non-
completive counterpart and may be accompanied by a single pronounced head nod or,
alternatively, a forward movement of the whole torso” (Zeshan 2003, 51). She provides
examples involving the verbs see, do, and go and points out that, for phonological
reasons, the morpheme cannot combine with verbs that consist of a hold only (e.g.
think).

3.3. Non-manual aspect marking

Above, we mentioned in passing that certain aspect types may be accompanied by
non-manual markers. The continuative, for instance, commonly involves puffed cheeks
and/or pursed lips and blowing of air while performing the characteristic reduplication
(Hoiting/Slobin 2001, 127). For SSL, Bergman (1983) observes that, at least with some
verbs, durative aspect (e.g. ‘think for a long time’) can be realized with the hand held
still while the head performs a cyclical arc movement ⫺ that is, in a sense, the head
movement replaces the hand movement.
Grose (2003) argues that in ASL, a head nod commonly occurs in sentences with a
perfective interpretation, independent of their temporal specification. The head nod
may accompany an aspectual marker like finish, but it may also be the sole marker of
perfectivity, co-occurring with a lexical sign or appearing in clause-final position. In
example (9), the past reading comes from the sign past, while the perfective reading
comes from the head nod (‘hn’) (Grose 2003, 54).

hn
(9) index1 past walk school [ASL]
‘I have walked to school / I used to walk to school.’

4. Modality

In spoken languages, modal expressions are typically verbal auxiliaries. From a seman-
tic point of view, modals convey deontic or epistemic modality. Deontic modality has
to do with the necessity or possibility of a state of affairs according to a norm, a law,
a moral principle, or an ideal. The related meanings are obligation, permission, or
ability. Conversely, epistemic modality is related to the signer’s knowledge about the
world (Palmer 1986). What is possible or necessary in a world according to a signer’s
knowledge depends on his or her epistemic state. In many languages (sign languages
included), modal expressions are often ambiguous between epistemic and deontic read-
ings. This is illustrated by the English example in (10).

(10) Mary must be at home.


a. Given what I know, it is necessary that she is at home now (epistemic).
b. Given some norms, it is necessary that she is at home now (deontic).
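
The ambiguity in (10) can be made explicit in a standard possible-worlds semantics:
must universally quantifies over a set of worlds (a modal base), and the two readings
differ only in which set is chosen. The following sketch is a textbook-style schematization
of this idea (the worlds and facts are invented); it is not an analysis proposed in the sign
language literature:

    # Toy possible-worlds rendering of the ambiguity in (10): 'must' is
    # universal quantification over a modal base, and the epistemic and
    # deontic readings differ only in the base. Worlds are invented.

    worlds = {
        "w1": {"mary_at_home": True},
        "w2": {"mary_at_home": True},
        "w3": {"mary_at_home": False},
    }

    epistemic_base = ["w1", "w2"]   # worlds compatible with what I know
    deontic_base   = ["w1", "w3"]   # worlds in which the norms are obeyed

    def must(prop, base):
        return all(prop(worlds[w]) for w in base)

    at_home = lambda w: w["mary_at_home"]
    print(must(at_home, epistemic_base))  # True: reading (10a) holds
    print(must(at_home, deontic_base))    # False: reading (10b) fails

Since the two bases may diverge, the same sentence can be true on one reading and false
on the other; this is what makes the ambiguity empirically detectable.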

The grammatical category of modality as well as modal expressions have been de-
scribed for different sign languages: for ASL, see Wilcox and Wilcox (1995), Janzen
and Shaffer (2002) and Wilcox and Shaffer (2006); for GSL, see Sapountzaki (2005);
for LSA, see Massone (1994), and for Brazilian Sign Language (LSB), see Ferreira
Brito (1990). Note that some modal verbs have dedicated negative forms due to clitici-
zation or suppletion (for negative modals, see Shaffer (2002) and Pfau/Quer (2007);
see also chapter 15 on negation).

4.1. Deontic modality

In their study on the expression of modality in ASL, Wilcox and Shaffer (2006) distin-
guish between ‘participant-internal’ and ‘participant-external’ necessity and possibility.
Just as in English, necessity and possibility are mainly expressed by modal verbs/
auxiliaries. In addition, the manual verb signs are typically accompanied by specific
non-manual markers such as furrowed eyebrows, pursed lips, or head nod, which indicate
the degree of modality. In ASL, the deontic modality of necessity is
expressed by the modal must/should as is illustrated in the examples in (11), which
are adapted from Wilcox and Shaffer (2006, 215; ‘bf’ = brow furrowing). The sign is
performed with a crooked index finger (cf. also Wilcox/Wilcox 1995).

top
(11) a. before class must lineup(2h) [ASL]
‘Before class we had to line up.’
b. (leaning back) should cooperate, work together, interact forget
bf
(gesture) past push-away new life from-now-on should
‘They (the deaf community) should cooperate and work together, they
should forget about the past and start anew.’

The examples in (11) describe an external deontic necessity where the obligation is
imposed by some external source, that is, either an authority or general circumstances.
An example for a participant-internal use of the modal must/should is given in (12),
again adapted from Wilcox and Shaffer (2006, 217).

top
(12) know south country (waits for attention) know south country [ASL]
top
spanish food strong chile must index1 (leans back)
‘You know how it is in the southern part. You know how it is with Spanish
food. In the southern part, there’s a lot of hot chile. I have to have chile.’

The DGS deontic modal must looks similar to the corresponding ASL modal. As
opposed to ASL must, the DGS sign is signed with an extended index finger and palm
orientation towards the contra-lateral side of the signing space.
For the expression of the deontic meaning of possibility, the modal verb can is used
in ASL. As in English, can is not only used to express physical or mental ability, but
also to indicate permission or the possibility of an event occurring. Again, the condition
for the situation described by the sentence can be participant-internal, as in (13a), or
participant-external, as in (13b). The first use of can can be paraphrased as ‘the signer
has the (physical) ability to do something’, while the second one involves permission,
that is, ‘the teacher is allowed to do something’ (Wilcox/Shaffer 2006, 221 f).

(13) a. index1 can lift-weight 100 pounds [ASL]


‘I can lift one hundred pounds.’
b. poss1 mother time teach, teach can sign but always fingerspell+++


‘In my mother’s time the teachers were allowed to sign, but they always
fingerspelled.’

On the basis of historical sources, Wilcox and Wilcox (1995) argue that the ASL modals
can and must have developed from gestural sources via lexical elements. can has origi-
nated from a lexical sign meaning strong/power, which in turn can be traced back to
a gesture ‘strong’ in which the two fists perform a short tense downward movement in
front of the body. Interestingly, the modal can has undergone some phonological
changes. In particular, the orientation of the hands has changed. Likewise, Wilcox and
Wilcox assume that the modals must/should have developed from a gestural source,
which is a deictic pointing gesture indicating monetary debt. This gesture entered the
lexicon of Old French Sign Language and ⫺ due to the influence of (Old) French
Sign Language on ASL ⫺ the lexicon of ASL. In both sign languages, the lexical sign
grammaticalized into a deontic modal expressing strong (i.e. must in (11a) above) or
weak (should in (11b) above) obligation. Again, the modals have undergone some
phonological changes. Both modals are phonologically reduced in that the base hand
present in the source sign owe is lost. But they differ from each other with respect to
movement: must has one downward movement while the movement of should is
shorter and reduplicated (cf. also Janzen/Shaffer 2002; Wilcox/Shaffer 2006; for similar
LSC examples, see Wilcox 2004; for grammaticalization in sign languages, see Pfau/
Steinbach (2006) and chapter 34, Lexicalization and Grammaticalization).
The system of modal expressions in DGS is very similar to that of ASL. One differ-
ence is that we are not aware of a lexical sign that must could have been derived from.
It is, however, clearly related to a co-speech gesture that commonly accompanies or-
ders and commands. We therefore assume that the DGS modal, unlike the correspond-
ing ASL modal, is directly derived from a gestural source. In comparison to ASL and
DGS, LSB appears to have a greater number of different modal expressions at its
disposal. Moreover, these modal expressions belong to different parts of speech. Fer-
reira Brito (1990) analyzes the LSB modals need, can, prohibit, have-not, and let as
verbs, obligatory, prohibited, optional1, and optional2 as adjectives, and obligation
as a noun.
Note finally that the development of modal verbs expressing physical/mental ability
and possibility from a lexical element is attested in spoken languages, too. Latin potere
(‘to be able’), for instance, is related to the adjective potens (‘strong, powerful’). Also,
modal verbs that express obligation may be grammaticalized from lexical items that
refer explicitly to concepts related to obligation, such as ‘owe’ (cf. Bybee/Perkins/Pagli-
uca 1994).

4.2. Epistemic modality

In the previous section, we saw that the deontic interpretation of modal verbs concerns
the necessity or possibility for a participant to do something. By contrast, the
more grammaticalized epistemic interpretation of modal verbs indicates the signer’s
degree of certainty about or the degree of commitment to the truth of an utterance
(Palmer 1986). In LSB, for instance, a high degree of certainty is expressed by the
sentence-final modal construction have certainty, as illustrated in (14a) taken from
Ferreira Brito (1990, 236). In ASL, as in DGS, epistemic modality is realized not
only by modal verbs alone, but by a combination of modals and additional manual and
non-manual markers. According to Wilcox and Shaffer (2006, 226 f.), the epistemic
modal verb should, which expresses a high degree of certainty, occupies the sentence-
final position in ASL. In addition, the non-manual markers ‘head nod’ (‘hn’) and ‘bf’
accompany the modal verb (14b). Likewise, the modal verb possible appears sentence-
finally and it is also accompanied by a head nod (14c).

(14) a. today rain … have certainty [LSB]


‘I am certain that today it will rain.’
top bf/hn
b. library have deaf life should [ASL]
‘The library should have Deaf Life/I’m sure the library has Deaf Life.’
top bf/hs
c. same sign because bad translation false c-o-g-n-a-t-e doubt
hn
(pause) (gesture “well”) possible
‘I doubt the two concepts share the same sign (now) because of a problem
with translation, or because of a false cognate, but, well, I suppose it’s pos-
sible.’

Besides non-manual markers, manual markers such as sharp and short movements vs.
soft and reduplicated movements may also have an impact on the interpretation of the
modal verb. Whereas sharp and short movements trigger a stronger commitment, soft
and reduplicated movements indicate a weaker commitment (Wilcox/Shaffer 2006).
In addition to the modal verbs in (14), ASL also uses semi-grammaticalized expres-
sions such as feel, obvious, and seem to express epistemic modality (Wilcox/Wilcox
1995). Again, in their epistemic interpretation, feel, obvious, and seem are often ac-
companied by a head nod and furrowed eyebrows. Interestingly, the sign future can
not only be used as a lexical tense marker future-tns (as discussed in section 2.1) but also
as a lexical marker of epistemic modality; cf. example (15a), which is adapted from
Wilcox and Shaffer (2006, 228). A similar observation has been made by Massone
(1994, 128) for the sentence-final LSA temporal marker in-the-future (15b). Let us
discuss example (15a) in some more detail: The first occurrence of future, which is
articulated with raised eyebrows, a manual wiggle marker, and longer softer movement,
receives the temporal interpretation future-tns. The second occurrence is performed
with short and sharp movements and accompanied by the two non-manual markers
head nod and furrowed eyebrows, which are typical for the epistemic interpretation.

(15) a. rt 29 think-like ix3 r-o-c-k-v-i-l-l-e p-i-k-e ix3 build+ ix3 [ASL]


top bf/hn
future(wiggle) develop future s-o why must 1move3 near
columbia mall?
‘(I live off) route 29, the Rockville Pike area. In the future I’m sure they will
develop that area. So why do I have to move all the way up near Colum-
bia Mall?’
b. maria ix 3aabandon3b in-the-future [LSA]


‘Maria will abandon him.’

In many sign languages, the degree of modality seems to be marked mainly by non-
manual means. Wilcox and Shaffer (2006, 229) argue that “it is appropriate to discuss
the semantics of modal strength as a matter of degree intensification ⫺ that is, as
variation along a scale of intensification of necessity, possibility, and speaker’s epis-
temic commitment”. Since manual and non-manual modification is a frequent means
to express intensification in many sign languages, the use of both markers in modal
intensification comes as no surprise. An alternative strategy would be to use lexical
expressions. Ferreira Brito’s (1990) discussion of modal expressions in LSB shows that
LSB chooses the second strategy in that it uses a variety of lexical modal expressions
to realize modal intensification.
Note, finally, that speaker- and addressee-oriented (epistemic) meaning nuances
such as reference to common knowledge, reference to evident knowledge, or uncer-
tainty, are also extensively discussed in Herrmann (2010). In many spoken languages,
such meanings are, for example, triggered by modal particles or equivalent expressions.
A main result of Herrmann’s typological study, which compares three sign languages
(DGS, NGT, and Irish Sign Language), is that all three sign languages investigated use
mainly non-manual means to express such nuances of meaning.

5. Conclusion

Sign languages employ free and bound grammatical markers to express the grammati-
cal categories of tense, aspect, and modality. While across sign languages, free mor-
phemes ⫺ time adverbials or lexical tense markers ⫺ are the most common strategy
for encoding tense, various aspect types can be realized by verbal inflections, many of
which involve characteristic movement alterations in combination with reduplication.
The encoding of completive/perfective aspect is exceptional in this respect, as these
aspect types are usually realized by free grammatical morphemes. The same is true for
modality distinctions, which are generally expressed by modal verbs. The discussion
has made clear that, when it comes to TAM-marking, sign languages are strikingly
similar to each other ⫺ a pattern that is also familiar from the study of other inflec-
tional categories such as pluralization, agreement, and classification (see chapters 6, 7,
and 8); also see Meier (2002).
The attested free TAM-markers are also highly interesting from a diachronic per-
spective because they involve grammaticalization pathways that are well-known from
the study of TAM-systems in spoken languages (Bybee/Perkins/Pagliuca 1994): future
tense markers may develop from movement verbs, completive and perfective aspect
markers are commonly grammaticalized from adverbials and verbs, and modals de-
velop from adjectives and verbs. The latter are particularly interesting in this context
because the lexical source of a modal can sometimes be traced back to a gestural
source.
While aspects of the TAM-systems of at least some sign languages are fairly well
understood, further research is required to identify (obligatory and optional) non-man-
ual markers, to distinguish truly inflectional non-manuals from non-manual adverbials,
and to investigate possible gestural sources for the non-manuals involved in TAM-
marking.

6. Literature
Aarons, Debra/Bahan, Ben/Kegl, Judy/Neidle, Carol
1995 Lexical Tense Markers in American Sign Language. In: Emmorey, Karen/Reilly, Judy
(eds.), Language, Gesture, and Space. Hillsdale, NJ: Erlbaum, 225⫺253.
Anderson, Lloyd B.
1982 Universals of Aspect and Parts of Speech: Parallels Between Signed and Spoken Lan-
guages. In: Hopper, Paul J. (ed.), Tense ⫺ Aspect: Between Semantics and Pragmatics.
Amsterdam: Benjamins, 91⫺114.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81(2), 301⫺344.
Bergman, Brita
1983 Verbs and Adjectives: Morphological Processes in Swedish Sign Language. In: Kyle,
Jim/Woll, Bencie (eds.), Language in Sign: An International Perspective on Sign Lan-
guage. London: Croom Helm, 3⫺9.
Bergman, Brita/Dahl, Östen
1994 Ideophones in Sign Language? The Place of Reduplication in the Tense-aspect System
of Swedish Sign Language. In: Bache, C./Basbøll, H./Lindberg, C.-E. (eds.), Tense, As-
pect and Action: Empirical and Theoretical Contributions to Language Typology. Berlin:
Mouton de Gruyter, 397⫺422.
Brennan, Mary
1983 Marking Time in British Sign Language. In: Kyle, Jim/Woll, Bencie (eds.), Language in
Sign: An International Perspective on Sign Language. London: Croom Helm, 10⫺31.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Bybee, Joan L./Perkins, Revere D./Pagliuca, William
1994 The Evolution of Grammar: Tense, Aspect, and Modality in the Languages of the World.
Chicago: Chicago University Press.
Cabeza Pereiro, Carmen/Fernández Soneira, Ana
2004 The Expression of Time in Spanish Sign Language (SLE). In: Sign Language & Linguis-
tics 7(1), 63⫺82.
Cogen, Cathy
1977 On Three Aspects of Time Expression in American Sign Language. In: Friedman, Lynn
A. (ed.), On the Other Hand: New Perspectives on American Sign Language. New York:
Academic Press, 197⫺214.
Comrie, Bernard
1976 Aspect. Cambridge: Cambridge University Press.
Dahl, Øyvind
1995 When the Future Comes from Behind: Malagasy and Other Time Concepts and Some
Consequences for Communication. In: International Journal of Intercultural Relations
19(2), 197⫺209.
Ferreira Brito, Lucinda
1990 Epistemic, Alethic, and Deontic Modalities in a Brazilian Sign Language. In: Fischer,
Susan D./Siple, Patricia (eds.), Theoretical Issues in Sign Language Research. Vol. 1:
Linguistics. Chicago: University of Chicago Press, 229⫺260.
Fischer, Susan D.
1973 Two Processes of Reduplication in the American Sign Language. In: Foundations of
Language 9, 469⫺480.
Fischer, Susan/Gough, Bonnie
1999 [1972] Some Unfinished Thoughts on finish. In: Sign Language & Linguistics 2(1),
67⫺77.
Friedman, Lynn A.
1975 Space, Time, and Person Reference in American Sign Language. In: Language 51(4),
940⫺961.
Grose, Donovan R.
2003 The Perfect Tenses in American Sign Language: Nonmanually Marked Compound
Tenses. MA Thesis, Purdue University, West Lafayette.
Heine, Bernd/Kuteva, Tania
2002 World Lexicon of Grammaticalization. Cambridge: Cambridge University Press.
Herrmann, Annika
2009 Modal Particles and Focus Particles in Sign Languages. A Cross-linguistic Study of DGS,
NGT, and ISL. PhD Dissertation, University of Frankfurt/Main (to be published in the
Series Sign Language and Deaf Communities, Mouton de Gruyter).
Hoiting, Nini/Slobin, Dan I.
2001 Typological and Modality Constraints on Borrowing: Examples from the Sign Language
of the Netherlands. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages. A
Cross-linguistic Investigation of Word Formation. Mahwah, NJ: Erlbaum, 121⫺137.
Jacobowitz, Lynn/Stokoe, William C.
1988 Signs of Tense in ASL Verbs. In: Sign Language Studies 60, 331⫺339.
Janzen, Terry
1995 The Poligrammaticalization of finish in ASL. MA Thesis, University of Manitoba,
Winnipeg.
Janzen, Terry/Shaffer, Barbara
2002 Gesture as the Substrate in the Process of ASL Grammaticization. In: Meier, Richard/
Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spo-
ken Languages. Cambridge: Cambridge University Press, 199⫺223.
Jones, Philip
1978 On the Interface of ASL Phonology and Morphology. In: Communication and Cogni-
tion 11, 69⫺78.
Klima, Edward/Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Lakoff, George/Johnson, Mark
1980 Metaphors We Live by. Chicago: University of Chicago Press.
Liddell, Scott K.
1984 Unrealized Inceptive Aspect in American Sign Language: Feature Insertion in Syllabic
Frames. In: Drogo, Joseph/Mishra, Veena/Testen, David (eds.), Papers from the 20th
Regional Meeting of the Chicago Linguistic Society. Chicago: University of Chicago
Press, 257⫺270.
Marsaja, I Gede
2008 Desa Kolok ⫺ A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen:
Ishara Press.
Massone, Maria Ignacia
1994 Some Distinctions of Tense and Modality in Argentine Sign Language. In: Ahlgren,
Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure.
Papers from the Fifth International Symposium on Sign Language Research. Durham:
ISLA, 121⫺130.
Meier, Richard P.
2002 Why Different, Why the Same? Explaining Effects and Non-effects of Modality Upon
Linguistic Structure in Sign and Speech. In: Meier, Richard P./Cormier, Kearsy A./
Quinto-Pozos, David G. (eds.), Modality and Structure in Signed and Spoken Languages.
Cambridge: Cambridge University Press, 1⫺25.
Meir, Irit
1999 A Perfect Marker in Israeli Sign Language. In: Sign Language & Linguistics 2(1),
43⫺62.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert G.
2000 The Syntax of American Sign Language. Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Palmer, Frank R.
1986 Mood and Modality. Cambridge: Cambridge University Press.
Pfau, Roland/Quer, Josep
2007 On the Syntax of Negation and Modals in German Sign Language (DGS) and Catalan
Sign Language (LSC). In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visi-
ble Variation. Comparative Studies on Sign Language Structure. Berlin: Mouton de
Gruyter, 129⫺161.
Pfau, Roland/Steinbach, Markus
2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 5⫺94.
Rathmann, Christian
2005 Event Structure in American Sign Language. PhD Dissertation, University of Texas
at Austin.
Sapountzaki, Galini
2005 Free Functional Elements of Tense, Aspect, Modality and Agreement as Possible Auxilia-
ries in Greek Sign Language. PhD Dissertation, Centre of Deaf Studies, University
of Bristol.
Schermer, Trude/Koolhof, Corline
1990 The Reality of Time-lines: Aspects of Tense in Sign Language of the Netherlands
(SLN). In: Prillwitz, Siegmund/Vollhaber, Tomas (eds.), Proceedings of the Fourth Inter-
national Symposium on Sign Language Research. Hamburg: Signum, 295⫺305.
Schmaling, Constanze
2000 Maganar Hannu: Language of the Hands. A Descriptive Analysis of Hausa Sign Lan-
guage. Hamburg: Signum.
Senghas, Ann
1995 Children’s Contribution to the Birth of Nicaraguan Sign Language. PhD Dissertation,
MIT, Cambridge, MA.
Shaffer, Barbara
2002 can’t: The Negation of Modal Notions in ASL. In: Sign Language Studies 3(1), 34⫺53.
Smith, Carlota
1997 The Parameter of Aspect (2nd Edition). Dordrecht: Kluwer.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language. An Introduction. Cambridge: Cambridge Uni-
versity Press.
Wilbur, Ronnie B.
1987 American Sign Language: Linguistic and Applied Dimensions. Boston: College-Hill.
Wilbur, Ronnie B.
2008 Complex Predicates Involving Events, Time and Aspect: Is This Why Sign Languages
Look so Similar? In: Quer, Josep (ed.), Signs of the Time: Selected Papers from TISLR
2004. Hamburg: Signum, 217⫺250.
Wilbur, Ronnie
2010 The Semantics-Phonology Interface. In: Brentari, Diane (ed.), Sign Languages. Cam-
bridge Language Surveys. Cambridge: Cambridge University Press, 357⫺382.
Wilcox, Sherman/Shaffer, Barbara
2006 Modality in American Sign Language. In: Frawley, William (ed.), The Expression of
Modality. Berlin: Mouton de Gruyter, 207⫺237.
Wilcox, Sherman/Wilcox, Phyllis
1995 The Gestural Expression of Modality in ASL. In: Bybee, Joan/Fleischman, Suzanne
(eds.), Modality in Grammar and Discourse. Amsterdam: Benjamins, 135⫺162.
Zeshan, Ulrike
2000 Sign Language in Indo-Pakistan. A Description of a Signed Language. Amsterdam: Ben-
jamins.
Zeshan, Ulrike
2003 Aspects of Türk İşaret Dili (Turkish Sign Language). In: Sign Language & Linguistics
6(1), 43⫺75.
Zucchi, Sandro
2009 Along the Time Line: Tense and Time Adverbs in Italian Sign Language. In: Natural
Language Semantics 17, 99⫺139.
Zucchi, Sandro/Neidle, Carol/Geraci, Carlo/Duffy, Quinn/Cecchetto, Carlo
2010 Functional Markers in Sign Languages. In: Brentari, Diane (ed.), Sign Languages (Cam-
bridge Language Surveys). Cambridge: Cambridge University Press, 197⫺224.

Roland Pfau, Amsterdam (The Netherlands)


Markus Steinbach, Göttingen (Germany)
Bencie Woll, London (United Kingdom)

10. Agreement auxiliaries


1. Introduction
2. Form and function of agreement auxiliaries
3. Agreement auxiliaries in different sign languages ⫺ a cross-linguistic comparison
4. Properties of agreement auxiliaries in sign languages
5. Grammaticalization of auxiliaries across modalities
6. Conclusion
7. Literature

Abstract

In this chapter, I summarize and discuss findings on agreement auxiliaries from various
sign languages used across the world today. These functional devices have evolved in
order to compensate for the ‘agreement gap’ left when a plain verb is the main verb of
a sentence. Although tracing back the evolutionary path of sign language auxiliaries can
be quite risky due to the scarcity of documentation of older forms of these languages,
internal reconstruction of the grammaticalization paths in sign languages, cross-checked
with cross-linguistic tendencies of grammaticalization of auxiliaries in spoken languages,
provides us with some safe assumptions: grammaticalization follows more or less the
same pathways irrespective of the visual-gestural modality of sign languages. At the same
time, however, the development of sign language auxiliaries exhibits some unique charac-
teristics, such as the possibility for a signed language agreement auxiliary to have a
nominal, a pronominal, or even a gestural source of grammaticalization.

1. Introduction
Agreement between the verb and its arguments (i.e. subject and object or source and
goal) in a sentence is one of the essential parts of the grammar in many languages.
Most sign languages, like many spoken languages, possess inflectional mechanisms for
the expression of verbal agreement (see chapter 7 for verb agreement). Auxiliaries,
that is, free grammatical elements accompanying the main verb of the sentence, are
not amongst the most usual means of expressing agreement in spoken languages
(Steele 1978, 1981). Hence, the wide use of agreement auxiliaries in sign languages has
become an issue of great interest (Steinbach/Pfau 2007).
As discussed in chapter 7, in sign languages, verbal agreement is realized by modifi-
cation of path movement and/or hand orientation of the verb stem, thereby morpho-
syntactically marking subject and object (or source/agent and goal/patient) in a sen-
tence. Agreement auxiliaries use the same means for expressing agreement as
agreement verbs do. They are mainly used with plain verbs, which cannot inflect for
agreement. Agreement auxiliaries are either semantically empty, or their lexical mean-
ing is very weak (i.e. light verbs); they occupy similar syntactic positions as inflected
verbs or (in the case of light-verb-like auxiliaries) seem to be part of a serial verb
construction. Only in a few cases are they able to inflect for aspect, but they commonly
have reciprocal forms (Sapountzaki 2005; Quer 2006; Steinbach/Pfau 2007; de Quadros/
Quer 2008). However, although sign languages have been considered unique as to their
rich morphological agreement expressions, unbound agreement auxiliaries were until
recently under-researched (Smith 1991; Fischer 1996).
The focus of this chapter is on the grammatical functions, as well as on the evolu-
tionary processes which have shaped this set of free functional elements that are used
as agreement auxiliaries in many genetically unrelated sign languages. It is not the
main purpose of this study to give a detailed account of each and every auxiliary,
although such information will be employed for outlining the theoretical issues related
to sign language agreement auxiliaries. Specifically in the case of sign languages, the
device of agreement auxiliaries is closely related to at least three other issues, which are
discussed in depth in other chapters of this volume, namely morphological agreement
inflection (see chapter 7), indexical pronouns (see chapter 11), and grammaticalization
(see chapter 34 for further discussion). The present study builds on the information
and assumptions provided in these three chapters and it attempts to highlight links
between them so that the grammatical function and the historical development of
agreement auxiliaries will become clearer.
This chapter is organized as follows: the form and function of agreement auxiliaries,
as well as general implications of their study for linguistic theories and the human
language faculty, form the main parts of this chapter. Section 2 gives a brief overview
of the forms and functions of agreement auxiliaries, in order to familiarize the reader
with these specific grammatical markers. Moreover, I discuss the restrictions on the
use of agreement auxiliaries in sign languages. In section 3, I introduce various sign
languages that make use of agreement auxiliaries. Section 4 examines one by one a set
of grammatical properties of sign language auxiliaries and considers possible implica-
tions for our understanding of sign languages as a major group of languages. In sec-
tion 5, I will compare auxiliaries in sign languages to their counterparts in spoken
languages. The final section summarizes the main issues addressed in this chapter.

2. Form and function of agreement auxiliaries

In spoken languages, auxiliaries have many different functions, such as expressing


tense, aspect, modality, and grammatical voice (genus verbi), amongst others. In addi-
tion, auxiliaries also express verbal agreement features in many spoken languages.
However, the realization of agreement is usually not the main function of spoken
language auxiliaries (but see the discussion of German tun (‘do’) in Steinbach and
Pfau (2007)). The morphosyntactic expression of verb agreement was one of the first
grammatical features of sign languages to be researched (Padden 1988). Some verbs
in sign languages express agreement between subject and object (or between source
and goal) on the morphosyntactic level by modifying path movement and/or hand
orientation. These verbs sharing specific grammatical properties are called agreement
verbs. By contrast, another set of verbs, the so-called plain verbs, does not share the
property of morphosyntactic expression of agreement, and it is primarily with these
verbs that agreement auxiliaries find use, as will be described below. Interestingly, the
main function of agreement auxiliaries is the overt realization of verb agreement with
plain verbs that cannot be modified to express agreement. Hence, agreement auxilia-
ries in sign languages differ from auxiliaries in spoken languages in that they are not
used to express tense, aspect, modality, or genus verbi.

2.1. Functions of sign language agreement auxiliaries

Agreement auxiliaries in sign languages have a different function from their counter-
parts in spoken languages in that their most important function is to express agreement
with the subject and the object of the sentence ⫺ see example (1a) below. All of the
agreement auxiliaries that have been described in sign languages accompany main
verbs, as is the norm in most spoken languages, too. As stated above, sign language
agreement auxiliaries usually accompany plain verbs, that is, verbs that cannot inflect
for agreement. In addition, agreement auxiliaries occasionally accompany uninflected
agreement verbs. Moreover, agreement auxiliaries have been found to accompany in-
flected agreement verbs (see example (1b) from Indopakistani Sign Language (IPSL))
thereby giving rise to constructions involving split and/or double inflection (following
Steinbach/Pfau 2007). This will be described in more detail in section 4 below. Besides
verbs, in many sign languages, agreement auxiliaries also accompany predicative adjec-
tives such as proud in example (1c) from German Sign Language (DGS).
While a large proportion of the attested agreement auxiliaries are semantically
empty, a subset of semi-grammaticalized auxiliaries such as give-aux in example (1d)
from Greek Sign Language (GSL) still has traceable roots (in this case, the main verb
to give) and still carries semantic load expressing causativity or transitivity. This results
in agreement auxiliaries which select for the semantic properties of their arguments
and put semantic restrictions on the possible environments they occur in. Moreover,
since agreement verbs usually select animate arguments, most agreement auxiliaries
also select [+animate] arguments. Example (1a) is from Massone and Curiel (2004),
(1b) from Zeshan (2000), (1c) from Steinbach and Pfau (2007), and (1d) from Sapoun-
tzaki (2005); auxiliaries are in bold face.

(1) Agreement auxiliaries in different sign languages


a. john1 mary2 love 1AUX2 [LSA]
‘John loves Mary.’
b. leftAUX1 complete leftteach1. [IPSL]
‘He taught me everything completely.’
c. ix1 poss1 brother ix3a proud 1PAM3a [DGS]
‘I am proud of my brother.’
d. ix2 2GIVE-AUX3 burden end! [GSL]
‘Stop being a trouble/nuisance to him/her!’
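
All four auxiliaries in (1) spell out agreement as a path between two referential loci, so
their core behaviour can be stated uniformly. The following sketch is an expository
abstraction (the lexicon, loci, and gloss format are invented for illustration and simplify
over the language-particular details discussed below): the auxiliary steps in precisely
when the main verb is plain.

    # Expository abstraction: an agreement auxiliary realizes subject
    # and object agreement as movement between referential loci and is
    # used when the main verb is plain. Lexicon and loci are invented.

    PLAIN_VERBS = {"love", "like"}          # cannot inflect for agreement
    LOCI = {"john": "3a", "mary": "3b"}     # referents localized in space

    def clause(subj, verb, obj):
        s, o = LOCI[subj], LOCI[obj]
        if verb in PLAIN_VERBS:
            # uninflected verb plus auxiliary moving from the subject
            # locus to the object locus, as in (1a)
            return f"{subj} {obj} {verb} {s}aux{o}"
        # an agreement verb carries the path itself; no auxiliary needed
        return f"{subj} {obj} {s}{verb}{o}"

    print(clause("john", "love", "mary"))   # john mary love 3aaux3b
    print(clause("john", "help", "mary"))   # john mary 3ahelp3b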

As in spoken languages, the same forms of agreement auxiliaries may also have addi-
tional grammatical functions when used in different syntactic slots or in specific envi-
ronments in sign languages. They may, for instance, also function as disambiguation
markers, such as the Brazilian Sign Language (LSB) agreement auxiliary when used
preverbally. The DGS auxiliary can also be used as a marker of emphasis, similar to
the insertion of do in emphatic sentences in English, and the auxiliaries in Flemish
Sign Language (VGT) and GSL are also markers of transitivity and causativity.

2.2. Forms and origins of agreement auxiliaries

Based on the origins of agreement auxiliaries, Steinbach and Pfau (2007) have pro-
posed a three-way distinction in their study on the grammaticalization of agreement
auxiliaries:

(i) indexical auxiliaries, which derive from concatenated pronouns; see the IPSL ex-
ample in Figure 10.1 (note that we cannot exclude the possibility that the indexical
signs went through an intermediate evolutionary stage of path/motion or transfer
markers of a verbal nature);
(ii) non-indexical agreement auxiliaries and semi-auxiliaries which derive from main
verbs such as give, meet, go-to; see the GSL example in Figure 10.2; and
(iii) non-indexical agreement auxiliaries which derive from nouns like person (see the
DGS example in Figure 10.3 (the DGS auxiliary is glossed as pam, which stands
for Person Agreement Marker (Rathmann 2001)).
Fig. 10.1: Indexical auxiliary derived from pronoun: aux1 (IPSL, Zeshan 2000); the three
panels show ‘you to him/her’, ‘I to him and he to me’, and ‘to each other’. Copyright © 2000
by John Benjamins. Reprinted with permission.

Fig. 10.2: Non-indexical agreement auxiliary derived from verb; pictures show beginning and end
point of movement: give-aux (GSL, Sapountzaki 2005)

Fig. 10.3: Non-indexical agreement auxiliary derived from noun; pictures show beginning and end
point of movement: 3apam3b (DGS, Rathmann 2001). Copyright © 2001 by Christian
Rathmann. Reprinted with permission.

Note that neither the first nor the third subgroup of auxiliaries is common in spoken
languages, where auxiliaries are usually grammaticalized from verbs (i.e. subgroup (ii)).
In contrast, grammaticalization of auxiliaries from nouns is rare, if it exists
at all. The abundant occurrence of sign language auxiliaries that have developed from
pronouns or from a paralinguistic means such as indexical gestures is also intriguing.
Actually, the latter development, from pointing sign via pronoun to subject/object-
agreement auxiliary, is the most common one identified in the sign languages investi-
gated to date; it is attested in, for instance, GSL (Sapountzaki 2005), IPSL, Japanese
Sign Language (NS) (Fischer 1992, 1996), and Taiwan Sign Language (TSL) (Smith
1989, 1990). Fischer (1993) mentions the existence of a similar agreement auxiliary in
Nicaraguan Sign Language (ISN), glossed as baby-aux1, which evolved in communica-
tion amongst deaf children. Another similar marker resembling an indexical auxiliary
has been reported in studies on the communication of deaf children who are not ex-
posed to a natural sign language but either to artificial sign systems (Supalla 1991, cited
in Fischer 1996) or to no sign systems at all (Mylander/Goldin-Meadow 1991, cited in Fischer
1996). This set of grammaticalized indexical (pointing) auxiliaries belongs to the
broader category of pronominal or determiner indexical signs, which, according to the
above findings, have evolved ⫺ following universal tendencies of sign languages ⫺
from pointing gestures to a lexicalized pointing sign (Pfau/Steinbach 2006, 2011).
In contrast to indexical auxiliaries, the second set of agreement auxiliaries has ver-
bal roots. The lexical meaning of these roots can still be traced. Such auxiliaries mostly
function as semi-auxiliaries and have not spread their use to all environments. They
are attested in TSL (Smith 1990), GSL (Sapountzaki 2005), VGT (Van Herreweghe/
Vermeerbergen 2004), and Sign Language of the Netherlands (NGT) (Bos 1994). The
third group consists at the moment of only two auxiliaries, attested in DGS and Catalan
Sign Language (LSC). Both auxiliaries derive from the noun person (see Figure 10.3).
In the following section, I will analyze the individual characteristics of the auxiliaries
of all three groups language by language.

3. Agreement auxiliaries in different sign languages ⫺ a cross-linguistic comparison

Agreement auxiliaries are part of the grammar of numerous sign languages, which will
be described individually in alphabetical order in this section. Firstly, I describe the
form and the syntactic properties of each auxiliary. Then I discuss the historical devel-
opment of each auxiliary for all cases where there is evidence for a specific grammati-
calization process. Finally, I turn to other issues that are essential for the function of
each auxiliary (such as, for example, the use of non-manual features).

3.1. Comparative studies on agreement auxiliaries

Agreement auxiliaries are attested in a wide range of sign languages from around the
world. These agreement auxiliaries can be found in historically unrelated sign lan-
guages, and appear at different stages in the evolutionary continuum. Agreement
markers explicitly described as auxiliaries appear in descriptions of the following sign
languages:

Argentine Sign Language (LSA)
Brazilian Sign Language (LSB)
Catalan Sign Language (LSC)
Danish Sign Language (DSL)
Flemish Sign Language (VGT)
German Sign Language (DGS)
Greek Sign Language (GSL)
Indopakistani Sign Language (IPSL)
Japanese Sign Language (NS)
Sign Language of the Netherlands (NGT)
Taiwan Sign Language (TSL)

Initially, work was done from a language-specific perspective, analyzing agreement
markers or sets of markers within a specific sign language. Recently, comparative stud-
ies also appeared in the field of sign language linguistics, shedding light on the similar-
ities and differences of agreement auxiliaries across sign languages. The first compara-
tive studies of agreement markers in sign languages (Engberg-Pedersen 1993; Fischer
1996; Zeshan 2000; Rathmann 2001) used findings from TSL, IPSL, DGS, and NGT
combined with evidence of agreement auxiliaries in NS (which is historically related
to TSL and structurally similar). The first cross-linguistic generalizations concerning
auxiliaries in sign languages were already drawn in these studies: these auxiliaries,
initially referred to as ‘pointing signs’, were identified as functional elements and not
as pronouns that realize a specific grammatical function and are directly linked to the
main verb of the sentence. But already in these studies, it became apparent how differ-
ent the uses, origins, and distribution of pronominal auxiliaries may be, even in struc-
turally related languages such as TSL and NS. More recent comparative studies by
Quer (2006) and de Quadros and Quer (2008) discuss the typological status of two
indexical auxiliaries in LSC and LSB, both glossed as aux-ix. Amongst other issues,
these studies provide information on the syntax and inflection of agreement auxiliaries,
as well as on the verbs they accompany and their distribution in various syntactic
environments.
However, the broadest typological study on agreement auxiliaries in sign languages
to date is the one by Steinbach and Pfau (2007), which is also one of the main sources
for this chapter. These authors emphasize the grammaticalization processes underlying
agreement auxiliaries. The grammaticalization of auxiliaries in spoken languages is
compared to the grammaticalization of agreement auxiliaries in sign languages. The
authors conclude that modality-independent as well as modality-specific cognitive
processes and grammaticalization paths characterize overall the grammaticalization of
agreement auxiliaries in sign languages; the peculiarity of indexical pronominal sources
for agreement auxiliation in sign languages, for instance, is attributed to specific prop-
erties of the visual-gestural modality.

3.2. Argentine Sign Language

In their work on sign order in LSA, Massone (1994) and Massone and Curiel (2004)
compare the articulatory nature of pronouns and of an indexical agreement auxiliary;
they conclude that morphologically, pronoun copy differs from a transitive auxiliary
aux. The auxiliary almost always appears in sentence-final position (2a). However,
when it is combined with an agreement verb, it may appear in a preverbal position
(2b), and when it is used in interrogative clauses with an overt sentence-final interroga-
tive pronoun, aux precedes the wh-pronoun (2c). Its function is restricted to the ex-
pression of agreement, while its form indicates that it is grammaticalized from two
concatenated pronouns. The auxiliary is produced with a “smooth hold followed by a
curved movement between two different loci in the signing space, also ending with a
smooth hold” (Massone 1993). By contrast, a pronoun copy still retains more specific
beginning and end points of each pronoun, thus being grammaticalized to a lesser ex-
tent.

(2) Argentine Sign Language indexical auxiliary [LSA]


a. bob ix1 send-letter 3aux1
‘Bob sends me a letter.’
b. 3aaux3b say-yes
‘He says yes to her.’
_______________wh
c. ix2 say3 2aux3 what
‘What did you tell him/her?’

3.3. Flemish Sign Language

In VGT, the agreement auxiliary glossed as give (geven in Dutch) is phonologically


similar to the VGT main verb meaning ‘to give’ (Van Herreweghe/Vermeerbergen
2004). give needs two animate arguments and it tends to appear in reversible sentences
where subject/source and object/goal can occupy interchangeable syntactic slots (3a),
as long as the movement path of the auxiliary is from subject/source to object/goal. It
functions as a semi-auxiliary, marking subject/object-agreement as well as causativity,
a semantic property which can be traced back to its lexical source to give (see (3)
below). Its lexical source and the grammaticalization process are apparently still visible
in its present form.

(3) Flemish Sign Language semi-auxiliary give [VGT]


a. girl give boy hit
‘The girl hits the boy.’
b. man give dog caress
‘The man is caressing the dog.’

3.4. German Sign Language

The person agreement marker in DGS has been analyzed in several studies. The first
study on this marker is the one by Keller (1998), where it was glossed as auf-ix because
it used to be accompanied by a mouthing related to the German preposition auf (‘on’).
Phonologically, the auxiliary is similar to the sign for person. Rathmann (2001) glossed
this auxiliary as pam (Person Agreement Marker), a gloss that hints at its phonological
form as well as its morphosyntactic function in DGS. In this study, pam was described
as a marker which mainly occupies a preverbal position (its postverbal position had
not been discussed prior to this) and has the ability to inflect for singular, dual, and
distributional plural. The postverbal use of pam in (4a) is described in Steinbach and
Pfau (2007), who argue that the syntactic distribution of pam in DGS is subject to
dialectal variation. Rathmann (2001) was the first to describe this marker as an agree-
ment auxiliary, which is used in association with verb arguments that refer to animate
or human entities. pam can inflect for number and person. Rathmann argues that the
use of pam with specific main verbs is subject to certain phonological constraints, that
is, it is used primarily with plain verbs such as like in (4a), but it also complies with
semantic criteria, in that the use of pam may force an episodic reading (4c). Besides
plain verbs, pam can also be used with adjectival predicates such as proud in (4b),
which do not select source and goal arguments, that is, with predicates that do not
involve the transition of an object from a to b. Rathmann claims that pam, unlike most
agreement verbs, does not express agreement with source and goal arguments but
rather with subject and direct object. Interestingly, when used with uninflected back-
ward verbs such as invite, pam does not move from the position of the source to the
position of the goal but from the position of the subject to the position of the object
(cf. also Steinbach/Pfau 2007; Pfau et al. 2011; Steinbach 2011). Hence, pam has devel-
oped into a transitivity marker which is not thematically (source/goal) but syntactically
restricted (subject/object). Note finally that with plain verbs, pam can also be used as
a reciprocal marker (Pfau/Steinbach 2003). Examples (4a) and (4b) are from Steinbach
and Pfau (2007, 322), (4c) is from Rathmann (2001).

(4) German Sign Language auxiliary pam [DGS]


a. mother ix3a neighbor new ix3b like 3apam3b
‘My mother likes Mary.’
b. ix1 poss1 brother ix3a proud 1pam3a
‘I am proud of my brother.’
c. son2 mother1 5-years 1pam2 teach
‘A mother has been teaching her son for 5 years.’ (episodic reading)
??‘A mother used to teach her son for 5 years.’ (generic reading)
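
Rathmann’s point that pam is syntactically rather than thematically conditioned becomes
concrete with backward verbs, where the two alignments come apart. The following
sketch schematizes this contrast (the verb classification and locus labels are invented
for illustration):

    # Schematic rendering of the contrast noted above: agreement verbs
    # move from source to goal, so a backward verb like INVITE moves
    # from object locus to subject locus; pam always moves from subject
    # locus to object locus. Classifications and loci are illustrative.

    BACKWARD = {"invite"}   # source = object, goal = subject

    def verb_path(verb, subj_locus, obj_locus):
        # source-to-goal path of an inflecting agreement verb
        if verb in BACKWARD:
            return (obj_locus, subj_locus)
        return (subj_locus, obj_locus)

    def pam_path(subj_locus, obj_locus):
        # pam is conditioned by grammatical function, not thematic role
        return (subj_locus, obj_locus)

    print(verb_path("invite", "1", "3a"))   # ('3a', '1')
    print(pam_path("1", "3a"))              # ('1', '3a')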

3.5. Greek Sign Language

GSL has two different agreement auxiliaries. Firstly, there is some evidence for an
indexical agreement auxiliary ix-aux, although it does not occur frequently in sponta-
neous data (Sapountzaki 2005). As in other sign languages where indexical auxiliaries
are observed, the movement of the one-handed auxiliary starts with the finger pointing
towards the subject locus and ends with the finger pointing towards the object locus,
the movement being a smooth path from subject to object. In addition, ix-aux appears
in a reciprocal form meaning ‘transmitting to each other’: in fact, the GSL sign usually
glossed as each-other seems to be no more than the inflected, reciprocal form of ix-
aux. The reciprocal form can also appear with strong aspectual inflection (progressive
or repetitive). It can be used with the verbs telephone, fax, help, and communicate-
through-interpreter. Interestingly, all of the verbs of transmission of information,
which typically combine with the GSL ix-aux, are by default agreement verbs in GSL,
which does not support the argument that the evolution of an indexical agreement
auxiliary covers an ‘agreement gap’ in grammar. One hypothesis is that this indexical
sign selects only verbs that semantically relate to ‘transmission of message’. However,
there is not enough evidence to support this hypothesis further at this point.
Secondly, a non-indexical semi-auxiliary marking agreement is also used in GSL. It
is glossed as give-aux. In terms of grammatical function, its role is to make an intransi-
tive mental state verb, such as feel-sleepy, transitive and, in addition, to express the
causation of this state. Occasionally, it may combine with atelic verbs of activity like
sing, suggesting that the use of give-aux is expanding to atelic, body-anchored verbs,
in addition to plain verbs of mental or emotional state, which typically are also body-
anchored. It appears that the criteria for selecting the verbs that combine with give-
aux are both semantic and structural in nature. Usually (though not always; see (5b)
below), give-aux appears in structures including first person (non-first to first,
or first to non-first). The auxiliary may inflect for aspect, but it is more common for the
main verb to carry aspectual inflection, while the auxiliary only carries the agreement
information (5d).

(5) Greek Sign Language non-indexical auxiliary give-aux [GSL]


a. deaf in-grouploc:c sign-too-much 3give-aux1 get-overwhelmed
‘Deaf who are too talkative make me bored and overwhelmed.’
b. ix2 2give-aux3 burden end!
‘Stop being a trouble / nuisance to him/her!’
c. ix2 sign++ stative 2give-aux1 get-overwhelmed, end!
‘Stop signing the same things again and again, you are getting very tiresome
to me!’
d. ix1 sea all-in-front-of-me sit, what? 3give-aux1 be-calm.
‘Sitting by the sea makes me calm.’

3.6. Indopakistani Sign Language

The IPSL agreement auxiliary, which is glossed as aux (or ix in some earlier studies;
e.g. Zeshan 2000), is similar to the indexical GSL auxiliary discussed in the previous
section. The IPSL auxiliary has the phonological form of an indexical sign with a
smooth movement between two or more locations, with the starting point at the locus
linked to the source of the action and the end point(s) at the locus or loci linked to
the goal(s) of the action. It is thus used to express spatial agreement with the source
and goal arguments, as is illustrated in (6a) and (6b). Its sentence position varies,
depending on whether the main verb it accompanies is a plain verb or an agreement
verb. Generally, the auxiliary occupies the same syntactic slot as the indexical sign ix
in its basic localizing function, that is, immediately before or after the (non-agreeing)
verb. When used with plain verbs, the auxiliary immediately follows the predicate (6c).
When accompanying an agreement verb, the auxiliary may precede and/or follow the
main verb and thus may be used redundantly, yielding structures with double (6a) or
even multiple markings of agreement (6b). It can also stand alone in an elliptical sen-
tence (6d) where the main verb is known from the context. In this case, it is usually
associated with communicative verbs (say, tell, talk, inform, amongst others). Finally,
and similar to the GSL auxiliary ix-aux, it can also express reciprocity. aux is a verbal
functional element, which is semantically empty. In sum, it is a fully grammaticalized
auxiliary verb that accompanies main verbs.

(6) Indopakistani Sign Language indexical auxiliary [IPSL]


a. leftaux1 all complete leftteach1.
‘He taught me everything completely.’
b. sign work 1aux0 0aux3b 3baux0 0aux1 1aux0 0both3b
‘I discuss the matter via an interpreter.’
q
c. understand 2aux1?
‘Do you understand me?’
d. yasin rightaux1 deaf little end.
‘Yasin told me that there are few deaf people.’

3.7. Japanese Sign Language

Fischer (1996) provides evidence of an indexical auxiliary used in NS. Like aux-1 in
TSL, which will be discussed below, aux-1 in NS seems to be a smoothed series of
indexical pronouns (pronoun copy is a common phenomenon in NS, much more than
in American Sign Language (ASL)). In aux-1, the boundaries of the individual pronouns,
that is, their beginning and end points, have been assimilated phonologically. Its
sentence position is more fixed than that of pronouns. It does not co-occur with certain
pronoun copy verbs and is not compatible with gender marking. All these verb-like
properties show that aux-1 in NS is a grammaticalized person agreement marker and
that it functions as an agreement auxiliary, as illustrated in (7) (Fischer 1996, 107).

(7) Japanese Sign Language indexical auxiliary [NS]


child3a teacher3b like 3aaux-13b
‘The child likes the teacher.’

3.8. Sign Language of the Netherlands

Inspired by studies on agreement auxiliaries in TSL, Bos (1994) and Slobin and Hoiting
(2001) identified an agreement auxiliary in NGT, glossed as act-on. The grammatical
function of this auxiliary is to mark person agreement between first and second person
or between first and third and vice versa; see example (8). act-on accompanies verbs
selecting arguments which are specified for the semantic feature [+human]. The posi-
tion of act-on in the sentence is not absolutely fixed, although in more than half of
the examples analyzed, act-on occupies a postverbal position. In elliptical sentences,
it can also stand alone without the main verb. Historically, act-on seems to be derived
from the main verb go-to (Steinbach/Pfau 2007), but unlike go-to, act-on is often
accompanied by the mouthing /op/, which corresponds to the Dutch preposition op
(‘on’), although act-on is not always used in contexts where the preposition op would
be grammatically correct in spoken Dutch. In the Dutch equivalent of (8), for instance,
the preposition op would not be used. As for the source, an alternative analysis would
be that act-on is an indexical auxiliary, that is, that it is derived from two concatenated
pronouns, just like the auxiliaries previously discussed.

(8) Sign Language of the Netherlands auxiliary act-on [NGT]


ix1 partner ix3a love 3aact-on1
‘My boyfriend loves me.’

Bos (1994) found a few examples where both the main verb and act-on agree, that is,
instances of double agreement marking. Consequently, she argues that agreement
verbs and agreement auxiliaries are not mutually exclusive. In other words, act-on can
combine with an already inflected agreement verb to form a grammatical sentence.
Just like agreement verbs, act-on marks subject and object agreement by a change in
hand orientation and movement direction. However, unlike agreement verbs, it has no
lexical meaning, and its function is purely grammatical, meaning ‘someone performs
some action with respect to someone else’.
Remember that act-on might have developed from either a spatial verb or pro-
nouns. According to Bos, act-on is distinct from NGT pronouns with respect to manner
of movement (which is rapid and tense); also, unlike indexical auxiliaries derived from
pronouns, act-on does not begin with a pointing towards the subject. Although it
cannot be decided with certainty whether act-on is derived from two concatenated
pronouns or from a verb, the latter option seems to be more plausible. This brings us
back to the question of grammaticalization in the systems of sign languages. In both
studies on act-on, reference is made to the accompanying mouthing (a language con-
tact phenomenon), suggesting that the sign retains some traces of its lexical origin. In
other sign languages, such as DGS, the initial use of mouthing with the agreement
auxiliary (pam) has gradually decreased, so that the DGS auxiliary is currently used
without mouthing (i.e. in a phonologically reduced form), thus being grammaticalized
to a greater extent (Steinbach/Pfau 2007). Trude Schermer (p.c.) suggests that the NGT
auxiliary is presently undergoing a similar change.

3.9. Taiwan Sign Language

Smith (1990, 1991) provides the first detailed discussion of agreement auxiliaries in a sign
language. He focuses on TSL and describes which properties the TSL auxiliaries share
with other auxiliaries cross-modally (Steele 1978). The three TSL auxiliaries serving
as subject/object-agreement markers are glossed as aux-1, aux-2, and aux-11, based
on their function (auxiliary) and their phonological form: (i) aux-1 is indexical, using
the handshape conventionally glossed as ‘1’ (index finger extended); (ii) aux-2 is identical
to the TSL verb see, using the handshape glossed as ‘2’ (index and middle fingers extended);
and (iii) aux-11 is phonologically identical to the two-handed TSL verb meet, performed
with two ‘1’ handshapes (glossed as ‘11’). The
use of aux-11 is illustrated in (9).

(9) Taiwan Sign Language non-indexical auxiliary aux-11 [TSL]


top
a. that vegetable, index1 1aux-113 not-like
‘I don’t like that dish.’
b. 3aaux-113b-[fem] teach3b-[fem]
‘He/she teaches her.’

TSL agreement auxiliaries differ from verbs in syntax: they most often appear in a
fixed position before the main verb. They are closely attached to the main verb and
mark person, number, and gender, but not tense, aspect, or modality. In (9b), gender
is marked on the non-dominant hand by an extended-pinky handshape (Smith 1990, 222). Usually, the
auxiliaries co-occur with plain verbs or with unmarked forms of agreement verbs. In
(9b), however, both the lexical verb and the auxiliary are marked for object agreement
(and the auxiliary in addition for subject agreement). Historically, the auxiliaries have
developed from different sources. As mentioned above, aux-1 might result from a
concatenation of pronouns, while aux-2 and aux-11 are phonetically identical to the
TSL verbs see and meet, respectively, and seem to derive from ‘frozen’ uninflected
forms of these verbs. They all seem to have proceeded along a specific path of gram-
maticalization and have lost their initial lexical meanings, as is evident from the exam-
ples in (9).

3.10. Sign languages without agreement auxiliaries

So far, we have seen that a number of unrelated sign languages employ agreement
auxiliaries to express verb agreement in various contexts. However, this does not neces-
sarily mean that agreement auxiliaries are modality-specific obligatory functional el-
ements that can be found in all sign languages. Actually, quite a few sign languages
have no agreement auxiliaries at all. ASL, for example, does not have dedicated agree-
ment auxiliaries (de Quadros/Lillo-Martin/Pichler 2004). Likewise, British Sign Lan-
guage (BSL), like ASL, distinguishes between agreement verbs and plain verbs but
has not developed a means to express agreement with plain verbs (Morgan/Barrière/
Woll 2003). For ASL, it has been argued that non-manual markers such as eye-gaze
are used to mark object agreement with plain verbs (cf. Neidle et al. 2000; Thompson/
Emmorey/Kluender 2006). In the case of young sign languages, agreement as an inflec-
tional category may not even exist, as is the case in Al-Sayyid Bedouin Sign
Language (ABSL), used in the Bedouin community of Al-Sayyid in the Negev in Israel
(Aronoff et al. 2004).

4. Properties of agreement auxiliaries in sign languages

4.1. Inflection carried by agreement auxiliaries

In sign languages, the grammatical expression of agreement between the verb and two
of its arguments is restricted to a specific group of verbs, the so-called agreement verbs.
In some sign languages, agreement auxiliaries take up this role when accompanying
plain verbs, which cannot inflect for subject/object-agreement. In DGS, when pam
accompanies an agreement verb, the latter usually does not show overt agreement (Rathmann 2001).
The distribution of agreement auxiliaries is equally clear-cut in many sign languages.
In LSB, the indexical agreement auxiliary usually combines with plain verbs, but
when the same (indexical) form accompanies an agreement verb, the auxiliary takes
over the function of a subject/object-agreement marker and the agreement verb re-
mains uninflected. Interestingly, in these cases, the sentential position of the LSB
marker is different (preverbal instead of postverbal), possibly indicating a different
grammatical function of the auxiliary. In some sign languages (e.g. DGS), double inflec-
tion of both the main verb and the agreement auxiliary is possible. Such cases are,
however, considered redundant, that is, not essential for marking verb agreement. Pos-
sibly, double agreement serves an additional pragmatic function like emphasis in this
case (Steinbach/Pfau 2007). However, there are exceptions to this tendency: in some
other sign languages, such as IPSL or LSC, agreement auxiliaries commonly accom-
pany agreement verbs, either inflected or uninflected, without any additional pragmatic
function (Quer 2006; de Quadros/Quer 2008). In contrast, in other sign languages,
such as GSL and NS, examples of double agreement are reported to be
ungrammatical (Fischer 1996).
A related issue is the semantics of the auxiliary itself, and the semantic properties
of its arguments in the sentence. Most auxiliaries that evolved from indexical (pronomi-
nal) signs are highly grammaticalized, purely functional, and semantically empty el-
ements. The movement from subject to object may go back to a gesture tracing the
path of physical transfer of a concrete or abstract entity from one point in the sign
space to another. The grammaticalized agreement auxiliary expresses the metaphorical
transfer from the first syntactic argument to the second one. Although in sign languages
transfer from a point x to a point y in topographic sign space is commonly realized by
means of classifiers, which carry semantic information about the means or instrument
involved in this transfer (see chapter 8 for discussion), the movement of a seman-
tically empty indexical handshape can be seen as the result of a desemanticization
process in the area of the grammatical use of the sign space. While in some sign lan-
guages, agreement auxiliaries are fully functional elements that may combine with a
large set of verbs, in other sign languages, agreement auxiliaries cannot accompany
main verbs of all semantic groups. Take, for example, the GSL ix-aux that only accom-
panies verbs expressing transmission of a metaphorical entity, like send-fax or tele-
phone (Sapountzaki 2005). In NGT, TSL, and LSB, agreement auxiliaries may combine
with main verbs of any semantic group but require their arguments to be specified as
[+human] or at least [+animate].
The ability of agreement auxiliaries to inflect for aspect, as well as their ability to
inflect for person, also varies amongst sign languages. In sign languages, various types
of aspectual inflection are usually expressed on the main verb by means of reduplica-
tion and holds (see chapter 9 for discussion). In auxiliary constructions, aspectual in-
flection is still usually realized on the main verb ⫺ in contrast to what is commonly
found in spoken languages. In LSB, for instance, aux-ix cannot inflect for aspect. The
same holds for pam in DGS. However, in a few sign languages, agreement auxiliaries
can express aspectual features (e.g. GSL give-aux). Similarly, in some sign languages,
agreement auxiliaries do not have a full person paradigm. GSL give-aux has a strong
preference to occur in first person constructions while in sentences with non-first per-
son subject and object, ix-aux is usually used. Thus, in GSL, the distribution of ix-aux
and give-aux seems to be complementary.
Finally, note that some of the agreement auxiliaries, such as pam in DGS, ix-aux and
give-aux in GSL, and aux in IPSL, can also be used in reciprocal constructions. The
reciprocal form of the agreement auxiliaries may either be two-handed ⫺ both hands
moving simultaneously in opposite directions ⫺ or one-handed ⫺ in this case, the
reciprocal function is expressed by a sequential backward movement.

4.2. Syntactic position

In syntax, agreement auxiliaries show a considerable amount of variation. The position
of the DGS auxiliary pam appears to be subject to dialectal variation, as it may occupy
either a preverbal (post-subject; Rathmann 2001) or a postverbal position (Steinbach/
Pfau 2007). By contrast, LSA aux and LSB aux-ix usually occupy the sentence-final
position. In GSL and TSL, indexical agreement auxiliaries are attested in two different
positions, preverbal or sentence-final in GSL and sentence-initial or preverbal in TSL.
Unlike in DGS, in GSL this variation is not dialectal. In some sign languages, the
function of the agreement auxiliary may vary with the syntactic position. The LSB
auxiliary can, for example, appear in a preverbal position but with a different grammat-
ical function. In this position, it is used as a disambiguation marker. While the GSL
indexical agreement auxiliary ix-aux occupies the sentence-final or at least post-verbal
position, give-aux appears immediately preverbal. The reason for this distribution may
be articulatory: in most uses of give-aux, the end point of the movement is the signer
(i.e. position ‘1’). Since the auxiliary only accompanies body-anchored signs, an order
in which the body-anchored main verb follows the agreement auxiliary is articulatorily
preferable. Other
parameters may also play a role in determining the syntactic position of agreement
auxiliaries. The indexical agreement auxiliary in IPSL, for instance, occupies the sen-
tence-final position when the main verb is plain, while it has a more flexible distribu-
tion when the main verb is an agreeing verb.
Concerning syntactic structure, sign languages can be divided into two types: (i) sign
languages which are specified as [+aux], like the sign languages with agreement auxil-
iaries described in this chapter, and (ii) [⫺aux] languages, like ASL or BSL, which do
not have agreement auxiliaries. According to Rathmann (2001), only [+aux] languages
project agreement phrases where the agreement auxiliary checks agreement features
(note that Rathmann uses pam as a general label for agreement auxiliaries and thus
distinguishes between [±pam] languages).

4.3. Non-manual features

Mouthing ⫺ an assimilated cross-modal loan of (a part of) a spoken word (see chap-
ter 35 for discussion) ⫺ is a phenomenon that not all of the studies on agreement
auxiliaries address. In at least one case, that is, the NGT agreement auxiliary act-on,
mouthing of the Dutch preposition op is still fairly common (at least for some signers)
and can be considered as an integral part of the lexical form of the auxiliary. However,
according to recent studies at the Dutch Sign Centre (Nederlands Gebarencentrum),
use of mouthing is gradually fading. A similar process has previously been described
for the DGS auxiliary pam, which has lost its accompanying mouthing /awf/. This proc-
ess can be considered as an instance of phonological reduction. Moreover, in DGS, the
mouthing associated with an adjacent verb or adjective may spread over pam, thus
suggesting the development of pam into a clitic-like functional element.
In GSL, the non-indexical auxiliary give-aux, unlike the phonologically similar main
verb give, is not accompanied by a mouthing. Besides its specific syntactic position,
which is different from that of the main verb, the absence of mouthing identifies it as
an agreement auxiliary, a fact that further supports the hypothesis of the ongoing
grammaticalization of agreement auxiliaries.
Another interesting issue for theories of grammaticalization is the source of the
mouthings accompanying act-on and pam in NGT and DGS respectively. The mouthing
of the corresponding Dutch and German prepositions op and auf can either be ana-
lyzed as a cross-modal loan expression or as a Creole neologism taken from a language
of a different (oral) modality into a sign language. In both languages, the prepositions
are used with one-place predicates such as wait or be proud to mark objects (e.g. Ich
warte auf dich, ‘I am waiting for you’). Hence, the use of the agreement auxiliaries in
NGT and DGS corresponds to some extent to the use of the prepositions in Dutch
and German (i.e. ix wait pam, ‘I am waiting for you’). However, the use of the auxilia-
ries and the accompanying mouthings in NGT and DGS does not exactly match the
use of the prepositions op and auf in Dutch and German (i.e. ix laugh pam ⫺ Ich lache
über/*auf dich, ‘I laugh at you’). Moreover, although neither preposition functions as
an auxiliary in Dutch or German, the semantics of a preposition mean-
ing on nevertheless fits the semantic criteria for agreement auxiliary recruitment, that
is, the motion and/or location schemas proposed by Heine (1993).

4.4. Degree of grammaticalization

As mentioned above, agreement auxiliaries have developed from three different sour-
ces: (i) pronouns, (ii) verbs, and (iii) nouns. Indexical agreement auxiliaries are gener-
ally grammaticalized to a high degree. Examples of fully grammaticalized agree-
ment markers are the TSL auxiliary aux-1, its NS and IPSL counterparts, and the LSB
auxiliary aux-ix, all of which initially evolved from indexical signs. In their present
stage, they are semantically empty function words ⫺ they are reported to have no
meaning of their own and they only fulfill a grammatical function in combination with
a main verb. They can accompany many different verbs in these sign languages, and
their position can be predicted with some accuracy; in most cases, they immediately
precede or follow the main verb. Still, we also find some cases of indexical agreement
auxiliaries which are not fully grammaticalized: they do not inflect freely for person
and they select only arguments which are specified for the semantic feature [+human].
Moreover, the IPSL agreement auxiliary exhibits selectional restrictions on the verbs
it accompanies, as it is usually associated with communicative verbs meaning ‘say’,
‘tell’, ‘talk’, or ‘inform’ (Zeshan, p.c.).

Non-indexical agreement auxiliaries generally show a lower degree of grammaticali-
zation. The GSL auxiliary give-aux has developed from the main verb give. Although
it is not yet fully grammaticalized, there is clear evidence for this grammaticalization
path. Like the main verb, it requires a human recipient of an action; also, it inherited
the source/goal argument structure of the main verb. However, give-aux does not
inflect freely for agreement; it only combines with certain classes of verbs, and its use
is less systematic and less frequent than that of the other auxiliary in GSL. Hence,
it is not yet a fully grammaticalized agreement marker.
Although there is no historical relation between GSL and VGT, a very similar aux-
iliary is also found in VGT (see examples in (3) above). The VGT auxiliary give acts
as an agreement marker between actor and patient, both specified as [+animate].
Moreover, give appears not to be fully grammaticalized, as it has, for example, selectional
restrictions on its two arguments. In both form and function it thus resembles the
GSL auxiliary give-aux. GSL give-aux and VGT give comply with the criteria for low
grammaticalization proposed by Bybee et al. (1994), Heine (1993), and Heine and Kuteva
(2002): (i) selectivity of a marker with respect to the groups of verbs it combines with,
(ii) low ability for inflection, and (iii) synchronic use with an identical lexical form. In
addition, these markers still carry a significant amount of semantic content and, due
to semantic restrictions, are not used as frequently as indexical auxiliaries in a sign
language. Finally, these markers express more than only agreement in that they also
convey causativity and change-of-state. This shows that the grammaticalization of these
non-indexical agreement auxiliaries has not reached the end of the grammaticalization
continuum.
However, it is not the case that all non-indexical auxiliaries show a lesser degree of
grammaticalization. In some sign languages, non-indexical agreement auxiliaries can
also be fully grammaticalized, as is, for example, the case in TSL and DGS, where the
non-indexical markers appear to be highly grammaticalized. Consequently, the gram-
maticalization patterns of non-indexical agreement auxiliaries vary from language to
language, but overall they show a somewhat lower degree of grammaticalization than
the indexical ones. This is evidenced by a narrower grammatical distribution: non-
indexical auxiliaries may not have a full inflectional paradigm (GSL), may not combine
with all semantic groups of arguments (GSL, LSB, VGT, and DGS), may express more
than one grammatical function (VGT and GSL), and may show an overall higher se-
mantic load and light-verb characteristics.
In the next section, the discussion of shared linguistic properties of agreement auxil-
iaries in sign languages is expanded to include auxiliaries in spoken languages.

5. Grammaticalization of auxiliaries across modalities

5.1. Grammaticalization of gestures: the notion of transfer

Universally, the main function of grammaticalization as a cognitive mechanism is the
“exploitation of old means for novel functions” (Werner/Kaplan 1963, 403; cited in
Heine/Traugott 1991, 150). Following this reasoning, one may argue that sign languages
needed some grammatical means to express grammatical categories such as verb agree-
ment. However, this view does not provide us with sufficient answers to the question
why grammaticalization occurs in all languages, and why grammaticalized elements
often co-occur with other devices that express the same meaning (Heine/Claudi/Hün-
nemeyer 1997, 150; Heine/Traugott 1991). Borrowing of linguistic tokens might be an
alternative means of incorporating elements that fulfill novel functions but, apparently,
the driving forces of borrowing are not always adequate in practice cross-linguistically
(Sutton-Spence 1990; cited in Sapountzaki 2005).
A major issue in the evolution of sign language auxiliaries is the fact that some of
them are not simply grammaticalized from lexical items, but have evolved from a non-
linguistic source, that is, gestures. Indeed, strictly following the terminology of spoken
language linguistics, gestures cannot be considered as a lexical source that is the basis
of grammaticalization. According to Steinbach and Pfau (2007), agreement in sign
languages has a clear gestural basis (see also Wilcox (2002), Pfau/Steinbach (2006,
2011), and Pfau (2011) on the grammaticalization of manual and non-manual gestures).
In sign languages, gestures can enter the linguistic system, either as lexical elements or
as grammatical markers (also see chapter 34 on grammaticalization). Some of these
lexicalized gestures, such as the pointing sign index, can further develop into auxiliaries.
As mentioned at the beginning of this chapter, the most common assumption for
(Indo-European) spoken languages is that auxiliaries derive from verbs (e.g. English
will, may, shall, do). Irrespective of this apparent difference, however, there are com-
mon cognitive forces, such as the concept of transition from one place to another,
which is a common source for grammaticalization in both modalities. Many spoken
languages, some belonging to the Indo-European family and some not, use verbs such
as ‘go’, ‘come’, or ‘stay’ as auxiliaries (Heine 1993; Heine/Kuteva 2002). Similarly, trac-
ing a metaphorical path from the subject/agent to the object/goal, for example, is quite
common in many sign languages. This is just another realization of the same concept
of transition, although this spatial concept is realized in a modality-specific way in the
sign space. Thus, the spatial concept of transition from a to b is grammatically realized
by gestural means in sign languages, with the use of agreement verbs or agreement
auxiliaries. In the case of most agreement verbs, the physical movement between spe-
cific points in space either represents transfer of a concrete object (such as in the case
of give) or transfer of an abstract entity such as information (as in the case of verbs of
communication, e.g. explain). Finally, in the case of agreement auxiliaries this basic
concept of transition may be even more abstract since agreement auxiliaries may de-
note transfer of virtually any relation from subject to object, that is, they denote trans-
fer in a grammatical sense (Steinbach/Pfau 2007; cf. also Steinbach 2011).

5.2. Semantic emptiness and syntactic expansion

Two essential criteria for complete grammaticalization are the semantic emptiness of
a grammaticalized item and its syntactic expansion. Applying these criteria to sign
languages, the pointing handshape of indexical auxiliaries can be analyzed as a reduced
two-dimensional index, which carries as little visual information as possible, in order
to denote motion between two or more points in space. In accordance with the second
criterion, that is, syntactic expansion, agreement auxiliaries again express grammar in
a physically visible form in sign languages. ‘Syntax’ is a Greek word with the original
meaning of ‘grouping entities in an order’, and agreement auxiliaries in sign languages,
when syntactically expanded, visibly do exactly this, that is, on the very physical level
of articulation, they perform motor-visual links between points in the sign space in
front of the signer’s body, a peculiarity unique to sign languages. In sign languages,
syntactic expansion of the use of an agreement auxiliary is then realized as maximiza-
tion of path tracings in the sign space, with minimal movement constraints. Moreover,
indexical agreement auxiliaries in sign languages originate from reference points in
space and are linked to the concept of transfer in real space. Overall, however, the use
of indexical auxiliaries is functional to a higher degree than that of auxiliaries that
originate from verbs or nouns in sign languages.
On the other hand, lexical origins still remain transparent in the respective non-
indexical auxiliaries. Auxiliaries derived from verbs (such as give, see, meet) comply
with universals of grammaticalization. But even in the case of pam, which derives from
the noun person, the handshape and movement of the underlying noun provide an
essential phonological and semantic basis for the grammaticalization of the agreement
auxiliary (Rathmann/Mathur 2002; Steinbach/Pfau 2007).
Cross-linguistically, it is typical for verbs denoting transfer (such as give or send) to
grammaticalize into markers of transitivity. Interestingly, typologically different lan-
guages like Mandarin Chinese (an isolating language) and GSL and VGT use auxilia-
ries drawn from a lexical verb meaning ‘give’ (Ziegeler 2000; Van Herreweghe/Ver-
meerbergen 2004; Sapountzaki 2004, 2005). Moreover, the use of a main verb meaning
‘go to’ as the source for the NGT auxiliary act-on and the use of a verb meaning
‘meet’ as the source of TSL aux-11 can be explained if we apply the Motion Schema
proposed as a general (modality-independent) schema for the grammaticalization of
auxiliaries (Heine 1993). By contrast, there is no direct equivalent in spoken languages
that corresponds to the use of see as the source of the TSL auxiliary aux-2. Steinbach
and Pfau (2007) point out that the verb see could be included in the category of “a
proposition involving mental process or utterance verbs such as ‘think’, ‘say’, etc.”
(Heine 1993, 35). The following example from Tonga, a Bantu language of Zambia,
illustrates the grammaticalization of an auxiliary from a mental state verb in spoken
languages. In (10), the verb yeeya (‘to think’) has developed into an auxiliary marking
future tense (Collins 1962; cited in Heine 1993, 35).

(10) Joni u-yeeya ku-fwa [Tonga]


John 3.sg-think inf-die
‘John is about to die.’ (or: ‘John will die.’)

It should come as no surprise that in sign languages, whose users perceive language
visually, verbs like see are linked to mental events in a more direct way than in spoken
languages. Thus, see may be used in sign languages within the mental process event
schema as an optimal source for the grammaticalization of auxiliaries. Moreover, the
TSL verb see belongs to the group of agreement verbs and can therefore more readily
grammaticalize into an agreement auxiliary. Note finally that in most sign languages,
mental state verbs are usually body-anchored plain verbs, articulated on or close to
the (fore)head. Consequently, typical mental process verbs such as think are less
readily available as agreement-carrying auxiliaries.

5.3. Frequency of occurrence as a criterion of grammaticalization

The issues of syntactic expansion and of use in different syntactic environments are
linked to the issue of frequency of use of auxiliaries. One can hypothesize that agree-
ment marking by free functional morphemes in sign languages may not be as developed
as in the domains of aspect and modality. According to cross-linguistic evidence on
auxiliaries, aspectual auxiliaries are the most frequent and thus the most developed
auxiliaries, whereas agreement auxiliaries are the least frequent and thus the least
developed ones ⫺ and also the ones with the lowest degree of grammaticalization in
a wide sample of spoken languages examined by Steele (1981). The examples discussed
in this chapter show, however, that agreement auxiliaries are used abundantly in sign
languages and that in many different sign languages, agreement auxiliaries are already
highly grammaticalized functional elements. The following table sums up the properties
and distribution of agreement auxiliaries in the sign languages discussed in this chapter
(this is an extended version of a table provided in Steinbach/Pfau 2007).

Tab. 10.1: Properties of agreement auxiliaries across sign languages


       N  source        aspectual  double  reciprocal  sentence
                        marking    agr?    marking?    position
LSA    1  pronouns      on verb    yes     ??          sf^a > prv
LSB    1  pronouns      on verb    no      ??          sf > prv
LSC    2  pronouns      on verb    ??      ??          ??
          noun person   on verb    ??      ??          ??
DGS    1  noun person   on verb    yes     yes (1H)    sf^a (prv)
VGT    1  verb give     ??         ??      ??          prv
GSL    2  pronouns      on verb    no      ??          sf, prv
          verb give     on aux     no      ??          prv
IPSL   1  pronouns      on verb    yes     yes (2H)    sf^a > si
NS     1  pronouns      on verb    no      ??          sf > prv, si
NGT    1  verb go-to    on verb    yes     yes (1H)    sf^a
TSL    3  pronouns      on verb    yes     yes (2H)    prv, si
          verb see      on verb    yes     yes (2H)    prv, si
          verb meet     on verb    yes     yes (2H)    prv, si

Abbreviations used in Tab. 10.1: si = sentence-initial, sf = sentence-final, prv = pre-verbal, > means
“more frequent than”, 1H = one-handed, 2H = two-handed, ‘??’ indicates that no information is
available.
^a Some signs, such as wh-signs, manual negation, or aspectual markers may follow the auxiliary.

6. Conclusion
Many different sign languages across the world make use of agreement auxiliaries.
These auxiliaries share many properties in terms of their phonological form, syntactic
distribution, lexical sources, and indirect gestural origins. However, some degree of
variation between agreement auxiliaries in different sign languages is also attested, as
would be expected in any sample of unrelated, natural languages. Based on these find-
ings, future research with a wider cross-linguistic scope might deepen our understand-
ing of common properties of auxiliaries in sign languages in particular (thereby includ-
ing wider samples of still unresearched sign languages), as well as of similarities and
differences between sign and spoken languages in general, thus shedding more light on
the cognitive forces of grammaticalization and auxiliation in sign and spoken languages.

7. Literature
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy
2004 Morphological Universals and the Sign Language Type. In: Booij, Geert/Marle, Jaap
van (eds.), Yearbook of Morphology. Dordrecht: Kluwer, 19⫺40.
Bos, Heleen
1994 An Auxiliary Verb in Sign Language of the Netherlands. In: Ahlgren, Inger/Bergman,
Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure. Papers from the
Fifth International Symposium on Sign Language Research. Vol. 1. Durham: ISLA,
37⫺53.
Boyes Braem, Penny/Sutton-Spence, Rachel (eds.)
2001 The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Language.
Hamburg: Signum.
Bybee, Joan/Perkins, Revere/Pagliuca, William
1994 The Evolution of Grammar: Tense, Aspect and Modality in the Languages of the World.
Chicago: University of Chicago Press.
Comrie, Bernard
1981 Language Universals and Linguistic Typology: Syntax and Morphology. Oxford: Black-
well.
Engberg-Pedersen, Elisabeth
1993 The Ubiquitous Point. In: Signpost 6(2), 2⫺8.
Fischer, Susan D.
1992 Agreement in Japanese Sign Language. Paper Presented at the Annual Meeting of the
Linguistic Society of America, Los Angeles.
Fischer, Susan D.
1993 Auxiliary Structures Carrying Agreement. Paper Presented at the Workshop Phonology
and Morphology of Sign Language, Amsterdam and Leiden. [Summary in: Hulst, Harry
van der (1994), Workshop Report: Further Details of the Phonology and Morphology
of Sign Language Workshop. In: Signpost 7, 72.]
Fischer, Susan D.
1996 The Role of Agreement and Auxiliaries in Sign Languages. In: Lingua 98, 103⫺119.
Heine, Bernd
1993 Auxiliaries: Cognitive Forces and Grammaticalization. Oxford: Oxford University Press.
Heine, Bernd/Claudi, Ulrike/Hünnemeyer, Friederike
1997 From Cognition to Grammar: Evidence from African Languages. In: Givon, Talmy
(ed.), Grammatical Relations: A Functionalist Perspective. Amsterdam: Benjamins,
149⫺188.
Heine, Bernd/Kuteva, Tania
2002 On the Evolution of Grammatical Forms. In: Wray, Alison (ed.), The Transition to
Language. Studies in the Evolution of Language. Oxford: Oxford University Press,
376⫺397.
Hopper, Paul/Traugott, Elizabeth
1993 Grammaticalization. Cambridge: Cambridge University Press.
Janzen, Terry/Shaffer, Barbara
2002 Gesture as the Substrate in the Process of ASL Grammaticization. In: Meier, Richard
P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and
Spoken Languages. Cambridge: Cambridge University Press, 199⫺223.
Keller, Jörg
1998 Aspekte der Raumnutzung in der Deutschen Gebärdensprache. Hamburg: Signum.
Massone, Maria Ignacia
1993 Auxiliary Verbs in LSA. Paper Presented at the 2nd Latin-American Congress on Sign
Language and Bilingualism, Rio de Janeiro.
Massone, Maria Ignacia
1994 Some Distinctions of Tense and Modality in Argentine Sign Language. In: Ahlgren,
Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure.
Durham: ISLA, 121⫺130.
Massone, Maria Ignacia/Curiel, Monica
2004 Sign Order in Argentine Sign Language. In: Sign Language Studies 5(1), 63⫺93.
Morgan, Gary/Barrière, Isabelle/Woll, Bencie
2003 First Verbs in British Sign Language Development. In: Working Papers in Language
and Communication Science 2, 57⫺66.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G.
2000 The Syntax of American Sign Language. Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Padden, Carol
1988 The Interaction of Morphology and Syntax in American Sign Language. New York:
Garland Publishing.
Pfau, Roland
2011 A Point Well Taken: On the Typology and Diachrony of Pointing. In: Napoli, Donna
Jo/Mathur, Gaurav (eds.), Deaf Around the World. The Impact of Language. Oxford:
Oxford University Press, 144⫺163.
Pfau, Roland/Salzmann, Martin/Steinbach, Markus
2011 A Non-hybrid Approach to Sign Language Agreement. Paper Presented at the 1st For-
mal and Experimental Advances in Sign Languages Theory (FEAST), Venice.
Pfau, Roland/Steinbach, Markus
2003 Optimal Reciprocals in German Sign Language. In: Sign Language & Linguistics 6,
3⫺42.
Pfau, Roland/Steinbach, Markus
2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 5⫺98. [Available at http://www.ling.uni-
potsdam.de/lip/]
Pfau, Roland/Steinbach, Markus
2011 Grammaticalization in Sign Languages. In: Heine, Bernd/Narrog, Heiko (eds.), Hand-
book of Grammaticalization. Oxford: Oxford University Press, 681⫺693.
Quadros, Ronice M. de/Lillo-Martin, Diane/Pichler, Chen
2004 Clause Structure in LSB and ASL. Paper Presented at the 26. Jahrestagung der Deut-
schen Gesellschaft für Sprachwissenschaft, Mainz.
Quadros, Ronice M. de/Quer, Josep
2008 Back to Back(wards) and Moving on: On Agreement, Auxiliaries and Verb Classes. In:
Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past,
Present, and Future. Forty-five Papers and Three Posters from the 9th Theoretical Issues
in Sign Language Research Conference, Florianopolis, Brazil, December 2006. Petrópo-
lis: Editora Arara Azul. [Available at: www.editora-arara-azul.com.br/Estudos
Surdos.php]
Quer, Josep
2006 Crosslinguistic Research and Particular Grammars: A Case Study on Auxiliary Predi-
cates in Catalan Sign Language (LSC). Paper Presented at the Workshop on Cross-
linguistic Sign Language Research, Max Planck Institute for Psycholinguistics, Nij-
megen.
Rathmann, Christian
2001 The Optionality of Agreement Phrase: Evidence from Signed Languages. MA Thesis,
The University of Texas at Austin.
Rathmann, Christian/Mathur, Gaurav
2002 Is Verb Agreement the Same Cross-modally? In: Meier, Richard/Cormier, Kearsy/
Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages.
Cambridge: Cambridge University Press, 370⫺404.
Sapountzaki, Galini
2004 Free Markers of Tense, Aspect, Modality and Agreement in Greek Sign Language
(GSL): The Role of Language Contact and Grammaticisation. Paper Presented at the
ESF Workshop Modality Effects on The Theory of Grammar: A Cross-linguistic View
from Sign Languages of Europe, Barcelona.
Sapountzaki, Galini
2005 Free Functional Markers of Tense, Aspect, Modality and Agreement as Possible Auxilia-
ries in Greek Sign Language. PhD Dissertation, Centre of Deaf Studies, University
of Bristol.
Slobin, Dan/Hoiting, Nini
2001 Typological and Modality Constraints on Borrowing: Examples from the Sign Language
of the Netherlands. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages: A
Cross-linguistic Investigation in Word Formation. Mahwah, NJ: Erlbaum, 121⫺137.
Smith, Wayne
1989 The Morphological Characteristics of Verbs in Taiwan Sign Language. PhD Dissertation,
Ann Arbor.
Smith, Wayne
1990 Evidence for Auxiliaries in Taiwan Sign Language. In: Fischer, Susan/Siple, Patricia
(eds.), Theoretical Issues in Sign Language Research. Vol. 1: Linguistics. Chicago: Uni-
versity of Chicago Press, 211⫺228.
Steele, Susan
1981 An Encyclopedia of AUX: A Study In Cross-linguistic Equivalence. Cambridge: MIT
Press.
Steinbach, Markus
2011 What Do Agreement Auxiliaries Reveal About the Grammar of Sign Language Agree-
ment? In: Theoretical Linguistics 37, 209⫺221.
Steinbach, Markus/Pfau, Roland
2007 Grammaticalization of Auxiliaries in Sign Languages. In: Perniss, Pamela/Pfau, Roland/
Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language
Structure. Berlin: Mouton de Gruyter, 303⫺339.
Thompson, Robin/Emmorey, Karen/Kluender, Robert
2006 The Relationship Between Eye Gaze and Verb Agreement in American Sign Language:
An Eye-tracking Study. In: Natural Language and Linguistic Theory 24, 571⫺604.
Traugott, Elizabeth/Heine, Bernd (eds.)
1991 Approaches to Grammaticalization. Vol. 1: Focus on Theoretical and Methodological Is-
sues. Amsterdam: Benjamins.
Van Herreweghe, Mieke/Vermeerbergen, Myriam
2004 The Semantics and Grammatical Status of Three Different Realizations of geven (give):
Directional Verb, Polymorphemic Construction, and Auxiliary/Preposition/Light Verb.
Poster Presented at the 8th International Conference on Theoretical Issues in Sign Lan-
guage Research (TISLR 8), Barcelona.
Wilcox, Sherman
2002 The Gesture-language Interface: Evidence from Signed Languages. In: Schulmeister,
Rolf/Reinitzer, Heimo (eds.), Progress in Sign Language Research. In Honor of Sieg-
mund Prillwitz. Hamburg: Signum, 63⫺81.
Zeshan, Ulrike
2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Ben-
jamins.
Ziegeler, Debra
2000 A Possession-based Analysis of the ba-construction in Mandarin Chinese. In: Lingua
110, 807⫺842.

Galini Sapountzaki, Volos (Greece)

11. Pronouns
1. Pronouns in spoken languages and sign languages
2. Personal pronouns
3. Proforms
4. Conclusion
5. Literature

Abstract
The term ‘pronoun’ has been used with spoken languages to refer not only to personal
pronouns ⫺ i.e. those grammatical items that ‘stand for’ nouns or noun phrases ⫺
but also to ‘proforms’, including words such as demonstratives, indefinites, interrogative
pronouns, relative pronouns, etc. In sign languages, pronominal systems have been iden-
tified at least as far back as the mid-1970s (e.g., Friedman 1975 for American Sign
Language). Since then, the term ‘pronoun’ has been widely used to refer to signs in
various sign languages which have the function of personal pronouns ⫺ that is, deictic/
pointing signs which refer to signer, addressee, and non-addressed participants. As with
spoken languages, the term has also been extended to refer to proforms such as indefi-
nites, interrogatives, and relative pronouns. This chapter describes personal pronouns
and proforms in sign languages, their relationships (or possible relationships) to each
other, and how these relationships compare to pronouns/proforms in spoken languages.

1. Pronouns in spoken languages and sign languages


The traditional definition of a pronoun is that it ‘stands for’ or ‘takes the place of’ a
noun (or more specifically, noun phrase) (Bhat 2004). However, the term ‘pronoun’
has been used traditionally to refer to various types of words in spoken languages,
including not only personal pronouns but also words such as demonstratives, indefi-
nites, interrogative pronouns, relative pronouns, etc. Some of these fit the traditional
definition better than others. Interrogatives, demonstratives, indefinites, and relative
pronouns, for instance, can stand for lexical categories other than nouns. Also, while
these latter examples do have various deictic and/or anaphoric uses, they ‘stand for’
nouns/noun phrases much less clearly than personal pronouns do. For this reason, Bhat
(2004) refers to non-personal pronouns such as demonstratives, indefinites, reflexives,
and interrogatives collectively as ‘proforms’.
Various types of personal pronouns and proforms are related to each other in differ-
ent ways. Some types of proforms are phonologically identical to other types (e.g.
relative pronouns and demonstrative pronouns in some languages; indefinite pronouns
and interrogative pronouns in others), and the affinities vary across languages (Bhat
2004).
Pronominal systems have been identified in sign languages such as American Sign
Language (ASL) at least as far back as the mid-1970s (Friedman 1975). Since then,
the term ‘pronoun’ has been widely used to refer to signs in various sign languages
which have the function of personal pronouns ⫺ that is, deictic/pointing signs which
refer to signer, addressee, and non-addressed participants. As with spoken languages,
the term has also been extended to refer to other categories such as indefinites, inter-
rogatives, and relative pronouns. Here, I follow the terminology used by Bhat (2004)
in distinguishing personal pronouns referring to speech act participants from proforms
(including indefinites, interrogatives, and relative pronouns), with the term ‘pronoun’
as a superordinate category subsuming both personal pronouns and proforms. Thus in
this chapter, the term proform is used to refer to pronouns other than personal pro-
nouns, including reflexive pronouns, relative pronouns, reciprocal pronouns, indefi-
nites, interrogatives, and demonstratives.
As with spoken languages, affinities can be found with pronouns and proforms in
sign languages as well. In particular, in many sign languages, the singular non-first
person personal pronoun (a pointing sign) is phonologically identical to many proforms
(e.g. demonstratives and relative pronouns). Additionally, it is also possible for pointing
signs to have other non-pronominal functions, such as determiners and adverbials
(Edge/Herrmann 1977; Zimmer/Patschke 1990). Thus one characteristic that pointing
signs tend to share within and across sign languages is a general deictic, not just pro-
nominal, function.
This chapter begins with personal pronouns then moves on to proforms such as
indefinites, demonstratives, interrogative pronouns, and relative pronouns. Examples
in this chapter (which include productions of fluent native and non-native British Sign
Language (BSL) signers from elicited narrative descriptions of cartoons/animations)
will focus largely on two sign languages for which pronouns have been fairly well
described: BSL and ASL. Data from some other sign languages is included where
information from the literature is available.

2. Personal pronouns
Personal pronouns in sign languages generally take the form of pointing signs, which
are then directed towards present referents or locations in the signing space associated
with absent referents, as shown in Figures 11.1 and 11.2, or towards the signer him/
herself, as in Figure 11.3. First person pronouns in sign languages are directed inwards,
usually towards the signer’s chest. However, there are exceptions to this, e.g. first per-
son pronouns in Japanese Sign Language (NS) and Plains Indian Sign Language can
be directed towards the signer’s nose (Farnell 1995; McBurney 2002).

Fig. 11.1: index3a ‘she’   Fig. 11.2: index2 ‘you’   Fig. 11.3: index1 ‘me’
In most sign languages, the space around the signer is used for the establish-
ment and maintenance of pronominal (as well as other types of) reference throughout
a discourse. However, there is evidence that the use of the signing space for pronominal
reference may not be universal amongst sign languages. Marsaja (2008) notes that Kata
Kolok, a village sign language used in Bali, Indonesia, prefers use of pointing to fingers
on the non-dominant hand ⫺ i.e. ‘list buoys’ (Liddell 2003) ⫺ rather than to locations
in space for reference. Also, Cambodian Sign Language appears to prefer full noun
phrases over pronouns, an influence from politeness strategies in Khmer (Schembri,
personal communication).
In addition to pronouns, other means of establishing and maintaining spatial loci in
a discourse include agreement/indicating verbs (see chapter 7 on verb agreement) and
in some sign languages, agreement auxiliaries (see chapter 10 on agreement auxilia-
ries). Both of these devices have been considered to be grammaticised forms of pro-
nominalisation or spatial loci (Pfau/Steinbach 2006).
If the referent is present, the signer uses a pronoun or other agreement/indicating
device to point to the location of the referent. If the referent is not present, the signer
may establish a point in space for the referent, which could be motivated in some way
(e.g. pointing towards a chair where a person usually sits) or could be arbitrary. Once
a location in space for a referent has been established, that same location can be
referred to again and again unambiguously with any of these devices until it is actively
changed, as in the example from BSL in (1) below. For more on the use of signing
space in sign languages, see chapter 19.

(1) sister index3a upset. index1 1ask3a what. index3a lose bag. [BSL]
sister there upset. I I-ask-her what. She lost bag.
‘My sister was upset. I asked her what was wrong. She had lost her bag.’

2.1. Person
The issue of person in sign languages is controversial. Traditionally sign language re-
searchers assumed the spatial modification of personal pronouns to be part of a three-
person system analogous to those found in spoken languages (Friedman 1975; Klima/
Bellugi 1979; Padden 1983). According to these analyses, pronouns which point to the
signer are first person forms, those which point to the addressee(s) are second person
forms, and those which point to non-addressed participant(s) are third person forms.
A three-person system for sign languages could be considered problematic, however,
because there is no listable set of location values in the signing space to which a non-
first person pronoun may point, for addressee or non-addressed participants. To ad-
dress this issue, some researchers such as Lillo-Martin and Klima (1990) and McBurney
(2002) proposed that sign languages like ASL have no person distinctions at all. Liddell
(2003) has taken this idea a step further by claiming that sign language pronouns simply
point to their referents gesturally. For Liddell, sign language pronouns are the result
of a fusion of linguistic elements (phonologically specified parameters such as hand-
shape and movement) and gestural elements (specifically the directionality of these
signs). However, a gestural account of directionality alone does not explain first person
behaviours, particularly with first person plurals, which do not necessarily point to their
referents. This is part of the basis for Meier’s (1990) argument for a distinct first person
category in ASL.
Meier (1990) has argued for a two-person system for ASL ⫺ specifically, first person
vs. non-first person. Meier claims that the use of space to refer to addressee and non-
addressed participants is fully gradient rather than categorical, i.e. that loci towards
which these pronouns point are not listable morphemes, similarly to Lillo-Martin and
Klima (1990), McBurney (2002), and Liddell (2003). But the situation with first person
pronouns, Meier argues, is different. There is a single location associated with first
person (in BSL and ASL, the centre of the signer’s chest). Furthermore, this location
is not restricted to purely indexic reference, i.e. a point to the first person locus does
not necessarily only refer to the signer. First person plurals in BSL and ASL, as shown
in Figures 11.4 and 11.5, point primarily towards the chest area although they neces-
sarily include referents other than just the signer. Furthermore, during constructed
dialogue (a discourse strategy used for direct quotation ⫺ see Earis (2008) and chap-
ter 17 on utterance reports and constructed action), a point toward the first person
locus refers to the person whose role the signer is assuming, not the signer him/herself.
Similarly, Nilsson (2004) found that in Swedish Sign Language, a point to the chest can
be used to refer to the referent not only in representation of utterances but also of
thoughts and actions. It is unclear whether or to what extent these patterns differ from
gestural uses of pointing to the self in non-signers.

Fig. 11.4: BSL we Fig. 11.5: ASL we


Meier’s (1990) analysis recognises the ‘listability problem’ (Rathmann/Mathur 2002;
see also chapter 7 on verb agreement) of multiple second/third person location values
while at the same time recognising the special status of first person, for which there is
only one specified location within a given sign language (e.g. the signer’s chest). The
first person locus is so stable that it can carry first person information virtually alone,
i.e. even when the 1-handshape is lost through phonological processes. Studies on
handshape variation in ASL (Lucas/Bayley 2005) and BSL (Schembri/Fenlon/Rentelis
2009) have found that the 1-handshape is used significantly less often (e.g. due to
assimilation) with first person pronouns than with non-first person pronouns. Other
evidence for a distinct grammatical category for first person comes from first person
plural forms. Non-first person pronouns point to the location(s) of each of their refer-
ent(s), while first person plurals generally only point, if anywhere, to the location of
the signer (Cormier 2005, 2007; Meier 1990). Two-person systems have been assumed
by other researchers for ASL and other sign languages (e.g., Emmorey 2002; Engberg-
Pedersen 1993; Farris 1998; Lillo-Martin 2002; Padden 1990; Rathmann/Mathur 2002;
Todd 2009), including Liddell (2003) who presumably sees a two-person (first vs. non-
first) system as compatible with the notion that non-first person pronouns point to
their referents gesturally.
However, not all researchers subscribe to a two-person system. Berenz (2002) and Ali-
basic Ciciliani and Wilbur (2006) support the notion of a three-person system for Brazil-
ian Sign Language (LSB) and Croatian Sign Language (HZJ), respectively, as well as
ASL. They argue that, while the spatial locations to which addressee-directed and non-
addressee-directed pronouns are directed may be exactly the same, there are other cues
that do reliably distinguish second from third person. These cues include the relationship
between the direction of the signer’s eye gaze and the orientation of the head, chest, and
hand. For second person reference, these four articulators typically align (assuming the
signer and addressee are directly facing each other); for third person reference, the direc-
tion in which the hand points is misaligned with the other three articulators.
Based on their analyses of LSB and HZJ, Berenz (2002) and Alibasic Ciciliani and
Wilbur (2006) argue for a three-person system for these sign languages, and for ASL,
based on a systematic distinction between reference to second versus third persons.
However, in an eye-tracking study Thompson (2006) found no systematic difference
in eye gaze between reference to addressees and reference to non-addressed partici-
pants in ASL. Even if eye gaze behaviours are more systematic in LSB and HZJ than
in ASL, it is not clear what would make this distinction grammatical, as similar patterns
of alignment and misalignment of eye gaze, torso orientation, and pointing are found
in hearing non-signers when they gesture (Kita 2003). More research on pronominal
systems of other sign languages and deictic gestures as used by non-signers, particularly
reference in plural contexts, would help further clarify the role of person in sign lan-
guages (Johnston, in press).

2.2. Number

Number marking on pronouns is somewhat more straightforward than person. Sign lan-
guages generally distinguish singular, dual, and plural forms. Singular and dual pronouns
index (point to) their referent(s) more or less directly, singular pronouns with a simple

Fig. 11.6: BSL two-of-us Fig. 11.7: BSL they

Fig. 11.8: BSL they-comp

point to a location and dual forms with a V-handshape (or some variant with the index and middle fingers extended) which oscillates back and forth between the two locations being indexed (see Figure 11.6 for the first person dual pronoun two-of-us in BSL). Many
sign languages additionally have so-called ‘number-incorporated pronouns’. BSL and
ASL have pronouns which incorporate numerals and indicate three, four and (for some
signers in BSL) five referents (McBurney 2002; Sutton-Spence/Woll 1999). For ASL,
some signers accept up to nine. This limit appears to be due to phonological constraints;
most versions of the numbers 10 and above in ASL include a particular phonological
movement which blocks number incorporation (McBurney 2002). Plural pronouns and
number-incorporated pronouns index their referents more generally than singular or
dual forms (Cormier 2007). Plural forms usually take the form of a 1-handshape with a
sweeping movement across the locations associated with the referents (as shown in Figure 11.7 they) or with a distributed pointing motion towards multiple locations (see Figure 11.8 for they-comp, a non-first person composite plural form). These
forms have been identified in various sign languages (McBurney 2002; Zeshan 2000).
Number-incorporated pronouns typically have a handshape of the numeral within that
sign language and a small circular movement in the general location associated with the
group of referents. Number-incorporated plurals have been identified in many sign lan-
guages, although some (such as Indopakistani Sign Language, IPSL) appear not to have
them (McBurney 2002).
McBurney (2002) argues that ASL grammatically marks number for dual but not in
the number-incorporated pronouns. She points out that number marking for dual is oblig-
atory while the use of number-incorporation appears to be an optional alternative to plu-
ral marking. For more on number and plural marking in sign languages, see chapter 6.

2.3. Exclusive pronouns

Further evidence for a distinction between singulars/duals which index their referents
directly and plurals/number-incorporated forms which index their referents less (or not
at all) comes from exclusive pronouns in BSL and ASL (Cormier 2005, 2007). These stud-
ies aimed to investigate whether BSL and ASL have an inclusive/exclusive distinction in
the first person plural, similar to the inclusive/exclusive distinction common in many spo-
ken languages (particularly indigenous languages of the Americas, Australia and Ocea-
nia, cf. Nichols 1992), whereby first person plurals can either include the addressee (‘in-
clusive’) or exclude the addressee (‘exclusive’). In languages which lack an inclusive/
exclusive distinction, first person plurals are neutral with regard to whether or not the
addressee is included (e.g. ‘we/us’ in English). Both BSL and ASL were found to have
first person plurals (specifically plurals and number-incorporated pronouns) that are
neutral with respect to clusivity, just like English. These forms are produced at the centre
of the signer’s chest, as shown above in Figures 11.4 and 11.5. However, these forms can
be made exclusive by changing the location of the pronoun from the centre of the signer’s
chest to the signer’s left or right side. These exclusive forms are different from exclusive
pronouns in spoken languages because they may exclude any referent salient in the dis-
course, not only the addressee.
Wilbur and Patschke (1998) and Alibasic Ciciliani and Wilbur (2006) discuss what they
refer to as ‘inclusive’ and ‘exclusive’ pronouns in ASL and HZJ. However, based on the
descriptions, these forms seem to actually be first person and non-first person plurals,
respectively ⫺ i.e. inclusive/exclusive of the signer ⫺ rather than inclusive/exclusive of
the addressee or other salient referent as in spoken languages and as identified in BSL
and ASL (Cormier 2005, 2007).

2.4. Possessive pronouns

Possessive pronouns in sign languages described to date are directional in the same way
that non-possessive personal pronouns are. They usually have a handshape distinct from
the pointing 1-handshape used in other personal pronouns ⫺ e.g. a flat B-handshape with palm directed toward the referent in sign languages such as ASL, HZJ, Austrian Sign Language (ÖGS), Finnish Sign Language (FinSL), Danish Sign Language (DSL), and Hong Kong Sign Language (HKSL) (Alibasic Ciciliani/Wilbur 2006; Pichler et al. 2008; Tang/Sze 2002), and a closed-fist A-handshape in the British, Australian, and New Zealand Sign Language family (BANZSL) (Cormier/Fenlon 2009; Sutton-Spence/Woll 1999). Although BSL does use the fist handshape in most cases, the 1-handshape may also be used for inalienable possession (Cormier/Fenlon 2009; Sutton-Spence/Woll 1999). In HKSL, the B-handshape for possession is restricted to predicative possession. Nominal possession (with or without overt possessor) is expressed via a 1-handshape instead (Tang/Sze
2002). Possessive pronouns, in BSL and ASL at least, are marked for person and number
in the same way that non-possessive personal pronouns are (Cormier/Fenlon 2009).

2.5. Gender and case

It is not common for sign language pronouns to be marked for gender, but examples have
been described in the literature. Fischer (1996) and Smith (1990) note gender marking
for pronouns and on classifier constructions in NS and Taiwan Sign Language (TSL).
They claim that pronouns and some classifiers are marked for masculine and feminine via
a change in handshape. However, there are questions about the degree to which gender marking is obligatory (or even the degree to which it occurs with pronouns at all) within the pronominal systems of these languages; McBurney (2002) suggests that this marking may be a productive (optional) morphological process rather than obligatory grammatical gender marking.
Case marking on nouns or pronouns in sign languages is also not very common. Gram-
matical relations between arguments tend to be marked either by the verb, by word or-
der, or are not marked and only recoverable via pragmatic context. However, Meir (2003)
describes the emergence of a case-marked pronoun in Israeli Sign Language (Israeli SL).
This pronoun, she argues, has been grammaticised from the noun person and currently
functions as an object-marked pronoun. This case-marked pronoun exists alongside the more typical pointing pronoun, which is unmarked for case and is used in a variety of grammatical relations (subject, object, etc.), just as in other sign languages.

3. Proforms
Somewhat confusingly, the term ‘proform’ or ‘pro-form’ has been used to refer to a vari-
ety of different features and constructions in sign languages, including the location to
which a personal pronoun or other directional sign points (Edge/Herrmann 1977; Fried-
man 1975); the (personal) pronominal pointing sign itself (Hoffmeister 1978); a pointing
sign distinct from a personal pronoun, usually made with the non-dominant hand, which
is used to express spatial information (Engberg-Pedersen 1993); an alternative label for
handshapes in classifier constructions (Engberg-Pedersen/Pedersen 1985); and finally as
a superordinate term to cover both personal pronouns and classifier constructions which
refer to or stand for something previously identified (Chang/Su/Tai 2005; Sutton-Spence/
Woll 1999). As noted above, following Bhat (2004), the term proform is used here to refer
to pronouns other than personal pronouns, including reflexive pronouns, relative pro-
nouns, reciprocal pronouns, indefinites, interrogatives, and demonstratives.

3.1. Reflexive and emphatic pronouns

There is a class of sign language proforms that has been labelled as reflexive and is often
glossed in its singular form as self. This pronoun can be marked for person (first and non-
first) and number (singular and plural) in BSL and ASL and is directional in the same
way that other personal pronouns are, as shown in Figures 11.9 and 11.10. These pro-
nouns function primarily as emphatic pronouns in ASL (Lee et al. 1997; Liddell 2003),
and seem to function the same way in BSL. Examples from BSL and ASL (Padden 1983,
134) are given in (2) and (3).

Fig. 11.9: BSL self3a Fig. 11.10: ASL self3a

(2) gromit3a play poss3a toy drill. drill++. stuck. self3a spin-around [BSL]
‘Gromit was playing with a toy drill. He was drilling. The drill got
stuck, and he himself spun around.’
(3) sister self telephone c-o [ASL]
‘My sister will call the company herself.’

3.2. Indefinite pronouns

Indefinite pronouns in some spoken languages appear to have been grammaticalised from generic nouns such as ‘person’ or ‘thing’, and/or from the numeral ‘one’ (Haspelmath 1997). This pattern is also found in some sign languages.
The indefinite animate pronoun someone in BSL has the same handshape and orien-
tation as the BSL numeral one and the BSL classifier for person or animate entity, with
an additional slight tremoring movement, as in Figure 11.11 and in (4) below. (The sign
someone is also identical in form with the interrogative pronoun who, as noted in section
3.4 below). Inanimate indefinites in BSL may be the same as the sign some as in Figure
11.12 and in (5), or the sign thing (Brien 1992).

Fig. 11.11: BSL someone/ASL something/one Fig. 11.12: BSL something(=some)

(4) road bicycle someone cl:sit-on-bicycle nothing. [BSL]
‘On a road there is a bicycle with nobody sitting on it.’
(5) something(=some) road, something(=some) low
‘There is something on the road, something low down close to the road.’

Neidle et al. (2000) describe the ASL indefinite pronoun something/one, which is the
same as the indefinite animate pronoun in BSL, as in Figure 11.11 above and in (6). As in
BSL, the ASL indefinite pronoun shares the same handshape and orientation as the ASL
numeral one and the ASL classifier for person or animate entity (Neidle et al. 2000, 91).

(6) something/one arrive [ASL]
‘Someone/something arrived.’

Pfau and Steinbach (2006) describe the indefinite pronoun in German Sign Language
(DGS) and Sign Language of the Netherlands (NGT) as a grammaticised combination
of the numeral one and the sign person, as in (7) and (8). Pfau and Steinbach point out
that what distinguishes this indefinite form from the phrase one person ‘one person’
is that the indefinite does not necessarily refer to only one person. It could therefore be one or more people that were seen in (7), or one or more people who are expected to
do the dishes in (8) (Pfau/Steinbach 2006, 31).

(7) index1 one^person see [DGS]
‘I’ve seen someone.’
(8) one^person wash-dish do must [NGT]
‘Someone has to wash the dishes.’

3.3. Reciprocal pronouns


Pronouns expressing reciprocal meaning in spoken languages have an interesting rela-
tionship with reflexives and indefinites. Bhat (2004) notes that reciprocal meanings
(such as ‘each other’ in English) tend to be expressed in spoken languages by indefinite
expressions or the numeral ‘one’ (which, used in a pronominal context, would also
have indefinite characteristics). English for example does not derive reciprocals from
personal pronouns but instead from indefinite expressions such as ‘each’, ‘other’, ‘one’,
and ‘another’, as in (9) below. Such affinities between reciprocals and indefinites are
common amongst spoken languages. Reflexives, on the other hand, are inherently ana-
phoric and definite and are therefore semantically quite different from reciprocals
(Bhat 2004). Thus we might expect to see more affinities between reciprocals and
indefinites than between reciprocals and reflexives.

(9) a. The children are helping each other.
b. The girls looked at one another.

However, reciprocal pronouns in BSL and ASL seem to be more closely related to
reflexives than to indefinites. The reciprocal and reflexive pronouns in BSL and ASL

Fig. 11.13: BSL each-other Fig. 11.14: ASL each-other

share more formational features than the reciprocal and indefinite pronouns. Thus for
BSL, Figure 11.13 each-other is more similar to Figure 11.9 self than it is to Fig-
ures 11.11 someone or 11.12 something. For ASL, Figure 11.14 each-other is (much)
more similar to Figure 11.10 self than to Figure 11.11 something/one.
It is interesting that reciprocals seem to align themselves more with indefinites in
spoken languages but with reflexives in BSL and ASL; however, the reason for this
apparent difference is unclear. We do not know enough about reciprocal forms in other
sign languages to know whether or to what extent this affinity between reciprocals and
reflexives holds or varies across sign languages.
Reciprocal pronouns are not the only way of expressing reciprocal relationships in
sign languages. Agreement verbs in several sign languages allow reciprocal marking di-
rectly (Fischer/Gough 1980; Klima/Bellugi 1979; Pfau/Steinbach 2003). Pfau and Stein-
bach (2003) claim that DGS does not have reciprocal pronouns at all but expresses reci-
procity in other ways, including via reciprocal marking on agreement verbs or on person
agreement markers. It may be that sign languages that have person agreement markers
(see chapter 10) such as DGS have less need for a reciprocal pronoun than sign languages
which do not have person agreement markers such as ASL and BSL.

3.4. Interrogative pronouns

Most sign languages have some pronouns which have an interrogative function, e.g.
signs meaning ‘what’ or ‘who’. However, the number of interrogative pronouns across sign languages and the extent to which they differ from non-interrogative signs within each language vary greatly. For example, sign languages such as ASL and BSL have
at least one interrogative pronoun for each of the following concepts: ‘who’, ‘what’,
‘when’, ‘where’, ‘how’ and ‘why’. IPSL, on the other hand, has only one general inter-
rogative sign (Zeshan 2004). The syntactic use of interrogatives and wh-questions in
sign languages is covered in detail in chapter 14 on sentence types.
One issue regarding interrogatives that is relevant for this chapter on pronouns is
the relationship between interrogatives and indefinites. Zeshan (2004) notes that the
same signs which are used for interrogatives in many sign languages have other non-
interrogative functions as well, especially as indefinites. Specifically, NS, FinSL, LSB,
and BANZSL all have interrogative signs which are also used for indefinites. For

instance, in BSL, the same sign shown above in Figure 11.11 is used to mean both
‘someone’ and ‘who’. This is consistent with Bhat’s (2004) observation for spoken lan-
guages that interrogatives and indefinites are strongly linked. If this affinity between
interrogatives and indefinites holds for other sign languages, this would provide evi-
dence that the link between interrogatives and indefinites is modality independent.
More research is needed to determine whether this is the case.

3.5. Demonstrative pronouns

Demonstrative pronouns in spoken languages often distinguish between spatial location, e.g. proximate/remote, or proximate/medial/remote. English, for instance, makes
only a two-way distinction (‘this’ vs. ‘that’). Sign language personal pronouns certainly
can express spatial distinctions, both for animate referents (where the pointing sign
would best be interpreted as ‘he’, ‘she’, ‘you’, ‘they’, etc.) and inanimate referents
(where the pointing sign would best be interpreted as ‘it’, ‘this’, ‘that’, etc.). However,
they do so gradiently and do not appear to have distinct categorical markings for
notions such as proximate or remote. Many sign languages have been noted as having
such an affinity between personal pronouns and demonstratives, including DGS (Pfau/
Steinbach 2005) and Italian Sign Language (LIS) (Branchini 2006).
Although it is very common for demonstrative pronouns in sign languages to be
phonologically identical to personal pronouns, ASL at least has a distinct demonstra-
tive pronoun that (Liddell 1980), as shown in Figure 11.15. (Liddell (1980) actually
describes four variants of the sign shown in Figure 11.15 which differ slightly in form
and function. The version in Figure 11.15 can be used either as a demonstrative or as
a relative pronoun; see also section 3.6, below).

Fig. 11.15: ASL that

3.6. Relative pronouns

Relative clauses have been identified in many sign languages, including ASL (Coulter
1983; Liddell 1980), LIS (Branchini 2006; Cecchetto/Geraci/Zucchi 2006), and DGS
(Pfau/Steinbach 2005) ⫺ see also chapter 16 for a detailed discussion of relative clauses.
Relative clauses are relevant to this chapter in that they often include relative pronouns.

ASL uses a sign glossed as that as a relative pronoun (Coulter 1983; Fischer 1990; Liddell
1980; Petronio 1993), as in (10), cf. Liddell (1980, 148). Pfau and Steinbach (2005) note
that DGS has two different relative pronouns, one for human referents as in (11) and
Figure 11.16a and one for non-human referents as in (12) and Figure 11.16b, cf. Pfau and
Steinbach (2005, 512). A sign similar to the DGS non-human relative pronoun has been
noted for LIS (Branchini 2006; Cecchetto/Geraci/Zucchi 2006). Other sign languages
such as LSB and BSL do not appear to have manual relative pronouns or complementis-
ers at all but instead use word order and prosodic cues such as non-manual features
(Nunes/de Quadros 2004, cited in Pfau/Steinbach 2005).

rc
(10) [[recently dog thata chase cat]S1 ]NP come home [ASL]
‘The dog which recently chased the cat came home.’
re
(11) [man (ix3) [ rpro-h3 cat stroke]CP ]DP [DGS]
‘the man who is stroking the cat’
re
(12) [book [ rpro-nh3 poss1 father read]CP ]DP
‘the book which my father is reading’

Fig. 11.16a: DGS RPRO-H Fig. 11.16b: DGS RPRO-NH

Bhat (2004) notes a common affinity between relative pronouns and demonstratives
in many spoken languages, including English. This also appears to hold for some sign
languages as well. ASL that (as shown above in Figure 11.15) is used both as a demon-
strative and as a relative pronoun (Liddell 1980). Pfau and Steinbach (2005) note that
the DGS relative pronoun used for non-human referents (shown in Figure 11.16b) is
identical in form to the DGS personal and demonstrative pronoun, which is also identi-
cal to the BSL personal pronoun as shown in Figure 11.1. The LIS relative pronoun is
not identical to the LIS personal/demonstrative pronoun, although it does share the
same 1-handshape (Branchini 2006; Cecchetto/Geraci/Zucchi 2006).

4. Conclusion
Like spoken languages, sign languages have many different types of pronoun, including
personal pronouns as well as indefinites, reciprocals, interrogatives, demonstratives,

and relative pronouns. Affinities between different types of pronouns (including both
personal pronouns and proforms) seem to be similar to those found within and across
spoken languages. A major modality effect when it comes to personal pronouns is due
to the use of the signing space for reference, leading to controversies surrounding
person systems and person agreement in sign languages.

Acknowledgements: Thanks to Clifton Langdon-Grigg, Jordan Fenlon, Sandra Smith,


Pascale Maroney, and Claire Moore-Kibbey for acting as models for the example signs
in this chapter. Thanks to Inge Zwitserlood, Adam Schembri, Jordan Fenlon, and
Helen Earis for comments on earlier drafts of this chapter. Thanks also to Gabriel
Arellano for advice on some ASL examples. This work was supported by the Economic
and Social Research Council of Great Britain (Grant RES-620-28-6001), Deafness,
Cognition and Language Research Centre (DCAL).

5. Literature
Alibasic Ciciliani, Tamara/Wilbur, Ronnie B.
2006 Pronominal System in Croatian Sign Language. In: Sign Language & Linguistics
9(1/2), 95⫺132.
Berenz, Norine
2002 Insights into Person Deixis. In: Sign Language & Linguistics 5(2), 203⫺227.
Bhat, D. N. S.
2004 Pronouns. Oxford: Oxford University Press.
Branchini, Chiara
2006 On Relativization and Clefting in Italian Sign Language (LIS). PhD Dissertation, Uni-
versity of Urbino.
Brien, David (ed.)
1992 Dictionary of British Sign Language/English. Boston: Faber & Faber.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro
2006 Strategies of Relativization in Italian Sign Language. In: Natural Language and Linguis-
tic Theory 24, 945⫺975.
Chang, Jung-hsing/Su, Shiou-fen/Tai, James H-Y.
2005 Classifier Predicates Reanalyzed, with Special Reference to Taiwan Sign Language. In:
Language & Linguistics 6(2), 247⫺278.
Cormier, Kearsy
2005 Exclusive Pronouns in American Sign Language. In: Filimonova, Elena (ed.), Clusivity:
Typology and Case Studies of Inclusive-Exclusive Distinction, Amsterdam: Benjamins,
241⫺268.
Cormier, Kearsy
2007 Do All Pronouns Point? Indexicality of First Person Plural Pronouns in BSL and ASL.
In: Perniss, Pamela M./Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Com-
parative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 63⫺101.
Cormier, Kearsy/Fenlon, Jordan
2009 Possession in the Visual-Gestural Modality: How Possession Is Expressed in British
Sign Language. In: McGregor, William (ed.), The Expression of Possession. Berlin:
Mouton de Gruyter, 389⫺422.
Coulter, Geoffrey R.
1983 A Conjoined Analysis of American Sign Language Relative Clauses. In: Discourse
Processes 6, 305⫺318.

Earis, Helen
2008 Point of View in Narrative Discourse: A Comparison of British Sign Language and
Spoken English. PhD Dissertation, University College London.
Edge, VickiLee/Herrmann, Leora
1977 Verbs and the Determination of Subject in American Sign Language. In: Friedman,
Lynn (ed.), On the Other Hand: New Perspectives on American Sign Language. New
York: Academic Press, 137⫺179.
Emmorey, Karen
2002 Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum Associates.
Engberg-Pedersen, Elisabeth
1993 Space in Danish Sign Language. Hamburg: Signum.
Engberg-Pedersen, Elisabeth/Pedersen, Annegrethe
1985 Proforms in Danish Sign Language, Their Use in Figurative Signing. In: Stokoe, Will-
iam/Volterra, Virginia (eds.), Proceedings of the Third International Symposium on Sign
Language Research. Silver Spring, MD: Linstok Press, 202⫺209.
Farnell, Brenda
1995 Do You See What I Mean? Plains Indian Sign Talk and the Embodiment of Action.
Austin: University of Texas Press.
Farris, Michael A.
1998 Models of Person in Sign Languages. In: Lingua Posnaniensis 40, 47⫺59.
Fischer, Susan D.
1990 The Head Parameter in ASL. In: Edmondson, William H./Karlsson, Fred (eds.), SLR ’87
Papers from the Fourth International Symposium on Sign Language Research. Hamburg:
Signum, 75⫺85.
Fischer, Susan D.
1996 The Role of Agreement and Auxiliaries in Sign Language. In: Lingua 98, 103⫺119.
Fischer, Susan D./Gough, Bonnie
1980 Verbs in American Sign Language. In: Stokoe, William (ed.), Sign and Culture: A
Reader for Students of American Sign Language. Silver Spring, MD: Linstok Press,
149⫺179.
Friedman, Lynn
1975 Space and Time Reference in American Sign Language. In: Language 51(4), 940⫺961.
Haspelmath, Martin
1997 Indefinite Pronouns. Oxford: Oxford University Press.
Hoffmeister, Robert
1978 The Development of Demonstrative Pronouns, Locatives and Personal Pronouns in the
Acquisition of ASL by Deaf Children of Deaf Parents. PhD Dissertation, University
of Minnesota.
Johnston, Trevor
in press Functional and Formational Characteristics of Pointing Signs in a Corpus of Auslan
(Australian Sign Language). To appear in: Corpus Linguistics and Linguistic Theory.
Kita, Sotaro
2003 Interplay of Gaze, Hand, Torso Orientation, and Language in Pointing. In: Kita, Sotaro
(ed.), Pointing: Where Language, Culture and Cognition Meet. Mahwah, NJ: Lawrence
Erlbaum Associates, 307⫺328.
Klima, Edward/Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Lee, Robert G./Neidle, Carol/MacLaughlin, Dawn/Bahan, Ben/Kegl, Judy
1997 Role Shift in ASL: A Syntactic Look at Direct Speech. In: Neidle, Carol/MacLaughlin,
Dawn/Lee, Robert G. (eds.), Syntactic Structure and Discourse Function: An Examina-
tion of Two Constructions in American Sign Language. Boston, MA: American Sign
Language Linguistic Research Project, Boston University, 24⫺45.

Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton de Gruyter.
Liddell, Scott K.
2003 Grammar, Gesture and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Lillo-Martin, Diane
2002 Where Are All the Modality Effects? In: Meier, Richard P./Cormier, Kearsy/Quinto-
Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cam-
bridge: Cambridge University Press, 241⫺262.
Lillo-Martin, Diane/Klima, Edward
1990 Pointing out Differences: ASL Pronouns in Syntactic Theory. In: Fischer, Susan D./
Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics.
Chicago: University of Chicago Press, 191⫺210.
Lucas, Ceil/Bayley, Robert
2005 Variation in ASL: The Role of Grammatical Function. In: Sign Language Studies 6(1),
38⫺75.
Marsaja, I. Gede
2008 Desa Kolok ⫺ A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen:
Ishara Press.
McBurney, Susan L.
2002 Pronominal Reference in Signed and Spoken Language: Are Grammatical Categories
Modality-Dependent? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David
(eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge
University Press, 329⫺369.
Meier, Richard P.
1990 Person Deixis in ASL. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical Issues
in Sign Language Research, Vol. 1: Linguistics. Chicago: University of Chicago Press,
175⫺190.
Meir, Irit
2003 Grammaticalization and Modality: The Emergence of a Case-Marked Pronoun in Isra-
eli Sign Language. In: Journal of Linguistics 39, 109⫺140.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert
2000 The Syntax of American Sign Language. Cambridge, MA: MIT Press.
Nichols, Johanna
1992 Linguistic Diversity in Space and Time. Chicago: University of Chicago Press.
Nilsson, Anna-Lena
2004 Form and Discourse Function of the Pointing toward the Chest in Swedish Sign Lan-
guage. In: Sign Language & Linguistics 7(1), 3⫺30.
Nunes, Jairo/de Quadros, Ronice M.
2004 Phonetic Realization of Multiple Copies in Brazilian Sign Language. Paper Presented at
the 8th Conference on Theoretical Issues in Sign Language Research (TISLR), Barcelona.
Padden, Carol A.
1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation,
University of California at San Diego.
Padden, Carol A.
1990 The Relation between Space and Grammar in ASL Verb Morphology. In: Lucas, Ceil
(ed.), Sign Language Research: Theoretical Issues. Washington, D.C.: Gallaudet Univer-
sity Press, 118⫺132.
Petronio, Karen
1993 Clause Structure in American Sign Language. PhD Dissertation, University of Wash-
ington.

Pfau, Roland/Steinbach, Markus
2003 Optimal Reciprocals in German Sign Language. In: Sign Language & Linguistics 6(1), 3⫺42.
Pfau, Roland/Steinbach, Markus
2005 Relative Clauses in German Sign Language: Extraposition and Reconstruction. In: Ba-
teman, Leah/Ussery, Cherlon (eds.), Proceedings of the North East Linguistic Society
(NELS 35), Vol. 2. Amherst, MA: GLSA, 507⫺521.
Pfau, Roland/Steinbach, Markus
2006 Modality-Independent and Modality-Specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 5⫺98.
Pichler, Deborah Chen/Schalber, Katharina/Hochgesang, Julie/Milkovic, Marina/Wilbur, Ronnie
B./Vulje, Martina/Pribanić, Ljubica
2008 Possession and Existence in Three Sign Languages. In: Quadros, Ronice M. de (ed.),
Sign Languages: Spinning and Unraveling the Past, Present and Future. TISLR 9, Forty-
Five Papers and Three Posters from the 9th Theoretical Issues in Sign Language Research Conference. Petrópolis/RJ, Brazil: Editora Arara Azul, 440⫺458.
Rathmann, Christian/Mathur, Gaurav
2002 Is Verb Agreement the Same Cross-Modally? In: Meier, Richard P./Cormier, Kearsy/
Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages.
Cambridge: Cambridge University Press, 370⫺404.
Schembri, Adam/Fenlon, Jordan/Rentelis, Ramas
2009 British Sign Language Corpus Project: Sociolinguistic Variation in the 1 Handshape
in BSL Conversations. Paper Presented at the 50th Annual Meeting of the Linguistics
Association of Great Britain, Edinburgh.
Smith, Wayne H.
1990 Evidence for Auxiliaries in Taiwan Sign Language. In: Fischer, Susan D./Siple, Patricia
(eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: Uni-
versity of Chicago Press, 211⫺228.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language. Cambridge: Cambridge University Press.
Tang, Gladys/Sze, Felix
2002 Nominal Expressions in Hong Kong Sign Language: Does Modality Make a Differ-
ence? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and
Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press,
296⫺320.
Thompson, Robin
2006 Eye Gaze in American Sign Language: Linguistic Functions for Verbs and Pronouns.
PhD Dissertation, University of California, San Diego.
Todd, Peyton
2009 Does ASL Really Have Just Two Grammatical Persons? In: Sign Language Studies
9(2), 166⫺210.
Wilbur, Ronnie B./Patschke, Cynthia
1998 Body Leans and the Marking of Contrast in American Sign Language. In: Journal of
Pragmatics 30, 275⫺303.
Zeshan, Ulrike
2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Ben-
jamins.
Zeshan, Ulrike
2004 Interrogative Constructions in Signed Languages: Crosslinguistic Perspectives. In: Lan-
guage 80(1), 7⫺39.

Zimmer, June/Patschke, Cynthia
1990 A Class of Determiners. In: Lucas, Ceil/Valli, Clayton (eds.), Sign Language Research: Theoretical Issues. Washington, D.C.: Gallaudet University Press, 201⫺210.

Kearsy Cormier, London (United Kingdom)


III. Syntax

12. Word order


1. Word order ⫺ some background issues
2. Word order and sign languages
3. A timeline for sign linguistic research: how does word order work fit in?
4. Towards a typology of sign languages
5. Methodological issues: how data type impacts results
6. Conclusion
7. Literature

Abstract

This chapter explores issues relating to word order and sign languages. We begin by
sketching an outline of the key issues involved in tackling word order matters, regardless
of modality of language. These include the functional aspect of word order, the articula-
tory issues associated with simultaneity in sign languages, and the question of whether
one can identify a basic word order. Though the term ‘constituent order’ is more accurate,
we will for convenience continue to use the term ‘word order’ given its historical impor-
tance in the literature. We go on to discuss the relationship between signs and words
before providing a historically-based survey of research on word order in sign languages.
We follow Woll’s (2003) identification of three important phases of research: the first
concentrating on similarities between sign and spoken languages; the second focussing
on the modality of sign languages; and the third switching the emphasis to typological
studies. We touch on the importance of such issues as non-manual features, simultaneity,
and pragmatic processes like topicalisation. The theoretical stances of scholars cited in-
clude functional grammar, cognitive grammar, and generative grammar.

1. Word order − some background issues

In our discussion of word order, we follow Bouchard and Dubuisson (1995), who iden-
tify three aspects important to word order:

(i) a functional aspect, where the order of items provides information about the com-
bination of words and which, in turn, provides guidance on how to interpret the
sentence (section 1.1);
(ii) an articulatory aspect which (for spoken languages) arises because, generally, it is
impossible to articulate more than one sound at a time (section 1.2);
(iii) the presumption of the existence of a basic word order (section 1.3).

1.1. The functional aspect

Based on the identification of discrete constituents, which we discuss in the next sec-
tion, cross-linguistic and typological research has identified a range of associations
between specific orders in languages and particular functions. For example, ordering
relations have been identified between a verb and its arguments, whether expressed as
affixes or separate phrases, which identify the propositional structure of the clause. We
may refer to a language that exhibits this behaviour as argument configurational. This
may be achieved indirectly through a system of grammatical relations (subject, object,
etc.) or directly via semantic roles (agent, patient, etc.). Greenberg’s (1966) well-known
work on word order typology, which characterises languages as SVO, SOV, etc., as-
sumes the ubiquity of this role of order in the determination of propositional meaning.
However, scholars working on other spoken languages like Chinese (LaPolla 1995) or
Hungarian (Kiss 2002) have argued that the primary role of order in these languages
is to mark information structure distinctions such as focus and topic. Such languages
have been termed discourse configurational (Kiss 1995). There have also been claims
that some spoken languages have free word order, for example Warlpiri (Hale 1983)
and Jingulu (Pensalfini 2003). These languages, which have been termed non-configu-
rational, are said not to employ order for any discernible linguistic function. In the
Generative Grammar literature, surface non-configurationality is often countered by
positing more abstract hierarchical structure (see Sandler/Lillo-Martin (2006, 301⫺308)
for this strategy applied to sign languages).
These distinct ordering patterns of argument configurational, discourse configura-
tional, and non-configurational have also been identified for sign languages. Following
Liddell’s (1980) early descriptions of word order in American Sign Language (ASL),
Valli, Lucas, and Mulrooney (2006) identify ASL as argument configurational, reflect-
ing grammatical relations such as subject and object (also see Wilbur 1987; Neidle et al. 2000). Similar claims have been made for Italian Sign Language (LIS, Volterra et
al. 1984), German Sign Language (DGS, Glück/Pfau 1998), and Brazilian Sign Lan-
guage (LSB, de Quadros 1999). On the other hand, various scholars have argued for
discourse configurational accounts, for example, Deuchar (1983) writing on British Sign
Language (BSL), Engberg-Pedersen (1994) on Danish Sign Language (DSL), and Na-
deau and Desouvrey (1994) on Quebec Sign Language (LSQ).

1.2. The articulatory aspect

The articulatory aspect raises issues about chronological sequence and discreteness and
links directly to the issue of modality. The fact that sign languages can express different
aspects of information at the same time differentiates them from spoken languages
(even when taking into account prosodic elements such as tone) in terms of the degree
of simultaneity. Simultaneity can be encoded both non-manually and manually. As for
the former type of simultaneity, there is a striking amount of similarity across described
sign languages regarding non-manuals marking interrogatives (including wh-questions
and yes/no-questions), negation, topic-comment structure, conditionals, etc. (see, for
example, Vermeerbergen/Leeson/Crasborn 2007). Vermeerbergen and Leeson (2011)

note that the similarities documented to date go beyond functionality: form is also
highly similar. Across unrelated sign languages, wh-questions, for example, are marked
by a clustering of non-manual features (NMFs) of which furrowed brows are most
salient, while for yes-no questions, raised brows are the most salient feature (see chap-
ter 14, Sentence Types, for discussion). The fact that sign languages, due to the availa-
bility of two articulators (the two hands), also allow for manual simultaneity com-
pounds the issue, and is one we return to again in section 2.
Given all this, it is probably not surprising that the issue of word order, when construed as a chronologically linear concept, is controversial for studies of sign languages.
Indeed, it is from consideration of wh-questions that Bouchard and Dubuisson (1995)
make their argument against the existence of a basic word order for sign languages
(also see Bouchard (1997)). On this point, Perniss, Pfau, and Steinbach (2007) note
that there seems to be a greater degree of variance across sign language interrogatives
with respect to manual marking (e.g. question word paradigms, question particles, word
order) than for non-manual marking, although differences in terms of the form and
scope of non-manual marking are also attested.

1.3. Basic word order


In many discussions of the mappings between orders and functions, there is a presump-
tion that one order is more basic than others. In this view, the basic word order is then
changed to communicate other functions. Such changed orders may be seen as
‘marked’ or ‘atypical’ in some way. More elaborate versions of this approach might
identify a range of order-function pairings, within each of which there may occur
marked or atypical orders. Generally, the criteria for identifying basic word order in-
clude the following (Brennan 1994, 19; also see Dryer 2007):

(i) the order that is most frequent;
(ii) the word order of simple, declarative, active clauses with no complex words or
noun phrases;
(iii) the word order that requires the simplest syntactic description;
(iv) the order that is accompanied by the least morphological marking;
(v) the order that is most neutral, i.e. that is the least pragmatically marked.

Based on these criteria, some scholars have argued for the existence of a basic word
order in certain sign languages. Basic SVO order has been identified in, for instance,
ASL and LSB, while LIS and DGS have been argued to have basic SOV order. These
two word order patterns are illustrated by the ASL and LIS examples in (1), taken
from Liddell (1980, 19) and Cecchetto/Geraci/Zucchi (2009, 282), respectively.

(1) a. woman forget purse [ASL]
‘The woman forgot the purse.’
b. gianni maria love [LIS]
‘Gianni loves Maria.’

However, as noted above, some scholars question the universality of basic word order.
Bouchard and Dubuisson (1995), for example, argue that “only languages in which

word order has an important functional role will exhibit a basic order” (1995, 100).
Their argument is that the modality of sign languages reduces the importance of order
as “there are other means that a language can use to indicate what elements combine”
(1995, 132). The notion of basic word order usually underlies the identification of
functional type in that the type is usually based on a postulated basic word order,
which then may undergo changes for pragmatic reasons or to serve other functions.
Massone and Curiel (2004), for instance, identify Argentine Sign Language (LSA) as
argument configurational (SOV) in its basic word order but describe pragmatic rules
such as topicalisation that may alter this basic order (see section 3.2 for further discussion).

2. Word order and sign languages


We might start by asking whether it is appropriate to term the ordering of constituents
in a sign language as ‘word order’. Brennan (1994) concludes that while there are
difficulties with terms that originate in the examination of spoken languages, the unit
that is known as ‘the sign’ in sign languages

clearly functions as the linguistic unit that we know as the word. We do not usually exploit
a separate term for this unit in relation to written as opposed to spoken language, even
though notions of written word and spoken word are not totally congruous.
(Brennan 1994, 13)

Brennan thus uses the term ‘word’ in a general sense to incorporate spoken, sign, and
written language. She uses the term ‘sign’ when referring only to sign languages, taking
as given that ‘signs’ are equivalent to ‘words’ in terms of grammatical role. However,
in the same volume, Coerts (1994a,b), investigating word order in Sign Language of
the Netherlands (NGT), refers explicitly to constituent structure. She does not explic-
itly motivate her choice of terminology (a problem that impacts on attempts at later
typological work; see, for example, Johnston et al. (2007)), but as she is concerned with the ordering of elements within a fixed set of parameters, the discussion of constituents seems more appropriate.
Leaving this debate regarding terminology aside, we can say that the issue of identifying basic constituent order(s) in a sign language is complex. However, given the
fact that sign languages are expressed in another modality, one which makes use of
three-dimensional space and can employ simultaneous production of signs using the
major articulators (i.e. the arms and hands), we also encounter questions that are
unique to research on sign languages. These include questions regarding the degree
and extent of simultaneous patterning, the extent of iconicity at syntactic and lexical
levels, and the applicability to sign languages of a dichotomy between languages whose
constituent orders reflect syntactic functions and those whose orders reflect pragmatic
functions (after Brennan 1994, 29 f.).
The challenge posed by simultaneity is illustrated by the examples in (2). In the
NGT example in (2a), we observe full simultaneity of the verb and the direct object
(which is expressed by a classifier); it is therefore impossible to decide whether we are
dealing with SVO or SOV order (Coerts 1994b, 78). The Jordanian Sign Language
(LIU) example in (2b) is even more complex (Hendriks 2008, 142 f.; note that the

signer is left-handed). The Figure (the subject ‘car’) and the Ground (the locative
object ‘bridge’) are introduced simultaneously by classifiers. Subsequently, the classifier
representing the car is first held in place, then moved with respect to the Ground, and
then held in place again, taking on different grammatical roles in subsequent clauses.
Clearly, it would be challenging, if not impossible, to determine word order in this
example (see Miller (1994) and papers in Vermeerbergen/Leeson/Crasborn (2007) for
discussion of different types of simultaneity; also see example (7) below).

(2) a. R: woman cut3b [NGT]
L: cl(thread)3b
‘The woman cuts the thread.’
b. R: cl:vehicleforward hold backward-forward hold____________ [LIU]
L: cl:bridge know cl:bridge stay what
R: hold
L: cl:vehiclemove forward repeatedly indexCL:vehicle
‘The car passed under the bridge, you get it? It passed under the bridge and
stayed there. What (could he do)? That parked car was passed by other cars.’

In relation to the ordering of constituents within sign languages, Brennan notes that

there is a reasonable body of evidence to indicate that sequential ordering of signs does
express such relationships, at least some of the time, in all of the signed languages so far
studied. However, we also know from the studies available that there are other possible
ways. (Brennan 1994, 31)

Among these “other possible ways”, we can list the addition of specific morphemes to
the form of the verb, which allows for the expression of the verb plus its arguments
(see section 4.3). Brennan makes the point that we cannot talk about SVO or SOV or
VSO ordering if the verb and its arguments are expressed simultaneously within the
production of a single sign, as is, for example, the case in classifier constructions (see
chapter 8 for discussion). This, and other issues related to the expression of simultane-
ity are taken up in Vermeerbergen, Leeson, and Crasborn (2007).

3. A timeline for sign linguistic research: how does word order work fit in?

Woll (2003) has described research on sign languages as falling into three broad catego-
ries: (i) the modern period, (ii) the post-modern period, and (iii) typological research
(see chapter 38 for the history of sign linguistic research). We suggest that work on
word order can be mapped onto this categorization, bearing in mind that the categories
suggested by Woll are not absolute. For example, while ASL research may have en-
tered into the post-modern stage in the early 1980s, the fact that for many other under-
described sign languages, the point of reference for comparative purposes has fre-
quently been ASL or BSL implies that for these languages, some degree of cross-
linguistic work has always been embedded in their approach to description. However,

the conscious move towards typological research, taking on board findings from the
field of gesture research and awareness of the scope of simultaneity in word order, is
very much a hallmark of early twenty-first century research. We address work that can
be associated with the modern and post-modern period in the following two subsec-
tions and turn to typological research in section 4.

3.1. The modern period and word order research


Early research tended to focus on the description of individual sign languages with
reference to the literature on word order in spoken languages, and concentrated mostly
on what made sign languages similar to spoken languages rather than on what differentiated the two from each other. This links to Woll’s ‘modern period’. For example, early
research focused on the linearity of expression in sign languages without reference to
modality-related features of sign languages like manual simultaneity, iconicity, etc. (e.g.,
Fischer 1975; Liddell 1977). Fischer (1975), for instance, describes ASL as having an
underlying SVO pattern at the clause level, but also notes that alternative orders exist
(such as the use of topic constructions in certain instances, yielding OSV order). In
contrast, Friedman (1976) claims that word order in ASL is relatively free with a gen-
eral tendency for verbs to appear sentence-finally, also arguing that the subject is not
present in the majority of her examples. However, Liddell (1977) and Wilbur (1987)
questioned Friedman’s analysis, criticising her failure to recognise the ways in which
ASL verbs inflect to mark agreement, a point of differentiation between spoken and
sign languages, which we can align with Woll’s post-modern period of research.

3.2. The post-modern period and word order research


In the post-modern period, which Woll pinpoints as having its beginnings in the 1980s,
researchers began to look at the points of differentiation between sign and spoken
languages, leading to work focussing on the impact of language modality on syntax.
The beginnings of work on word order in BSL, for example, fall into this timeframe.
As with the work on ASL reviewed above, Deuchar (1983) raises the question of
whether BSL is an SVO language but argues that a more functional topic-comment
analysis might more fully account for the data than one that limits itself to sign order
per se. Deuchar drew on Li and Thompson’s (1976) work on the definition of topics,
demonstrating links to the functionalist view on language. Her work also seeks to
compare BSL with ASL, thus representing an early nod towards typological work for
sign languages. For example, in exploring the function of topics in BSL, Deuchar did not find the slight backward head tilt which Liddell (1980) had described as a marker of topicalisation in ASL. However, she found that NMFs
marked the separation of topic and comment in her data: topics were marked by raised
eyebrows while the comments were marked by a headnod (a description which also
differs slightly from that given by Baker-Shenk and Cokely (1980) for ASL).
By the late 1980s and into the 1990s, work on ASL also began to make greater
reference to topic marking and other points of differentiation such as simultaneity
(e.g., Miller 1994). We note here that Miller also looked at LSQ, thus marking a move
towards cross-linguistic, typological studies.

3.2.1. Functional and cognitive approaches to word order

Work that addresses the word order issue from a functionalist-cognitive viewpoint ar-
gues that topic-comment structure reflects basic ordering in ASL (and probably other
sign languages) and is pervasive across ASL discourse (e.g., Janzen 1998, 1999), noting,
however, that this sense of pervasiveness is lost when topic-comment structure is con-
sidered as just one of several sentence types that arise. Janzen presents evidence from
a range of historical and contemporary ASL monologues that suggests that topics
grammaticalized from yes/no-question structure and argues that topics function as ‘piv-
ots’ in the organisation of discourse. He suggests that topics in ASL arise in pragmatic,
syntactic, and textual domains, but that in all cases, their prototypical characteristic is
one of being ‘backward looking’ to a previous identifiable experience or portion of the
text, or being ‘forward looking’, serving as the ground for a portion of discourse that
follows. The examples in (3) illustrate different pragmatic discourse motivations for
topicalisation (Janzen 1999, 276 f., glosses slightly adapted). In example (3a), the string
I’ll see Bill is new information while the topicalised temporal phrase next-week situ-
ates the event within a temporal framework. In (3b), the object functions as topic
because “the signer does not feel he can proceed with the proposition until it is clear
that certain information has become activated for the addressee”.

top
(3) a. next-week, future see b-i-l-l [ASL]
‘I’ll see Bill next week.’
top
b. know b-i-l-l, future see next-week
‘I’ll see Bill next week.’

Another, yet related, theoretical strand influencing work on aspects of constituent or-
dering is that of cognitive linguistics, which emphasises the relationship between cogni-
tive processes and language use (Langacker 1991). Work in this genre has pushed
forward new views on aspects of verbal valence such as detransitivisation and passive
constructions (Janzen/O’Dea/Shaffer 2001; Leeson 2001). Cognitive linguistics accounts
typically frame the discussion of word order in terms of issues of categorisation, proto-
type theory, the influence of gesture and iconicity with respect to the relationship
between form and meaning, and particularly the idea of iconicity at the level of gram-
mar. The identification of underlying principles of cognition evidenced by sign lan-
guage structures is an important goal. Work in this domain is founded on that of au-
thors such as Jackendoff (1990) and Fauconnier (1985, 1997) and has led to a growing
body of work by sign linguists working on a range of sign languages; for work on ASL,
see, for instance, Liddell (2003), Armstrong and Wilcox (2007), Dudis (2004), S. Wilcox
(2004), P. Wilcox (2000), Janzen (1999, 2005), Shaffer (2004), Taub (2001), and Taub
and Galvan (2001); for Swedish Sign Language (SSL), see Bergman and Wallin (1985)
and Nilsson (2010); for DGS, see Perniss (2007); for Icelandic Sign Language, see
Thorvaldsdottir (2007); for Irish Sign Language (Irish SL), see Leeson and Saeed
(2007, 2012) and Leeson (2001); for French Sign Language (LSF), see Cuxac (2000),
Sallandre (2007), and Risler (2007); and for Israeli Sign Language (Israeli SL), see
Meir (1998).

3.2.2. Generative approaches to word order

In contrast, other accounts lean towards generative views on language. Fischer (1990),
for instance, observes that both head-first and head-final structures appear in ASL
and notes a clear relationship between definiteness and topicalisation. She also notes
inconsistency in terms of head-ordering within all types of phrases and attempts to
account for this pattern in terms of topicalisation: heads usually precede their comple-
ments except where complements are definite ⫺ in such cases, a complement can pre-
cede the head. This leads Fischer to claim that ASL is like Japanese in structure insofar
as ASL allows for multiple topics to occur.
Similarly, Neidle et al. (2000) explore a wide range of clauses and noun phrases as
used by ASL native signers within a generativist framework. They conclude (like Fis-
cher and Liddell before them) that ASL has a basic hierarchical word order, which is
SVO, basing their claims on the analysis of both naturalistic and elicited data. Working
within a Minimalist Program perspective (Chomsky 1995), they state that derivations
from this basic order can be explained in terms of movement operations, that is, they
reflect derived orders. Neidle et al. make some very interesting descriptive claims for
ASL: they argue that topics, tags, and pronominal right dislocations are not fundamen-
tal to the clause in ASL. They treat these constituents as being external to the clause
(i.e. the Complementizer Phrase (CP)) and argue that once such clause-external el-
ements are identified, it becomes evident that the basic word order in ASL is SVO.
For example, in (4a), the object has been moved from its post-verbal base position
(indicated by ‘t’ for ‘trace’) to a sentence-initial topic position ⫺ the specifier of a
Topic Phrase in their model ⫺ resulting in OSV order at the surface (Neidle et al.
2000, 50). Note that according to criterion (v) introduced in section 1.3 (pragmatic
neutrality), example (4a) would probably not be considered basic either.

top
(4) a. johni, mary love ti [ASL]
‘John, Mary loves.’
b. pro book buy ix3a [NGT]
‘He buys a book.’

Along similar lines, the OVS order observed in the NGT example in (4b) is taken to
be the result of two syntactic mechanisms: pronominal right dislocation of the subject
pronoun (pronoun copy) accompanied by pro-drop (Perniss/Pfau/Steinbach 2007, 15).
Neidle et al. (2000) further argue that the distribution of syntactic non-manual
markings (which spread over c-command domains) lends additional support to the
existence of hierarchically organized constituents, thus further supporting their claim
that the underlying word order of ASL is SVO. They conclude that previous claims
that ASL utilised free word order are unfounded.
Another issue of concern first raised in the post-modern period and now gaining
more attention in the age of typological research is that of modality, with the similar-
ities and differences between sign languages attracting increased attention. Amongst
other things, this period led to work on simultaneity in all its guises (see examples in
(2) above), and some questioning of how this phenomenon impacted on descriptions
of basic word order (e.g., Brennan 1994; Miller 1994). Clearly, simultaneity is highly
problematic for a framework that assumes that hierarchical structure is mapped onto
linear order.

4. Towards a typology of sign languages


In today’s climate, researchers are drawing on the results of work emanating from the
modern and post-modern periods, consolidating knowledge and re-thinking theoretical
assumptions with reference to cross-linguistic studies on aspects of syntax, semantics,
and pragmatics (e.g., Perniss/Pfau/Steinbach 2007; Vermeerbergen/Leeson/Crasborn
2007).
In the late twentieth century and early twenty-first century, work has tended to be
cross-linguistic in nature, considering modality effects as a point of differentiation
between spoken and sign languages. Moreover, studies sought to identify points that
differentiate between sign languages, while also acknowledging the impact that articu-
lation in the visual-spatial modality seems to have on sign languages, which leads to a
certain level of similarity in some areas. This phase of research maps onto Woll’s
third phase, that of ‘typological research’, and has led to a significant leap forward in
terms of our understanding of the relationship between sign languages and the ways
in which sign languages are structured.

4.1. Cross-linguistic comparison based on picture elicitation

Early work which we might consider as mapping onto a typological framework, and
which still has relevance today, involves the picture elicitation tasks first used by Vol-
terra et al. (1984) for LIS (see chapter 42, Data Collection, for details). This study,
which focused on eliciting data to reflect transitive utterances, has since been replicated
for many sign languages, including work by Boyes-Braem et al. (1990) for Swiss-Ger-
man Sign Language, Coerts (1994a,b) for NGT, Saeed, Sutton-Spence, and Leeson
(2000) for Irish SL and BSL, Leeson (2001) for Irish SL, Sze (2003) for Hong Kong
Sign Language (HKSL), Kimmelman (2011) for Russian Sign Language (RSL), and,
more recently, comparative work on Australian Sign Language (Auslan), Flemish Sign
Language (VGT), and Irish SL (Johnston et al. 2007) as well as on VGT and South
African Sign Language (Vermeerbergen et al. 2007). These studies attempt to employ
the same framework in their analysis of comparative word order patterning across sign
languages, using the same set of sentence/story elicitation tasks, meant to elicit the
same range of orders and strategies in the languages examined. Three kinds of declara-
tive utterances were explored in particular: non-reversible sentences (i.e. where only
one referent can be the possible Actor/Agent in the utterance; e.g. The boy eats a piece
of cake), reversible sentences (i.e. where both referents could act as the semantic
Agent; e.g. The boy hugs his grandmother), and locative sentences (these presented
the positions of two referents relative to one another; e.g. The cat sits on the chair).
Unsurprisingly, results have been varied. For example, Italian subjects tended to
mention the Agent first in their sentences while Swiss informants “tended to prefer to
set up what we have called a visual context with the utilisation of many typical sign
language techniques such as spatial referencing, use of handshape proforms, role, etc.”
(Boyes-Braem et al. 1990, 119). For many of the sign languages examined, it was found
that reversibility of the situation could have an influence on word order in that reversi-
ble sentences favoured SVO order while SOV order was observed more often in non-
reversible sentences; this appeared to be the case in, for instance, LIS (Volterra et al.
1984) and VGT (Vermeerbergen et al. 2007). In Auslan, Irish SL, and HKSL, however,
reversibility was not found to influence word order (Johnston et al. 2007; Sze 2003).
Moreover, results from many of these studies suggest that locative sentences favour a
different word order, namely Ground ⫺ Figure ⫺ locative predicate, a pattern that is
likely to be influenced by the visual modality of sign languages. A representative exam-
ple from NGT is provided in (5) (Coerts 1994a, 65), but see (7) for an alternative struc-
ture.

(5) table ball cl‘ball under the table’ [NGT]
‘The ball is under the table.’

Another study that made use of the same Volterra et al. elicitation materials is Ver-
meerbergen’s (1998) analysis of VGT. Using 14 subjects aged between 20 and 84 years,
Vermeerbergen found that VGT exhibits systematic ordering of constituents in declara-
tive utterances that contain two (reversible or non-reversible) arguments. What is nota-
ble in Vermeerbergen’s study is the clear definition of subject applied (work preceding
Coerts (1994a,b) does not typically include definitions of terms used). Vermeerbergen
interprets subject as a ‘psychological subject’, that is “the particular about whom/which
knowledge is added will be called a subject”. Similarly, her references to object are
based on a definition of object as “the constituent naming the referent affected by
what is expressed by the verb (the action, condition)” (1998, 4). However, we should
note that this is a ‘mixed’ pair of definitions: object is defined in terms of semantic role
(Patient/Theme) while subject is given a pragmatic definition (something like topic).
Vermeerbergen found that SVO ordering occurred most frequently in her elicited
data, although older informants tended to avoid this patterning. Analysing spontaneous
data, with the aim of examining whether SVO and SOV occurred as systematically
outside of her elicited data corpus, she found that only a small number of
clauses contained verbs accompanied by explicit referents, particularly in clauses where
two interacting animate referents were expressed. She notes that

Flemish signers seem to avoid combining one single verb and more than one of the interact-
ing arguments. To this end, they may use mechanisms that clarify the relationship between
the verb and the arguments while at the same time allowing for one of the arguments not
to be overtly expressed (e.g. verb agreement, the use of both hands simultaneously, shifted
attribution of expressive elements, etc.). (Vermeerbergen 1998, 2)

4.2. Semantic roles and animacy


Building on earlier studies, Coerts’ (1994a,b) work on NGT is one of the first attempts
to explicitly list semantic roles as a mechanism that may influence argument relations.
Her objective was to determine whether or not NGT had a preferred constituent order.
On this basis, she labelled argument positions for semantic function, including Agent,
Positioner (the entity controlling a position), Zero (the entity primarily involved in a
State), Patient, Recipient, Location, Direction, etc. Following Dik’s Functional Gram-
mar approach (Dik 1989), Coerts divided texts into clauses and extra-clausal constitu-
ents, where a clause was defined as any main or subordinate clause as generally de-
scribed in traditional grammar.
Boyes-Braem et al. (1990) and Volterra et al. (1984) had identified what they re-
ferred to as ‘split sentences’, which Boyes-Braem describes as sentences that are bro-
ken into two parts, where “the first sentence in these utterances seem to function as
‘setting up a visual context’ for the action expressed in the second sentence” (Boyes-
Braem et al. 1990, 116). A LIS utterance exemplifying this type of structure is provided
in (6); the picture that elicited this utterance showed a woman combing a girl’s hair
(Volterra et al. 1984; note that the example is glossed in Italian in the original article).

(6) child seated, mother comb [LIS]
‘The child is seated, and the mother combs (her hair).’

Coerts (1994a) identified similar structures in her NGT data, and argues that these
should be analysed as two separate clauses where the first clause functions as a ‘Setting’
for the second clause. Coerts found that most of the clauses she examined contained
two-place predicates where the first argument slot (A1) was typically filled by the
semantic Agent argument (in Action predicates), Positioner (in Position predicates),
Process or Force (in Process predicates), or Zero (in State predicates). The second
argument slot (A2) tended to be filled by the semantic Patient role (in Action, Position,
and State predicates) or Direction/Source (in Action, Position, and Process predicates).
First arguments were considered more central than second arguments given that “first
arguments are the only semantic arguments in one place predicates. That is, semanti-
cally defined, there can be no Action without an Agent, no Position without a Posi-
tioner, etc., but there can be an Action without a Patient and also a Position without
a Location et cetera” (Coerts 1994a, 53). For locative utterances, as in (7) below, the
general pattern identified was A1 V/A2 (Coerts 1994a, 56). In this example, the first
argument (car) is signed first, followed by a simultaneous construction with the verbal
predicate signed by the right hand and the second argument (the location bridge) by
the left hand.

(7) R: (2h)car 3adrive-cl‘car’3b [NGT]
L: bridge ctr.up_____
R: Agent Agent-Verb
L: Loc
A1 V/A2
‘The car goes under the bridge.’

This analytical approach mirrors work from the Functional and Cognitive Linguistics
fields, which suggests a general tendency within word order across languages, claiming
a natural link between form and meaning, with properties of meaning influencing and
shaping form (e.g., Tomlin 1986). Of specific interest here is the Animated First Princi-
ple, whereby in basic transitive sentences, the most Agent-like element comes first.
That is, there is a tendency for volitional actors to precede less active or less volitional
participants. Coerts’ findings and those of others have identified this principle for several
sign languages (e.g., Boyes-Braem et al. 1990; Leeson 2001; Kimmelman 2011). Signifi-
cantly, Coerts found that sentence type was relevant to discussion of constituent order
in NGT. She writes that

From the analyses of the three sentence types, it emerges that the relation between the
arguments in a clause can also be expressed by means of a verb inflection, classifier incor-
poration and lexical marking of the second argument and that the preferred constituent
order can be influenced by semantic factors, especially the features Animate/Inanimate
and Mobile/Immobile. (Coerts 1994a, 61)

The cross-linguistic study conducted by Saeed, Sutton-Spence and Leeson (2000),
which compares BSL and Irish SL, builds on previous work by Volterra et al. (1984)
and Coerts (1994a,b). This very small-scale study looked at a set of data elicited follow-
ing the same elicitation procedure used by Volterra et al. (1984). Saeed, Sutton-Spence,
and Leeson report finding the same types of structures as reported in studies on other
sign languages that used the same elicitation materials and methodology, including
split sentences, which they account for using Coerts’ (1994a,b) implementation of a
Functional Grammar framework. Despite such striking similarities across languages,
Saeed, Sutton-Spence, and Leeson also report differences between BSL and Irish SL
in terms of use of particular features. For example, different patterns emerged with
respect to how signers of BSL and Irish SL used simultaneous constructions, with their
use being more prevalent among the BSL informants. It was also found that BSL
signers preferred to establish contextual information in greater detail than their Irish
SL counterparts. Saeed, Sutton-Spence, and Leeson report that BSL and Irish SL seem
to share a more similar underlying semantic pattern than is suggested at the syntactic level
alone: they report a high degree of consistency in the relationship between animacy
and focus in both languages. This was evidenced by the fact that the more animate
entities tended to be signed by the dominant hand in simultaneous constructions, and
Ground elements were introduced before Figure elements in locative constructions.
Finally, the authors note that constituent order, particularly in relation to the use of
devices like simultaneity, seems more fixed in the BSL data examined than in the Irish
SL data.

4.3. Morphosyntactic and syntactic factors


Above, we have already seen that semantic factors such as reversibility and animacy
can have an influence on word order in at least some sign languages. In addition, it
has been found in a number of studies that morphosyntactic factors can also play a
role. In particular, a different word order may be observed with verbs that carry certain
morphological markers, such as agreement, classifiers, or aspect morphology. Chen
Pichler (2001) subsumes the different types of markings under the term ‘re-ordering
morphology’.
First, it has been observed that in some sign languages, plain verbs favour SVO
order while agreeing (or indicating) verbs favour SOV order; this pattern has been
described for VGT (Vermeerbergen et al. 2007), LSB (de Quadros 1999), and Croatian
Sign Language (HZJ, Milković/Bradarić-Jončić/Wilbur 2006), among others. Secondly,
in many sign languages, classifier constructions behave differently with respect to word
order; see, for instance, the simultaneous constructions involving classifiers in (2). Fi-
nally, verbs that are modified to express aspect (e.g. by means of reduplication) may
appear in a different position. In ASL and RSL, for instance, aspectually modified
verbs usually appear clause-finally while the basic word order is SVO (Chen Pichler
2001; Kimmelman 2011). With respect to the impact of re-ordering morphology, Chen
Pichler (2008, 307) provides the examples in (8) from her corpus of acquisition data
(both examples were produced by 26-month-old girls). The verb in (8a) carries aspec-
tual inflection, the verb in (8b) combines with a Handling classifier; both verbs appear
sentence-finally.

(8) a. cat searchaspect [ASL]
‘I’m looking and looking for the cat.’
b. hey+ bag indexbag pick-up-by-handle
‘Hey (waving to get attention), pick up the bag.’

Returning to the question of basic word order, it could be argued that re-ordering
morphology increases the morphological markedness of the verb. Hence, according to
criterion (iv) in section 1.3, the alternative structures observed with morphologically
marked verbs would not be considered basic.
Yet another phenomenon that complicates the identification of basic word order is
doubling. We shall not discuss this phenomenon in detail but only point out that in
many sign languages, verbs in particular are commonly doubled (see chapter 14, Sen-
tence Types, for doubling of wh-words). If the resulting structure is SVOV, then it is
not always possible to determine whether the basic structure is SVO or SOV, that is,
which of the two instances of the verb should be considered as basic (see Kimmelman
(2011) for an overview of factors potentially influencing word order in sign languages).

4.4. Summary: the impact of modality


One of the key questions underpinning recent work on word order is whether modality
effects are responsible for the reduced range of structural variation found in sign lan-
guages in comparison to spoken languages. Perniss, Pfau, and Steinbach (2007, 14) note
that variation in sign languages is most striking in the realm of syntax given that “the
merging of a syntactic phrase structure is highly abstract and independent of phono-
logical properties of the items to be inserted ⫺ no matter whether your theory involves
movement operations or not”.
The above discussion has made clear that sign languages differ from each other
with respect to word order ⫺ and that, at least to some extent, they do so along similar
lines as spoken languages do. In addition, a semantic or morphosyntactic factor that
may have an influence on word order in one sign language does not necessarily have
the same influence in another sign language. Still, we also find striking similarities
across sign languages, and at least some of these similarities appear to be influenced by
the modality (e.g. Ground-Figure order in locative sentences, simultaneous construc-
tions).

5. Methodological issues: how data type impacts results


Finally, it is important to consider the role that data collection plays in the context of
studies of word order in sign languages (also see chapter 42). First, it should be
noted that a range of different types of data has been utilised, and this may have
implications for the findings. For example, Coerts (1994a, 66f) urges caution in the
interpretation of her results on NGT because the data she analysed was based on a set
of sentences elicited in isolation. She notes that while the use of drawings is common
practice in sign language research as they minimise the influence of spoken language,
the elicitation drawings used in her study (like that of Volterra et al. 1984; Boyes-
Braem et al. 1990; Vermeerbergen 1998; Johnston et al. 2007; and Saeed/Sutton-
Spence/Leeson 2000) involved sets of two pictures which were minimally different,
clearly contrastive with respect to one constituent. On this basis, she notes that these
contrasts may have influenced the resulting linguistic encoding of senten-
ces, involving constructions that mark contrast. In the same way, Liddell’s early work
on ASL was dependent on the translation of English sentences, which potentially al-
lowed for an increase in the production of English-based or English-like constructions (Lid-
dell 1980). In contrast, the work of Neidle et al. (2000) focussed only on native signers
of ASL, and this is an important issue, as only 5⫺10 per cent of sign language users
are ‘native’ insofar as they are born into Deaf families where a sign language is the
primary language at home. For the remaining 90⫺95 per cent, we might expect that
the patterns described by Neidle et al., while relevant, will not be reflected as consist-
ently in their signing as they are in the productions of native signers. Consequently,
the issue of frequency patterning as a key indicator of basic word order in a language
(Brennan 1994) ⫺ and indeed, for grammar in general ⫺ remains unresolved in this in-
stance.
Similarly, we can note that it is often difficult to compare the results of different
studies in an unambiguous way. This is due to the fact that the range of data types
varies across studies, with varying degrees of other influences on the expected target
language output ⫺ be it the influence of a spoken language or the intrusion of contrast-
ive constructions due to an informant’s wish to be maximally comprehensive in their
output. We must take into consideration the constraints that applied in the data colla-
tion of these studies and the impact this has on the reliability of their findings.
Ultimately, we might argue that the most ‘valid’ results are those that compare
and contrast constituent order across a range of data types, and we note that moves
towards comparing and contrasting sign languages, backed up by access to multimodal
data where annotations can be viewed relative to the source language data, enhance
efforts towards identifying a true typology of sign languages (see Pizzuto/Pietrandrea
(2001) for discussion of problems with glossing, for example). Indeed, we suggest that
as linguists document more unrelated sign languages, thereby facilitating cross-linguis-
tic studies based on a richer data set from a range of related and unrelated sign lan-
guages, our understanding of just how different sign languages are from each other
will increase, allowing for a true typology of sign languages to unfold. The documenta-
tion of sign languages like Jordanian Sign Language (LIU, Hendriks 2008), Adamorobe Sign Language (AdaSL,
Nyst 2007), and the Al-Sayyid Bedouin Sign Language (ABSL, Kisch 2008; Sandler et
al. 2005) adds to the pool of languages on the basis of which we may found a considered
typology. We must bear in mind that some communities of sign language users are
larger and more robust than others (e.g., village sign languages versus national sign
languages), a fact that has implications for language transmission and usage, which in
turn has potential implications for all kinds of grammatical analysis, including word
order (see chapter 24 on village (shared) sign languages).
Given the range of theoretical frameworks that have been adopted in considering
word order in sign languages, it is practically impossible to compare and contrast find-
ings across all studies: indeed, we refer the reader to Johnston et al.’s (2007) problema-
tisation of cross-linguistic analyses of sign languages. What we can identify here is
(i) the major thrust of the range of underlying approaches applied (e.g., descriptive,
generative, functionalist, cognitive, semantic, typological); (ii) the languages consid-
ered; (iii) the methodologies applied; and (iv) the general period in which the work
took place relative to Woll’s three-way distinction. All this can assist in our interpreta-
tion of the data under analysis. For example, we have seen that some studies only focus
on the semantic analysis of a narrow range of structures (e.g., agreement verbs, transi-
tive utterances, passives, question structures) while others are more broadly based and
offer general syntactic patterns for a given language (e.g., general valency operations for a
language). This has been most notable for research on ASL and BSL, where (to gener-
alise), the consensus seems to be that ASL is an SVO language while BSL is said to
be a topic-comment language.
A final note on word order in sign languages must address the role that new technol-
ogies play. The development of software such as SignStream© and ELAN has allowed
for significant strides forward in the development of digital corpora, and the analysis
of such data promises to bring forth the potential for quantitative analyses as well as
the opportunity for richer and more broadly based qualitative analyses than have been
possible to date (see chapter 43, Transcription, for details). Digital corpus work for a
range of sign languages including Auslan, BSL, Irish SL, LSF, NGT, VGT, and SSL is
now underway. Neidle et al. (2000) employed SignStream© in their analysis of data,
which allowed them to pinpoint the co-occurrence of non-manual features with manual
features in a very precise way. Other syntactic work using SignStream© includes that
of Cecchetto, Geraci, and Zucchi (2009). Similarly, work in ELAN has allowed for
closer analysis of both the frequency of structures and the co-occurrence of structures,
and promises to facilitate a quantum leap forward in terms of analysis and sharing of
data. One of the main challenges is to ensure that the analysis of less well-supported
sign languages is not left behind in this exciting digital period.

6. Conclusion

This chapter has provided a bird’s eye view of key issues relating to word order and
sign languages. Following Bouchard and Dubuisson (1995), we identified three aspects
important to word order: (i) a functional aspect; (ii) an articulatory aspect; and (iii)
the presumption of the existence of a basic word order. We outlined the relationship
between signs and words before providing a historically-based survey of research on
word order in sign languages, following Woll’s (2003) identification of three important
phases of research: the first concentrating on similarities between sign and spoken
languages; the second focussing on the visual-gestural modality of sign languages; and
the third switching the emphasis to typological studies. We touched on the importance
of such issues as non-manual features, simultaneity, and pragmatic processes like topi-
calisation and pointed out that the available studies on word order are embedded
within different theoretical frameworks (including Functional Grammar, Cognitive
Grammar, and Generative Grammar). We noted that over time, work on word order
issues in sign languages has become more complex, as issues such as simultaneity,
iconicity, and gesture in sign languages were included in the discussion. Similarly, as
more and more unrelated sign languages are analysed, a more comprehensive picture
of the relationship between sign languages and of the striking similarity of form and
function at the non-manual level for certain structures (such as interrogatives) has
emerged. However, we also noted that, due to the lack of a coherent approach to the
description and analysis of data across sign languages, no clear claims regarding a
typology of word order in sign languages can yet be made. Finally, we saw that new
technologies promise to make the comparison of data within and across sign languages
more reliable, and we predict that the age of digital corpora will offer new insights
into the issue of word order in sign languages.

7. Literature

Armstrong, David F./Wilcox, Sherman E.
2007 The Gestural Origin of Language. Oxford: Oxford University Press.
Baker-Shenk, Charlotte/Cokely, Dennis
1980 American Sign Language ⫺ A Teacher’s Resource Text on Grammar and Culture. Wash-
ington, DC: Gallaudet University Press.
Bergman, Brita/Wallin, Lars
1985 Sentence Structure in Swedish Sign Language. In: Stokoe, William C./Volterra, Virginia
(eds.), Proceedings of the IIIrd International Symposium on Sign Language Research.
Silver Spring: Linstok Press, 217⫺225.
Bouchard, Denis/Dubuisson, Colette
1995 Grammar, Order and Position of Wh-Signs in Quebec Sign Language. In: Sign Lan-
guage Studies 87, 99⫺139.
Bouchard, Denis
1997 Sign Languages and Language Universals: The Status of Order and Position in Gram-
mar. In: Sign Language Studies 91, 101⫺160.
Boyes-Braem, Penny/Fournier, Marie-Louise/Rickli, Francoise/Corazza, Serena/Franchi, Maria-
Louisa/Volterra, Virginia
1990 A Comparison of Techniques for Expressing Semantic Roles and Locative Relations in
Two Different Sign Languages. In: Edmondson, William H./Karlsson, Fred (eds.), SLR
87: Papers from the Fourth International Symposium on Sign Language Research. Ham-
burg: Signum, 114⫺120.
Brennan, Mary
1994 Word Order: Introducing the Issues. In: Brennan, Mary/Turner, Graham H. (eds.), Word
Order Issues in Sign Language. Working Papers. Durham: International Sign Linguistics
Association, 9⫺46.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro
2009 Another Way to Mark Syntactic Dependencies. The Case for Right Peripheral Specifi-
ers in Sign Languages. In: Language 85(2), 1⫺43.
Chen Pichler, Deborah C.
2001 Word Order Variation and Acquisition in American Sign Language. PhD Dissertation,
University of Connecticut.
Chen Pichler, Deborah C.
2008 Views on Word Order in Early ASL: Then and Now. In: Quer, Josep (ed.), Signs of the
Time: Selected Papers from TISLR 8. Hamburg: Signum, 293⫺318.
Chomsky, Noam
1995 The Minimalist Program. Cambridge, MA: MIT Press.
Coerts, Jane
1994a Constituent Order in Sign Language of the Netherlands. In: Brennan, Mary/Turner,
Graham H. (eds.), Word Order Issues in Sign Language. Working Papers. Durham:
International Sign Linguistics Association, 44⫺72.
Coerts, Jane
1994b Constituent Order in Sign Language of the Netherlands and the Functions of Orienta-
tions. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign
Language Structure. Durham: ISLA, 69⫺88.
Cuxac, Christian
2000 La Langue des Signes Française (LSF). Les Voies de l’Iconicité. Paris: Ophrys.
Deuchar, Margaret
1983 Is BSL an SVO Language? In: Kyle, Jim/Woll, Bencie (eds.), Language in Sign. London:
Croom Helm, 69⫺76.
Dik, Simon C.
1989 The Theory of Functional Grammar. Dordrecht: Foris.
Dryer, Matthew S.
2007 Word Order. In: Shopen, Timothy (ed.), Language Typology and Syntactic Description.
Vol. I: Clause Structure (2nd Edition). Cambridge: Cambridge University Press, 61⫺131.
Dudis, Paul
2004 Body Partitioning and Real-space Blends. In: Cognitive Linguistics 15(2), 223⫺238.
Engberg-Pedersen, Elisabeth
1994 Some Simultaneous Constructions in Danish Sign Language. In: Brennan, Mary/Turner,
Graham H. (eds.), Word-Order Issues in Sign Language: Working Papers. Durham:
International Sign Linguistics Association, 73⫺87.
Fauconnier, Gilles
1985 Mental Spaces. Cambridge, MA: MIT Press.
Fauconnier, Gilles
1997 Mappings in Thought and Language. Cambridge: Cambridge University Press.
Fischer, Susan D.
1975 Influences on Word Order Change in ASL. In: Li, Charles (ed.), Word Order and Word
Order Change. Austin: University of Texas Press, 1⫺25.
Fischer, Susan D.
1990 The Head Parameter in ASL. In: Edmondson, William H./Karlsson, Fred (eds.), SLR
’87: Papers from the Fourth International Symposium on Sign Language Research. Ham-
burg: Signum, 75⫺85.
Friedman, Lynn A.
1976 Subject, Object, and Topic in American Sign Language. In: Li, Charles (ed.), Subject
and Topic. New York: Academic Press, 125⫺148.
Glück, Susanne/Pfau, Roland
1998 On Classifying Classification as a Class of Inflection in German Sign Language. In:
Cambier-Langeveld, Tina/Lipták, Anikó/Redford, Michael (eds.), ConSole VI Proceed-
ings. Leiden: SOLE, 59⫺74.
Greenberg, Joseph H. (ed.)
1966 Universals of Language. Second Edition. Cambridge, MA: MIT Press.
Hale, Ken
1983 Warlpiri and the Grammar of Non-configurational Languages. In: Natural Language
and Linguistic Theory 1, 5⫺47.
Hendriks, Bernadet
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD
Dissertation, University of Amsterdam. Utrecht: LOT.
Hopper, Paul J./Thompson, Sandra A.
1984 The Discourse Basis for Lexical Categories in Universal Grammar. In: Language 60(4),
703⫺752.
Jackendoff, Ray S.
1990 Semantic Structures. Cambridge, MA: MIT Press.
Janzen, Terry
1998 Topicality in ASL: Information Ordering, Constituent Structure, and the Function of
Topic Marking. PhD Dissertation, University of New Mexico.
Janzen, Terry
1999 The Grammaticization of Topics in American Sign Language. In: Studies in Language
23(2), 271⫺306.
Janzen, Terry
2005 Perspective Shift Reflected in the Signer’s Use of Space. CDS Monograph No. 1. Dublin:
Centre for Deaf Studies, School of Linguistic, Speech and Communication Sciences.
Janzen, Terry/O’Dea, Barbara/Shaffer, Barbara
2001 The Construal of Events: Passives in American Sign Language. In: Sign Language Stud-
ies 1(3), 281⫺310.
Johnston, Trevor/Vermeerbergen, Myriam/Schembri, Adam/Leeson, Lorraine
2007 ‘Real Data are Messy’: Considering Cross-linguistic Analysis of Constituent Ordering
in Auslan, VGT, and ISL. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.),
Visible Variation. Comparative Studies on Sign Language Structure. Berlin: Mouton de
Gruyter, 163⫺206.
Kegl, Judy A./Neidle, Carol/MacLaughlin, Dawn/Hoza, Jack/Bahan, Ben
1996 The Case for Grammar, Order and Position in ASL: A Reply to Bouchard and Dubuis-
son. In: Sign Language Studies 90, 1⫺23.
Kimmelman, Vadim
2011 Word Order in Russian Sign Language: An Extended Report. In: Linguistics in Amster-
dam 4. [http://www.linguisticsinamsterdam.nl/]
Kisch, Shifra
2008 “Deaf Discourse”: The Social Construction of Deafness in a Bedouin Community. In:
Medical Anthropology 27(3), 283⫺313.
Kiss, Katalin É. (ed.)
1995 Discourse Configurational Languages. Oxford: Oxford University Press.
Kiss, Katalin É.
2002 The Syntax of Hungarian. Cambridge: Cambridge University Press.
Langacker, Ronald W.
1991 Foundations of Cognitive Grammar, Vol. II: Descriptive Applications. Stanford, CA:
Stanford University Press.
LaPolla, Randy J.
1995 Pragmatic Relations and Word Order in Chinese. In: Downing, Pamela/Noonan, Mich-
ael (eds.), Word Order in Discourse. Amsterdam: Benjamins, 297⫺330.
Leeson, Lorraine
2001 Aspects of Verbal Valency in Irish Sign Language. PhD Dissertation, Trinity College Du-
blin.
Leeson, Lorraine/Saeed, John I.
2007 Conceptual Blending and the Windowing of Attention in Irish Sign Language. In: Ver-
meerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed
Languages: Form and Function. Amsterdam: Benjamins, 55⫺72.
Leeson, Lorraine/Saeed, John I.
2012 Irish Sign Language. Edinburgh: Edinburgh University Press.
Li, Charles N./Thompson, Sandra A.
1976 Subject and Topic: A New Typology of Language. In: Li, Charles N. (ed.), Subject and
Topic. New York: Academic Press, 457⫺490.
Liddell, Scott K.
1977 An Investigation into the Structure of American Sign Language. PhD Dissertation, Uni-
versity of California, San Diego.
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Massone, María Ignacia/Curiel, Mónica
2004 Sign Order in Argentine Sign Language. In: Sign Language Studies 5(1), 63⫺93.
Meir, Irit
1998 Thematic Structure and Verb Agreement in Israeli Sign Language. PhD Dissertation,
The Hebrew University of Jerusalem.
Milković, Marina/Bradarić-Jončić, Sandra/Wilbur, Ronnie
2006 Word Order in Croatian Sign Language. In: Sign Language & Linguistics 9(1/2), 169⫺
206.
Miller, Chris
1994 Simultaneous Constructions in Quebec Sign Language. In: Brennan, Mary/Turner, Gra-
ham H. (eds.), Word Order Issues in Sign Language. Durham: ISLA, 89⫺112.
Mithun, Marianne
1987 Is Basic Word Order Universal? In: Tomlin, Russell S. (ed.), Coherence and Grounding
in Discourse. Amsterdam: Benjamins, 281⫺328.
Nadeau, Marie/Desouvrey, Louis
1994 Word Order in Sentences with Directional Verbs in Quebec Sign Language. In: Ahl-
gren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Struc-
ture: Papers from the Fifth International Symposium on Sign Language Research, Vol. 1.
Durham: International Sign Linguistics Association, 149⫺158.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G.
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Nilsson, Anna-Lena
2010 Studies in Swedish Sign Language. Reference, Real Space Blending, and Interpretation.
PhD Dissertation, Stockholm University.
Nyst, Victoria
2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, Uni-
versity of Amsterdam. Utrecht: LOT.
Pensalfini, Robert
2003 A Grammar of Jingulu: An Aboriginal Language of the Northern Territory. Canberra:
Pacific Linguistics.
Perniss, Pamela M.
2007 Space and Iconicity in German Sign Language (DGS). PhD Dissertation, Max-Planck
Institute for Psycholinguistics, Nijmegen.
Perniss, Pamela/Pfau, Roland/Steinbach, Markus
2007 Can’t You See the Difference? Sources of Variation in Sign Language Structure. In:
Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative
Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 1⫺34.
Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.)
2007 Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de
Gruyter.
Pizzuto, Elena/Pietrandrea, Paola
2001 The Notation of Signed Texts. In: Sign Language & Linguistics 4(1/2), 29⫺45.
Quadros, Ronice Müller de
1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifícia Universidade
Católica do Rio Grande do Sul.
Risler, Annie
2007 A Cognitive Linguistic View of Simultaneity in Process Signs in French Sign Language.
In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simultaneity in
Signed Languages: Form and Function. Amsterdam: Benjamins, 73⫺101.
Saeed, John I./Sutton-Spence, Rachel/Leeson, Lorraine
2000 Constituent Order in Irish Sign Language and British Sign Language ⫺ A Preliminary
Examination. Poster Presented at the 7th International Conference on Theoretical Issues
in Sign Language Research (TISLR), Amsterdam.
Sallandre, Marie-Anne
2007 Simultaneity in French Sign Language Discourse. In: Vermeerbergen, Myriam/Leeson,
Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed Languages: Form and Function.
Amsterdam: Benjamins, 103⫺125.
Sandler, Wendy/Meir, Irit/Padden, Carol/Aronoff, Mark
2005 The Emergence of Grammar: Systematic Structure in a New Language. In: Proceedings
of the National Academy of Sciences 102(7), 2661⫺2665.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Shaffer, Barbara
2004 Information Ordering and Speaker Subjectivity: Modality in ASL. In: Cognitive Lin-
guistics 15(2), 175⫺195.
Sze, Felix Y.B.
2003 Word Order of Hong Kong Sign Language. In: Baker, Anne/Bogaerde, Beppie van den/
Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign Language Research (Se-
lected Papers from TISLR 2000). Hamburg: Signum, 163⫺191.
Taub, Sarah
2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cam-
bridge: Cambridge University Press.
Taub, Sarah/Galvan, Dennis
2001 Patterns of Conceptual Encoding in ASL Motion Descriptions. In: Sign Language Stud-
ies 1(2), 175⫺200.
Thorvaldsdottir, Gudny
2007 Space in Icelandic Sign Language. MPhil Dissertation, School of Linguistic, Speech and
Communication Sciences, Trinity College Dublin.
Tomlin, Russell S.
1986 Basic Word Order: Functional Principles. London: Croom Helm.
Valli, Clayton/Lucas, Ceil/Mulrooney, Kristin J.
2006 Linguistics of American Sign Language. Fourth Edition. Washington, DC: Gallaudet
University Press.
Vermeerbergen, Myriam
1998 Word Order Issues in Sign Language Research: A Contribution from the Study of
Flemish-Belgian Sign Language. Paper Presented at the 6th International Conference on
Theoretical Issues in Sign Language Research (TISLR), Washington, DC.
Vermeerbergen, Myriam/Leeson, Lorraine
2011 European Sign Languages ⫺ Towards a Typological Snapshot. In: Auwera, Johan van
der/Kortmann, Bernd (eds.), Field of Linguistics: Europe. Berlin: Mouton de Gruyter,
269⫺287.
Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.)
2007 Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins.
Vermeerbergen, Myriam/Van Herreweghe, Mieke/Akach, Philemon/Matabane, Emily
2007 Constituent Order in Flemish Sign Language (VGT) and South African Sign Language
(SASL). In: Sign Language & Linguistics 10(1), 25⫺54.
Volterra, Virginia/Laudanna, Alessandro/Corazza, Serena/Radutzky, Elena/Natale, Francesco
1984 Italian Sign Language: The Order of Elements in the Declarative Sentence. In: Loncke,
Filip/Boyes-Braem, Penny/Lebrun, Yvan (eds.), Recent Research on European Sign
Language. Lisse: Swets and Zeitlinger, 19⫺48.
Wilbur, Ronnie B.
1987 American Sign Language: Linguistic and Applied Dimensions. Second Edition. Boston,
MA: Little Brown.
Wilcox, Phyllis
2000 Metaphor in American Sign Language. Washington, DC: Gallaudet University Press.
Wilcox, Sherman
2004 Cognitive Iconicity: Conceptual Spaces, Meaning and Gesture in Sign Languages. In:
Cognitive Linguistics 15(2), 119⫺147.
Woll, Bencie
2003 Modality, Universality and the Similarities Across Sign Languages: An Historical Per-
spective. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-
linguistic Perspectives in Sign Language Research. Selected Papers from TISLR 2000.
Hamburg: Signum, 17⫺27.

Lorraine Leeson and John Saeed, Dublin (Ireland)

13. The noun phrase


1. Introduction
2. Characteristics of this modality with consequences for noun phrase structure
3. What’s in a noun phrase? A closer look inside
4. Number: expression of plurality
5. DP-internal word order
6. Conclusion
7. Literature

Abstract

This chapter considers, within the context of what is attested crosslinguistically, the struc-
ture of the noun phrase (NP) in American Sign Language (ASL). This includes discus-
sion of the component parts of the noun phrase and the linear order in which they occur.
The focus here is on certain consequences for the organization and properties of ASL
noun phrases that follow from the possibilities afforded by the visual-gestural modality;
these are therefore also typical, in many cases, of noun phrases in other sign languages,
as well. In particular, the use of space for expression of information about reference,
person, and number is described, as is the use of the non-manual channel for conveying
linguistic information. Because of the organizational differences attributable to modality,
there are not always direct equivalents of distinctions that are relevant in spoken vs. sign
languages, and controversies about the comparative analysis of certain constructions are
also discussed.

1. Introduction
We use the term ‘noun phrase’ (or ‘NP’) to refer to the unit that contains a noun and
its modifiers, although, following Abney (1987) and much subsequent literature, these
phrases would be analyzed as a projection of the determiner node, and therefore, more
precisely, as determiner phrases (DPs). The NP in American Sign Language (ASL)
has the same basic elements and hierarchical structure as in other languages. There
are, however, several aspects of NP structure in sign languages that take advantage of
possibilities afforded by the visual-gestural modality. Some relevant modality-specific
characteristics are discussed in section 2. Section 3 then examines more closely the
components of NPs in ASL, restricting attention to singular NPs. Expression of number
is considered in section 4. Section 5 then examines the basic word order of these
elements within the NP.

2. Characteristics of this modality with consequences for noun phrase structure

2.1. Use of space to express person and reference in ASL and other sign languages

Sign languages generally associate referents with locations in the signing space. For
referents physically present, their actual locations are used; first- and second-persons
are associated spatially with the signer and addressee, respectively. Referential loca-
tions are established in the signing space for non-present third-person referents. See
Neidle and Lee (2006) for review of ASL person distinctions, a subject of some contro-
versy; on use of referential space for present vs. non-present referents, see e.g. Lid-
dell (2003).
The use of space provides a richer system for referential distinctions than the person
distinctions typical of spoken languages. Although referential NPs crosslinguistically
are generally assumed in the syntactic literature to contain abstract referential features,
sign languages are unique in enabling overt morphological expression of referential
distinctions through association of distinct third-person referents with specific locations
in the signing space (Kegl 1976 [2003]).
Moreover, in contradistinction to spoken languages, sign languages include referen-
tial features among the phi- (or agreement) features, i.e. those features that can be
expressed morphologically on multiple elements in an agreement relationship, either
within the NP (a phenomenon frequently described as ‘concord’) or between an NP
and other sentential elements that agree syntactically with the NP.

a. Definite determiner; b. Pronoun; c. Possessive; d. Reflexive; e. Verb agreement ⫺ give: start and end positions
Fig. 13.1: Use of spatial locations associated with person and reference in ASL

For this reason, we
refer to these referentially significant locations as phi-locations (following Neidle and
Lee 2006), although not all spatial locations ⫺ and not even all types of deictic ges-
tures ⫺ that are used in signing take on this kind of referential significance. Space is
used for many distinct functions in sign languages.
ASL determiners, pronominals, possessives, and reflexives/intensifiers are produced
by pointing to these phi-locations (for pronouns, see chapter 11). As seen in Fig-
ure 13.1, different hand shapes distinguish these functions (a phenomenon also found
in other sign languages, cf. Bos 1989; Sutton-Spence/Woll 1999; Tang/Sze 2002; Eng-
berg-Pedersen 2003; Alibašić Ciciliani/Wilbur 2006; Johnston/Schembri 2007; Hatzo-
poulou 2008; Hendriks 2008). These referential locations are also accessed to mark
agreement with NPs by other sentential elements. Although not all classes of verbs
can occur with overt agreement morphology (see chapter 7 on verb agreement), agree-
ing verbs such as give, shown in Figure 13.1e, have start and end points that correspond
to the phi-locations of the subject and object, respectively. We return to these elements
in section 3.

2.2. NP-internal agreement relations in sign and spoken languages

The elements enumerated in section 2.1 and illustrated in Figure 13.1 for ASL corre-
spond to those that have been observed crosslinguistically to enter into agreement
relations by virtue of the potential for morphological expression of matching phi-fea-
tures. However, there has been some contention that sign languages do not exhibit
‘agreement’. In particular, Liddell (e.g., 2000a,b) has pointed to modality differences ⫺
such as the fact that the locations in the signing space used referentially do not consti-
tute a finite set of discrete elements, and that such referential features do not enter
into agreement in spoken languages ⫺ to suggest that agreement is not involved here.
However, even spoken languages differ in terms of the specific features that partake
in agreement/concord relations. In some but not all languages, gender agreement or
concord may be found; there may be agreement or concord with person and/or num-
ber. Person features themselves are also indexicals, as pointed out by Heim (2008, 37):
“they denote functions defined with reference to an utterance context that determines
participant roles such as speaker and addressee.” What is unusual in sign languages ⫺
attributable to the richness afforded by the use of space for these purposes ⫺ is the
greater potential for expression of referential distinctions. (Number features, more lim-
ited in this respect in ASL, are considered in section 4.) However, the matching of
features among syntactic elements is of essentially the same nature as in other agree-
ment systems. Thus, we analyze the uses of phi-locations as reflexes of agreement.

2.3. Non-manual expression of syntactic information

There are also cases in which these same phi-locations that manifest agreement in
manual signing may be accessed non-manually. The use of facial expressions and head
gestures to convey essential syntactic information, such as negation and question status,
is well documented (for ASL, see, e.g., Baker/Padden 1978; Liddell 1980; Baker-Shenk
1983, and Neidle 2000 for discussion and other references). Such expressions play a
critical role in many aspects of the grammar of sign languages, but especially with
respect to conveying certain types of syntactic information (see also Sandler and Lillo-
Martin (2006), who consider these to be prosodic in nature, cf. also chapter 4 on pros-
ody). Generally these non-manual syntactic markings occur in parallel with manual
signing, frequently extending over the logical scope of the syntactic node (functional
head) that contains the features expressed non-manually (Neidle et al. 2000).
There are cases where phi-features can also be expressed non-manually, most often
through head tilt or eye gaze pointing toward the relevant phi-locations. Lip-pointing
toward phi-locations is also used in some sign languages (Obando/Elena 2000 discussed
Nicaraguan Sign Language). Neidle et al. (2000), Bahan (1996), and MacLaughlin
(1997) described cases in which head tilt/eye gaze can display agreement within both
the clause (with subject/object) and the NP (with the possessor/main noun), displaying
interesting parallels. Thompson et al. (2006) presented a statistical analysis of the fre-
quency of eye gaze in data collected with an eye tracker, which purports to discon-
firm this proposal; however, they seriously misrepresent the analysis and its predictions.
Further investigation (Neidle/Lee 2006) revealed that the manifestation of agree-
ment through head tilt/eye gaze (Bahan 1996; MacLaughlin 1997; Neidle et al. 2000)
is not semantically neutral, but rather is associated with focus. Thus it would appear
that what is involved in this previously identified construction is a focus marker instan-
tiated by non-manual expression of the subject and object agreement features.

3. What’s in a noun phrase? A closer look inside


In this section, elements that make up the NP will be discussed. The analysis of some
of these elements has been a subject of controversy in the literature, and in some
rather surprising cases, such as the expressions of plurality, comprehensive descriptions
have been lacking. For discussion of word order within NP, we crucially restrict atten-
tion to elements occurring within the NP and in the canonical word order. ASL exhibits
some flexibility with respect to word order: there are many constructions in which
deviations from the base word order occur, as is frequently recoverable from prosodic
cues. Those marked orders are excluded from consideration here, as this chapter seeks
to describe the basic underlying word order within NP.

3.1. Determiners ⫺ definite vs. indefinite ⫺ and adverbials

Pointing to a location in the signing space can be associated with a range of different
functions, including several discussed here as well as the expression of adverbials of
location. We gloss this as ix since it generally involves the index finger. (In some very
specific situations, the thumb can be used instead. A different hand shape, an open
hand, can also be used for honorifics.) Subscripts are used to indicate person (first,
second, or third) and potentially a unique phi-location, so that, for example, ix3i and
poss3i (the possessive marker shown in Figure 13.1c) would be understood to involve
the same phi-location; the use of the same subscript for both marks coreference. This
multiplicity of uses of pointing has, in some cases, confounded the analysis of pointing
gestures, since if different uses are conflated, then generalizations about specific func-
tions are obscured. Bahan et al. (1995) and MacLaughlin (1997) have argued that the
prenominal ix is associated with definiteness and functions as a determiner, whereas
the postnominal ix is adverbial and does not display a definiteness restriction.
Previous accounts had generally treated the occurrences of prenominal and post-
nominal indexes as a unified phenomenon. There has been disagreement about
whether sign languages have determiners at all, although it has been suggested that
these indexes might be definite determiners (Wilbur 1979) or that they are some kind
of determiner but lacking any correlation with definiteness (Zimmer/Patschke 1990).
However, analysis focusing on prenominal indexes reveals not only a correlation with
definiteness, but also a contrast between definite and indefinite determiners.
An NP in ASL can contain a prenominal or postnominal ix, or both. In the construc-
tion in (1), the DP includes both a prenominal determiner and a postnominal adverbial,
not unlike the French or Norwegian constructions shown in (2) and (3).

(1) [ ix3i man ixloci ]DP arrive [ASL]
‘The/that man there is arriving.’
(2) [ cet homme-là ] [French]
‘that man there’
(3) [ den mannen der ] [Norwegian]
‘that man there’

The ASL prenominal and postnominal ix, although frequently very similar in form,
are nonetheless distinguishable, in terms of:

A. Articulatory restrictions. The determiner, occurring prenominally, has a fixed path
length, whereas the postnominal index can be modified iconically to depict aspects of
the location (e.g., distance, with longer distances potentially involving longer path
length). This is consistent with other grammatical function words having a frozen form
relative to related adverbials. (Compare the ASL modal future, which has a relatively
fixed path length, with the range of articulations allowed for the related temporal
adverbial meaning ‘in the future’; for the latter, distance in the future can be expressed
iconically through longer or shorter path movements, as discussed in, e.g., Neidle et al.
(2000, 78).) Thus there is a contrast in accepta-
bility between (4) and (5).

(4) [ ix3i man ixloc“over there” ]DP know president [ASL]
‘The/that man over there knows the president.’
(5) * [ ixloc“over there” man ix3i ]DP know president

B. Potential for distinct plural form. Only the prenominal ix can be inflected for plural
in the way to be discussed in section 4. This is shown by the following examples (from
MacLaughlin 1997, 122).

(6) [ ixplural-arc man ixloc“over there” ]DP know president [ASL]
‘The/those men over there know the president.’
(7) * [ ixplural-arc man ixplural-arc ]DP know president
(8) * [ ixloc“over there” man ixplural-arc ]DP know president

C. Semantic interpretation. The definiteness restriction, to be discussed in the next
subsection, is found only with the prenominal ix. Compare examples (9) and (10)
below (from MacLaughlin 1997, 117). Sentence (9) is infelicitous unless the man has
previously been introduced into the discourse.

(9) [ ix3i man ]DP arrive [ASL]
‘The/that man is arriving.’
* ‘A man is arriving.’
(10) [ man ixloci]DP arrive
‘A/the man there is arriving.’

Although the postnominal index is compatible with an indefinite reading, the prenomi-
nal index is not.

3.1.1. Correlation with definiteness

Since expression of the definite determiner in ASL necessarily identifies reference
unambiguously, the packaging of information is such that this determiner carries refer-
ential features, features of a kind not associated with definite articles in spoken lan-
guages. Giusti (2002) argues, for example, that referential features are associated with
prenominal possessives and demonstratives (also intrinsically definite), but not with
definite articles. By this definition, the definite determiner in ASL would be catego-
rized as a demonstrative. There is also, however, an ASL sign glossed as that, shown
in Figure 13.2, with a somewhat restricted usage, functioning to refer back pronomi-
nally to entities previously established in the discourse, or to propositions (which can-
not normally be designated by ix). This sign does not often occur prenominally within
the noun phrase, as in ‘that man’, although this is sometimes found (possibly as a
result of English influence). This sign that can also be contracted with one, to give a
sign glossed as that^one, also used pronominally.

Fig. 13.2: that

In usage, the determiner ix is somewhat intermediate between the definite article
and demonstrative of English. In fact, an NP such as ‘ix man’ might be optimally
translated into English sometimes as ‘the man’ and at other times as ‘that man’. Since
expression of the ASL determiner ix necessarily incorporates referential information,
it can only be used when the NP referred to is referential, which excludes its use with
generics. (As De Vriendt and Rasquinet 1989 observed, sign languages generally do
not make use of determiners in generic noun phrases.) Furthermore, it can only be
used for referents that already have been associated in the discourse with a phi-loca-
tion. Thus, the use of ix in ASL is more restricted than the use of definite articles.
Furthermore, the definite determiner ix is not required within a definite NP. This is a
significant difference when compared with definite articles in spoken languages that
have them. Van Gelderen (2007) has an interesting discussion of the transition that
has occurred in many languages whereby demonstratives (occurring in specifier posi-
tion) came to be reanalyzed as definite articles (occurring in the head of DP). The
exact status of these definite determiners in ASL is unclear; it is possible that such a
transition is in progress.
However, like articles in spoken languages (postulated to occur in the head deter-
miner of a DP), in ASL the determiner ix carries overt inflection for the nominal phi-
features of the language (and in sign languages, these include referential features).
Also like articles, the definite determiner is often produced without phonological stress
and can be phonologically cliticized to the following sign. A stressed articulation of the
prenominal ix is, however, possible; it forces a demonstrative reading.
So although ASL does not have exact equivalents of English definite articles or
demonstratives, it does have a determiner that (1) is correlated with definiteness;
(2) occurs, in the canonical surface word order of noun phrases, prenominally; (3) can
be phonologically unstressed and can cliticize to the following sign; (4) bears overt
agreement inflection; (5) is identical in form to pronouns (as discussed in section 3.2);
and (6) occurs in complementary distribution with elements analyzed as occurring in
the head of the DP (discussed below).

3.1.2. Distinction between definite and indefinite determiners

Fig. 13.3: Spatial distinction between reference: definite (point) vs. indefinite (region).
Panels: (a) definite determiner (or pronoun); (b) indefinite determiner (or pronoun);
(c) give (him), start and end positions; (d) give (someone), start and end positions

Fig. 13.4: Indefinite determiner vs. numeral ‘one’

Unlike the definite determiner, which accesses a point in space, the indefinite deter-
miner in ASL involves articulatory movement within a small region. This general dis-
tinction between definiteness and indefiniteness in ASL, the latter being associated
with a region larger than a point, was observed by MacLaughlin (1997). Figure 13.3
illustrates the articulation of the definite vs. indefinite determiner. The latter, glossed
as something/one (because when used pronominally, it would be translated into Eng-
lish as either ‘something’ or ‘someone’), is articulated with the same hand shape as the
definite determiner, but with the index finger pointed upward and palm facing the
signer; there is a small back and forth motion of the hand, the degree of which can
vary with the degree of unidentifiability of the referent. The lack of certainty about
the identity of the referent is also expressed through a characteristic facial expression
illustrated in Figure 13.3, involving tensed nose, lowered brows, and sometimes also
raising of the shoulders.
When the referent is specific but indefinite (e.g., ‘I want to buy a book’ in a situation
where I know which book I want to buy, but you don’t), the sign is articulated as an
unstressed version of the numeral one (also illustrated in Figure 13.4), i.e., without the
shaking of the hand and head and without the facial expression of uncertainty. There
are many languages in which the indefinite article is an unstressed form of the numeral
‘one’ (e.g., Dutch, Greek, French, Spanish, Italian, and even English, historically,
among many others). As with indefinite articles in other languages, the sign glossed as
something/one also has a quantificational aspect to its meaning.

A similar definiteness distinction is found in verbal agreement with the receiver
argument of the verb give: compare the end hand shape of the verb give when used
to mean ‘give him’ versus ‘give someone’ (with fingers spread, pointing to an area of
space larger than the point represented by the fingers coming together), also illustrated
in Figure 13.3. Although this kind of marking of indefiniteness as part of manual object
agreement is rare, it provides support for this spatial correlation with (in)definiteness.
Bahan (1996, 272⫺273) also observed that when eye gaze marks agreement, the gaze
used with specific vs. non-specific NPs differs, the former involving a direct gaze to the
phi-location, the latter, a somewhat darting gaze generally upward.
As with definite determiners in definite NPs, the indefinite determiner is not re-
quired in an indefinite NP, as shown in (11).

(11) [ (something/one) man ]DP arrive [ASL]
‘A man is arriving.’

Finally, like definite determiners (4), indefinite determiners can also occur with a post-
nominal adverbial index (see (1) and (12)).

(12) [ (something/one) man ixloc“over there” ]DP arrive [ASL]
‘A man over there is arriving.’

3.1.3. Analysis of noun phrases in ASL as determiner phrases

Work by Bahan, Lee, MacLaughlin, and Neidle adopts the standard assumption of
the current theoretical literature that the determiner (D) is the head of a DP projec-
tion, with the NP occurring as a complement of D. The D head is the locus for the
agreement features that may be realized by a lexical element occupying that node,
such as a definite determiner. (It is also possible that in ASL, ix ⫺ when functioning
as a demonstrative (if demonstrative and non-demonstrative uses are structurally dis-
tinct) ⫺ might be analyzed as occurring in the specifier of DP. This is left as an area
for future research.) Other elements that may occupy this node will be discussed in
the next subsections, including pronouns and the possessive marker glossed as poss.
Determiners are in complementary distribution with those elements.
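
Schematically, on this analysis, a determiner phrase such as ix man ‘the/that man’
would have a structure along the following lines (a simplified bracketing of the as-
sumptions just outlined, in the notation used in (22) and (23) below):

[ [ ix3i [ man ]NP ]D’ ]DP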

3.1.4. Non-manual expression of phi-features

The phi-features associated with the D node can be (but are not always) manifested
non-manually by head tilt or eye gaze or both toward the relevant phi-location. This
can occur simultaneously with the articulation of the determiner, or these non-manual
expressions can spread over the rest of the DP (i.e., over the c-command domain of
D). See MacLaughlin (1997, chapter 3) for further details, including ways in which phi-
features can be expressed non-manually in possessive and non-possessive DPs, display-
ing parallelism with what can occur in transitive and intransitive clauses (Bahan 1996).
It is also possible for the non-manual expression of those phi-features to occur in lieu
of the manual articulation of the determiner. This can also occur with the pronominal
use of ix, as mentioned in section 3.2.2.

3.1.5. Summary

Thus ASL, and sign languages more generally, realize definite determiners by gestures
that involve pointing to the phi-locations associated with the main noun. Determiners
in ASL occur in prenominal position; there is also another use of ix ⫺ distin-
guishable in its articulatory possibilities from the definite determiner ⫺ in which the ix
expresses adverbial information and occurs at the end of the NP. Typical of determiners
occurring as head of DP, ix in ASL manifests overt inflection for phi-features (including
referential features, despite the fact that such features are not included among phi-
features in spoken languages). Definite determiners in ASL are also often phonologi-
cally unstressed and may cliticize phonologically to the following sign. As a result of
the fact that they necessarily incorporate referential information (given the deictic
nature of the articulation), definite determiners in ASL have a more restricted distribu-
tion than definite articles in spoken languages and may function as demonstratives
(with phonological stress forcing a demonstrative reading). ASL also has an indefinite
determiner related to the sign one. However, determiners are not required in definite
or indefinite noun phrases.

3.2. Pronouns

3.2.1. Relation to determiners

As previously mentioned, both the indefinite and definite determiner can be used
pronominally. Compare (9) and (11) with (13) and (14).

(13) ix3i arrive [ASL]
‘He/she/it is arriving.’
(14) something/one arrive
‘Someone is arriving.’

This is also common in other sign languages (e.g., Danish Sign Language (DSL) and
Australian Sign Language (Auslan), cf. Engberg-Pedersen 2003; Johnston/Schembri
2007, 271; see also chapter 11 on pronouns) as well as many spoken languages (dis-
cussed, e.g., in Uriagereka 1992). For example, the definite determiner and pronoun
are identical in form in the following Italian examples (Cardinaletti 1994, 199):

(15) La conosco [Italian]
(I) her know
(16) la ragazza
the girl

Since Postal’s (1966) proposal that pronouns are underlyingly determiners, a claim also
essential to Abney’s (1987) DP analysis, there have been several different proposals to
account for the parallelisms between pronouns and determiners, and for the different
types of pronouns found within and across languages in terms of categorical and/or
structural distinctions (e.g., Cardinaletti 1994; Déchaine/Wiltschko 2002). ASL has
strong pronouns (i.e., pronouns that have the same syntactic distribution as full NPs)
that are identical in form with determiners, and MacLaughlin analyzes them as occur-
ring in the head of the DP. The issue of whether there is a null NP occurring as a sister
to D within the subject DP of a sentence like (13) is left open by MacLaughlin.

3.2.2. Non-manual expressions of phi-features occurring with (or substituting for)
manually articulated pronouns

The phi-features associated with a (non first-person) pronoun can also be expressed
non-manually by eye gaze toward the intended phi-location. This has been referred to
as ‘eye-indexing’ (e.g., Baker/Cokely 1980). Eye gaze can suffice for pronominal refer-
ence, occurring in lieu of manual realization of the pronoun. Baker and Cokely observe
(1980, 214) that “[t]his eye gaze is often accompanied by a slight brow raise and a head
nod or tilt toward the referent,” that it is quite common for second-person reference,
and that it allows for discretion with third-person reference.

3.2.3. Consequences of overt expression in pronouns of referential features

The fact that in ASL (and other sign languages) pronouns are referentially unambigu-
ous is not without implications for syntactic constructions in which pronouns are in-
volved. For example, ASL makes productive use of right dislocation, as shown in (17):
an unstressed pronoun occurring sentence-finally and referring back to another NP
(overt or null) in the sentence. (This has been referred to as ‘subject pronoun copy’,
following Padden 1988, although not all constructions that have been described with
that term are, in fact, right dislocation, and right dislocation can occur as well with
non-subject arguments.) Moreover, the discourse conditions for use of right dislocation
appear to be similar in ASL and other languages in which it occurs productively, such
as French and Norwegian (Fretheim 1995; Gundel/Fretheim 2004, 188).

(17) j-o-h-n arrive ix3i [ASL]
‘John arrived, him.’

(18) Jean est arrivé, lui. [French]
‘John arrived, him.’

(19) Iskremen har jeg kjøpt, den. [Norwegian]
the.ice.cream have I bought it
‘I bought ice cream.’

Languages that make productive use of right dislocation typically also allow for the
possibility of a right-dislocated full NP, albeit serving a different function: to disambigu-
ate the pronoun to which it refers back, as shown for French in (21). However, given
that pronouns in ASL are unambiguous, this does not occur in ASL.

(20) * ix3i arrive j-o-h-n [ASL]
‘He arrived, John.’
(21) Il est arrivé, Jean. [French]
‘He arrived, John.’

Rather than concluding from the ungrammaticality of (20) that ASL lacks right disloca-
tion entirely (as does e.g. Wilbur 1994), we view the absence of disambiguation by full
NP right-dislocation in ASL as a predictable consequence of the fact that referential
information is overtly expressed by ASL pronouns.

3.3. Possessives

The possessive marker is articulated in ASL with an open palm pointing toward the
phi-location of the possessor. British Sign Language (BSL) and related sign languages
use the closed fist to refer to possession that is or could be temporary, and ix for
permanent possession (Sutton-Spence and Woll 1999). For a typological survey of pos-
sessive and existential constructions in sign languages, see Zeshan (2008). When the
possessor is indefinite (and not associated with any phi-location), a neutral form of the
possessive marker is used, with the hand pointing toward a neutral (central) position
in the signing space.
Syntactically, we analyze this possessive marker, glossed as poss, as occurring in the
head D of the DP, and it can ⫺ but need not ⫺ co-occur with a possessor (a full DP)
in the specifier position of the larger DP. This is illustrated in examples (22) and (23).

(22) [ j-o-h-n [ poss3i [friend]NP ]D’ ]DP [ASL]
‘John’s friend’
(23) [ [ poss3i [friend]NP ]D’ ]DP
‘his friend’

It is occasionally possible (especially with kinship relations or inalienable possession)
to omit the poss sign, as shown in (24) and (25).

(24) j-o-h-n (poss3i) mother [ASL]
‘John’s mother’
(25) j-o-h-n (poss3i) leg
‘John’s leg’

When the possessive occurs without an overt ‘possessee’, it typically occurs in a redu-
plicated form: two quick movements, rather than one, of the open palm toward the
phi-location. As also observed by MacLaughlin (1997), this is one typical effect of the
phonological lengthening that occurs in constituent- or sentence-final position (Gros-
jean 1979; Coulter 1993) or in a position immediately preceding a deletion site or a
syntactically empty node. There have been several studies of the effects of prosodic
prominence and syntactic position on sign production (e.g., Coulter 1990, 1993; Wilbur
1999). ASL has phonological lengthening in environments similar to those in which it
has been attested in spoken languages (Cooper/Paccia-Cooper 1980; Selkirk 1984).
Liddell observed, for example, that a head nod accompanying manual material is often
found in constructions involving gapping or ellipsis (Liddell 1980, 29⫺38). Phonologi-
cal reduplication appears to be another such process that is more likely in contexts
where phonological lengthening is expected, i.e., constituent-final position and syntac-
tic positions immediately preceding null syntactic structures. (Frishberg 1978, for exam-
ple, observed a diachronic change that affected a number of ASL compounds: when
one element of a compound was lost over time, there was a compensatory lengthening
of the remaining part that took the form of reduplication. So beak+wings evolved
into a reduplicated articulation of just the first part of that original compound, giving
the current sign for ‘bird’.)
It is presumably not a coincidence that when one finds poss in a DP-final position ⫺
in a situation where there is either (a) no overt head noun, or (b) a marked word order
in which the poss marker follows the main noun ⫺ poss is generally reduplicated, as
indicated by ‘+’ in the glosses in (26) and (27).

(26) Context: ‘Whose book is that?’ [ASL]
Reply: poss1+
‘Mine.’
(27) ix2 prefer [car poss1+]
‘You prefer my car.’

Similar reduplication is possible with other DP-internal elements, as will be discussed
below. Interestingly, when ix3 is used as a personal pronoun, i.e., when it occurs as the
sole overt element within a DP, it is not reduplicated. However, when it is used as a
demonstrative without an overt head, with the meaning ‘that one’, the pointing gesture
typically is reduplicated (providing some evidence that the ix may occupy distinct struc-
tural positions in the two cases; cf. section 3.1.3):

(28) Context: ‘Which one would you like?’ [ASL]
Reply: ix3i+
‘That one.’
(29) Context: ‘Who do you like?’
Reply: ix3i
‘Him.’

3.4. Reflexives

As shown in Figure 13.1d, the reflexive is articulated with the thumb facing upward,
thumb pad pointing to the phi-location of its antecedent. (For first-person, the orienta-
tion is variable: the pad of the thumb can either be facing toward or away from the
signer as the hand makes contact with the signer’s chest.)
A reflexive can be used either pronominally (30) as an argument coreferential with
an NP antecedent, or as an intensifier, as in (31) and (32).

(30) j-o-h-n hurt self3i [ASL]
‘John hurt himself.’
(31) j-o-h-n self3i arrive
‘John himself is arriving.’
(32) j-o-h-n write story self3i
‘John is writing the story himself.’

When self occurs in a prosodically prominent environment, it can also be produced
with a reduplicated motion, of the kind just described for possessive pronouns.
Crosslinguistically, there is a distinction between simplex reflexives, such as se in
French or seg in Norwegian, and morphologically complex reflexives found in, e.g.,
English (him+self and her+self) or Auslan (composed of the personal pronoun fol-
lowed by the sign self (Johnston/Schembri 2007)). See, for example, the discussion in
König and Siemund (1999). Although it might appear that ASL self is a simplex form,
it is in fact a morphological combination of the reflexive nominal self and the pronom-
inal phi-features. The ASL (pro-)self forms have the syntactic properties that tend to
characterize complex anaphors: notwithstanding claims to the contrary by Lillo-Martin
(1995) (refuted by Lee et al. 1997), they are locally ⫺ rather than long-distance ⫺
bound, and they are not restricted to subject antecedents.

3.5. Nouns and adjectives

The spatial location in which nouns and adjectives are articulated in ASL does not
typically convey referential information. However, there are some nouns and adjectives
(a relatively limited set) whose articulation can occur in, or oriented toward, the rele-
vant phi-location, as discussed by MacLaughlin (1997). So for example, a sign like
house or a fingerspelled name like j-o-h-n can be articulated in (or oriented in the
direction of) the phi-location of the referent. See chapter 4 of MacLaughlin (1997) for
more detailed description of nouns and adjectives that are articulated either in or
oriented toward the relevant phi-location. (See also Rinfret (2010) on the spatial asso-
ciation of nouns in Quebec Sign Language (LSQ).)

3.6. Other elements that are ⫺ and are not ⫺ found in ASL NPs

Given the availability of classifier constructions for rich expression of spatial relations
and motion, the use of prepositional phrases is more limited in sign than in spoken
languages, within both clauses and noun phrases. It is also noteworthy that nouns in
ASL do not take arguments (thematic adjectives or embedded clauses). Constructions
that would be expressed in other languages by complex NPs (e.g., ‘the fact that it
rained’) require paraphrases in ASL. The information conveyed by relative clauses in
languages like English can be expressed instead by use of correlatives ⫺ clauses that
occur in sentence-initial position, with a distinctive non-manual marking (traditionally,
if inappropriately, referred to as ‘relative clause’ marking) ⫺ rather than by clauses
embedded within NP arguments of the sentence. An example is provided in (33).

rc
(33) cat chase dog ix3i [ eat mouse ]IP [ASL]
‘The cat that chased the dog ate the mouse.’

The non-manual marking described by Liddell (1978) and labeled here as ‘rc’ includes
raised eyebrows, a backward tilt of the head, and “contraction of the muscles that raise
both the cheeks and the upper lip” (Liddell 2003, 54). Frequently non-manual markings
of specificity (e.g., nose wrinkle (Coulter 1978)) are also found. Note, however, that
Liddell’s (1977) claims about the syntactic analysis of relative clauses differ from what
is presented here. See, e.g., Cecchetto et al. (2006) and chapter 14 for discussion of
strategies for relativization in LIS. For further discussion about what can occur in the
left periphery in ASL, including correlative clauses, see Neidle (2003).

3.7. Summary

This section has surveyed some of the essential components of ASL NPs, restricting
attention to singular NPs. We have shown that person/reference features participate in
agreement relations within the noun phrase, and we have seen overt morphological
inflection instantiating these features in determiners, pronouns, possessive markers,
and reflexives. Predicate agreement with noun phrases, by verbs and adjectives (of the
appropriate morphological class), also involves morphological expression of these same
features. Section 4 examines expression of plurality within noun phrases. Section 5 then
considers the canonical word order of elements within the noun phrase.

4. Number: expression of plurality


The discussion of the spatial locations associated with referential information has, up
to this point, been restricted to noun phrases without any overt marking for plurality.
Grammatically, ASL noun phrases (and particular elements within them) do not bear
number features of singular vs. plural, but rather are generally either unmarked for
number (consistent with either singular or plural interpretations) or overtly marked
for plural. Pfau and Steinbach (2006) analyze the plural form as distinguished from the
singular by a Ø affix in the cases that are treated here as unmarked for number. Cases
where plurals are not explicitly marked as such have often been described in the ASL
literature as involving a plurality viewed as a collective (e.g., Padden 1988). For a more
detailed discussion of plurality, see chapter 6.

4.1. Use of space

When plurality is expressed through explicit number morphology ⫺ as it is produc-
tively for determiners, pronouns, reflexives, and those agreeing verbs that can be so
marked (although this is subject to certain restrictions) ⫺ the phi-location is generally
represented spatially as an arc (rather than a point), using the same hand shapes that
occur for the singular forms illustrated in Figure 13.1. The same general principles
discussed earlier apply with respect to the way in which these phi-locations are ac-
cessed. This is illustrated schematically in Figure 13.5 and by an example of a plural
ix (determiner or pronoun) in Figure 13.7. Plural object agreement, involving a final
articulation of the verb with a sweeping motion across the arc associated referentially
with the plural object, is shown in Figure 13.6. This can also interact with aspectual
markings such as distributive; see MacLaughlin et al. (2000) for details.

Fig. 13.5: Phi-locations used for unmarked vs. plural 3rd-person referent: point used
for a referent unmarked for number; arc used for a referent marked as plural

Fig. 13.6: Plural object agreement: movement at the end of the verb gift to agree
with a plural object

Fig. 13.7: Index articulated in an arc to indicate plural

Thus, when definite determiners, pronouns, possessives, reflexives, and agreeing
verbs are overtly marked for plural number, there is a sweeping motion between the
endpoints of the plural arc (rather than the pointing motion described in section 2)
but utilizing the same hand shapes as for the singular forms illustrated in Figure 13.1.
Thus, like the person features and referential features discussed earlier, number fea-
tures (and specifically, plurality), when present, also have a spatial instantiation; how-
ever, plurality is associated not with a point but with an arc-like region of space.

4.2. Marking of plurality on nouns and adjectives within noun phrases

There has not yet been a comprehensive account of plural formation in ASL, but Pfau
and Steinbach (2005, 2006) give a detailed overview of plural formation in Ger-
man Sign Language (DGS) and discuss modality-specific and typological aspects of the
expression of plural in sign languages. A few generalizations about the marking of
plurality on nouns in ASL are contained in Wilbur (1987) and attributed to the unpub-
lished Jones and Mohr (1975); Baker and Cokely (1980, 377) list sentence, language,
rule, meaning, specialty-field, area, room/box, house, street/way, and statue as al-
lowing an overt plural form formed by a kind of reduplication.

The kind of arc that is used for predicate agreement (e.g., for verbs or predicative
adjectives) can also mark plurality for a small class of nouns that originated as classifi-
ers, such as box, seen in Figure 13.8. However, most nouns that can be overtly marked
for plural ⫺ although this is still a limited set ⫺ are so marked through reduplicative
morphology.

Fig. 13.8: Plural of box, articulated along an arc

For example, the singular and plural of way are illustrated in Figure 13.9a; the latter
has a horizontal translation between the two outward movements. When a bisyllabic
singular form is pluralized, the resulting form does not increase in overall number of
syllables, but remains bisyllabic: consisting of a single syllable ⫺ reduced from the
singular form ⫺ which is reduplicated, with the horizontal translation characteristic of
non body-anchored signs. This is shown in Figure 13.9b for poster; the singular is
produced with two small outward movements at different heights relative to the signer,
whereas the plural involves two downward movements, separated by a horizontal trans-
lation, between the positions used for each of the two movements in the singular. Perry
(2005) examined the morphological classes for which plurals overtly marked in this
way are possible. She found some variation among ASL signers in terms of which signs
have distinct plural forms, as well as the exact form(s) that the plural could take. She
presented an Optimality Theoretic account of some of the principles that govern how
a reduplicated plural can be related to a mono- or bi-syllabic singular form. What is
perhaps surprising, however, is that use of the overtly plural form (even when a distinct
plural form exists) is not obligatory for a noun that is semantically plural. Whereas the
plural form of poster (articulated with a reduplicated motion) is unambiguously plural
in (35), (34) can be interpreted to refer to one or more posters.

Fig. 13.9: Unmarked (singular) vs. plural forms of sign (a) way and (b) poster:
(a) singular, one movement vs. plural, sequence of two movements; (b) singular, two
outward movements vs. plural, two downward movements

Fig. 13.10: cop signed with (above) and without (below) reduplication. Above: cop
(first articulation, then reduplication), from a sentence meaning ‘The cop pulled behind
the car …’; below: other followed by cop, from a sentence meaning ‘Another cop
pulled the car over.’

(34) ix1 like poster [ASL]
‘I like (a/the) poster(s).’
(35) ix1 like poster-pl
‘I like (the) posters.’

Moreover, consistent with the observation in section 3.3 that reduplication may be
correlated with prosodic prominence and length, the reduplicated (overtly plural) form
is more likely to be used in prosodically prominent positions (e.g., for constituent- or
sentence-final nouns, or those that receive stress associated with pragmatic focus).
These same conditions appear to correlate with the likelihood of use of reduplicated
forms for singulars that can optionally occur as reduplicated (e.g., cop, boy) (Neidle
2009). Compare the examples in Figure 13.10, taken from a story by Ben Bahan. In
the first, with prosodic prominence on cop, it is articulated with a reduplicated motion;
in the second, where the focus is on other, it is not.
Although almost all seemingly singular forms are simply unmarked for number (and
therefore compatible with either a singular or plural reading), there are a few cases of
a real distinction between singular and plural: e.g., child vs. children, person vs. peo-
ple. In such cases, the plural is irregular, in that it is not formed from the singular by
addition of regular reduplicative plural morphology. The difference in the behavior of
inherently singular nouns, as compared with nouns simply unmarked for plurality, will
be demonstrated in section 4.3.

4.3. Lack of concord

Within a noun phrase, an overt expression of plurality does not normally occur on
more than one element. As observed by Pfau and Steinbach (2006), there are also
other sign languages, including DGS, in which plurality can be overtly expressed only
once within a noun phrase, as in spoken Hungarian and Turkish. They note that not
all sign languages have restrictions on NP-internal number agreement (Hausa Sign
Language and Austrian Sign Language (ÖGS) do not). If there is some other semantic
indicator of plurality ⫺ e.g., a numeral or quantifier such as many, few, etc. ⫺ then
overt plural morphology on the main noun is superfluous. Similarly, if a plural NP
contains both a definite determiner and a noun that has distinct plural form, the plural-
ity is overtly marked on one or the other but not both, as illustrated by the following
noun phrases:

(36) [ many poster ] [ASL]
‘many posters’
(37) [ three poster ]
‘three posters’
(38) ?* [many poster-pl ]
‘many posters’
(39) ?* [ three poster-pl ]
‘three posters’
(40) [ ix3pl-arc poster ]
‘the/those posters’
(41) [ ix3i poster-pl ]
‘the/those posters’
(42) ?* [ix3pl-arc poster-pl ]
‘the/those posters’

In many spoken languages, grammatical number (singular vs. plural) is a phi-feature
that triggers obligatory agreement/concord within noun phrases. However, in ASL it
appears that, although there is the possibility of overtly marking plurality, multiple
indicators of plurality within a single noun phrase are not needed. Thus, when a noun
like poster is unmarked for number, it is consistent with either a singular or plural
interpretation, which can be determined contextually. In contrast, in (41), where there
is an overt morphological expression of plurality, the head noun, and therefore the
noun phrase as a whole, are marked as plural.
Overt plural marking on adjectives in ASL through reduplication is extremely rare.
However, at least one adjective, different, can bear plural (reduplicative) inflection.
Consistent with the above observation, overt marking for plurality normally occurs on
only one element within a noun phrase. In an NP such as [ different language ]
referring to ‘different languages’, one or the other of those elements can occur in the
plural form (with reduplication most likely on the element that is prosodically promi-
nent), but not both.

However, it is worth noting that, in ASL at least (although this appears to be differ-
ent from DGS, based on Pfau and Steinbach 2006, 170), there is not an absolute prohi-
bition against multiple expressions of plurality within an NP. An irregular plural form
such as children is related to a form child that is intrinsically singular. Thus a word
like many or three could only be followed by the plural form children, not by the
singular child, which would be semantically incompatible. This is true for other singu-
lar/plural pairs, in which the plural does not contain regular plural morphology (e.g.,
people). Compare the following phrases with those presented above:

(43) * [ many child ] [ASL]
‘many children’
(44) * [ three child ]
‘three children’
(45) [ many children]
‘many children’
(46) [ three children]
‘three children’
(47) * [ ix3pl-arc child ]
‘the/those children’
(48) [ ix3i children ]
‘the/those children’

Thus, some nouns in ASL have overt plurals, many (but not all) formed through regu-
lar plural inflection involving reduplication (e.g., way-pl, poster-pl). An even smaller
number of ASL nouns have forms that are intrinsically singular (e.g., child[sg], per-
son[sg]). Nouns unmarked for number (e.g., poster) are compatible with either singu-
lar or plural interpretations, subject to a strong preference to avoid redundant expres-
sion of plurality within NPs when it is possible to do so.

5. DP-internal word order


As has been observed going back to Greenberg’s (1963) ‘Universal 20’ ⫺ and as has
been subsequently considered within more recent theoretical frameworks (Hawkins
1983; Cinque 2005) and also within the context of sign language research (Zhang
2007) ⫺ when demonstratives, numerals, and adjectives occur prenominally (as they
do in their canonical order in ASL), they universally occur in this order: Demonstrative
> Numeral > Adjective > Noun. ASL is no exception. Worth noting, however, are
phenomena involving numeral incorporation, to be discussed below.

5.1. Expression of quantity


There is obligatory numeral incorporation of ix with numerals (which would have been
expected to occur immediately following ix). English phrases like ‘we three’ or ‘the
two of them’ are expressed by single signs in ASL (as described, for example, by Baker
and Cokely (1980, 370)). The supinated hand (i.e., palm up) with the numeral hand
shape 2 shakes back and forth between two referents; the hand shapes of 3, 4, or 5
circle once or twice in a movement either inclusive or exclusive of the signer. The sign
we-two when it includes the addressee is signed using a back and forth motion of the
2 hand shape. This kind of numeral incorporation has also been described for Croatian
Sign Language (HZJ), BSL, and other sign languages (e.g., Alibašić Ciciliani/Wilbur
2006; see also chapters 6 and 11).
It is also not uncommon for specific nouns to undergo incorporation with numerals
(which would otherwise have been expected to occur immediately before them); for
information about numeral incorporation in Argentine Sign Language (LSA) and Cat-
alan Sign Language (LSC), see Fuentes et al. (2010). In ASL the numerals 1 through
9 (smaller numbers doing this more commonly than larger ones) can be melded into
signs of time and money, for example: two-hours, three-days, four-weeks, five-dol-
lars, six-months, seven-years-old, time-eight (8:00), nine-seconds. Sutton-Spence
and Woll (1999) give examples of the same incorporation in BSL (£3, three-years-
old), with five being the highest numeral that can be incorporated (and this form is
rare compared to the lower numerals). For excellent examples and illustrations of the
use of numerals and quantifiers in various constructions, see also Numbering in Ameri-
can Sign Language (DawnSignPress 1998). ASL also has a variety of quantifiers,
although these will not be discussed here. See Boster (1996) for discussion of
possible variations in word order that have been claimed to occur
with quantifiers and numerals.
ASL can also make use of classifier constructions to convey notions of quantity (see
chapter 8). This can be done through classifiers that express a range of types of infor-
mation about such things as quantity, form, and spatial distribution of objects. There
are also cases where numerals incorporate with classifiers, giving rise to what have
been called ‘specific-number classifiers’ (Baker/Cokely 1980, 301), which represent a
specific number of people or animals through the use of the hand shapes corresponding
to numerals.

5.2. Ordering of adjectives


ASL has both prenominal and postnominal adjectives within the noun phrase, and
there are some significant differences between them. There may also be differences in
usage among signers, with some generational differences having been reported. Padden
(1988) reported a preference for postnominal adjectives; Gee and Kegl (1983) reported
that older signers showed a preference for postnominal adjectives.
Adjectives in ASL that occur predicatively (i.e., that are in the clause, but not con-
tained within a noun phrase) exhibit properties distinct from those that occur NP-
internally. As observed by MacLaughlin (1997), only predicative adjectives can inflect
for aspect and agreement in the same way that verbs can. As analyzed by MacLaughlin
(1997, 186):

Prenominal adjectives are … attributive modifiers, occurring in the specifier position of a
functional projection above NP (following Cinque 1994), while postnominal adjectives are
predicative modifiers, right-adjoined at an intermediate position in the DP projection.
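
These two positions can be sketched as follows (a highly simplified bracketing consis-
tent with this analysis, where FP stands for the functional projection hosting the
attributive adjective, with adjunction details omitted):

[ big [ ball ]NP ]FP (prenominal: attributive modifier in the specifier of FP)
[ [ ball ]NP red ] (postnominal: predicative modifier, right-adjoined)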

Prenominal (but not postnominal) adjectives in ASL are strictly ordered, and the order
is comparable to that found in English and discussed by Cinque (1994) as attested in
many languages. This is illustrated by MacLaughlin’s examples showing the contrast
between the prenominal adjective sequences in (49) and (50) and the postnominal
sequences in (51) and (52). When the adjectives occur prenominally, the NP is not
well-formed if red precedes big, whereas postnominally, either word order is allowed
(examples from MacLaughlin 1997, 193).

(49) [ big red ball ixadvi]DP beautiful [ASL]
‘The big red ball over there is beautiful.’
(50) * [ red big ball ixadvi]DP beautiful
(51) [ ball red big ixadvi]DP beautiful
‘The ball that is red and big over there is beautiful.’
(52) [ ball big red ixadvi]DP beautiful
‘The ball that is big and red over there is beautiful.’

Certain adjectives can only occur prenominally in canonical word order; for example:
basic, true/real, former. Other adjectives are interpreted differently when used pre-
nominally vs. postnominally, such as old (examples from MacLaughlin 1997, 196).

(53) [poss1 old friend]
‘my old friend’
(54) [poss1 friend old]
‘my friend who is old’

5.3. Summary

Sign languages are subject to the same general constraints on word order as spoken
languages. The relative canonical order of demonstratives, numerals, and adjectives
that occur prenominally in ASL is consistent with what is found universally. However,
it is also true, as previously noted, that ASL allows considerable flexibility with respect
to surface word order. Deviations from the canonical word orders, attributable to dis-
placements of constituents from their underlying positions, are frequently identifiable
by prosodic cues. See Zhang (2007) for discussion of word order variation. Focusing
on Taiwan Sign Language, Zhang investigates the ways in which variations in word
order both within a given language and across languages can be derived. See also
Bertone (2010) for discussion of noun phrase structure in LIS.

6. Conclusion
Sign languages are governed by the same fundamental syntactic principles as spoken
languages. ASL includes the same basic inventory of linguistic elements. In particular,
we have argued for the existence of both definite and indefinite determiners occurring
prenominally in the canonical word order within a DP.
Sign languages also exhibit standard syntactic processes, such as agreement, al-
though the specifics of how agreement works are profoundly affected by the nature of
spatial representations of reference. In ASL and many other sign languages, referential
features, along with person features, are involved in agreement/concord relations.
These features are realized morphologically not only on determiners but also on pro-
nouns, possessives, reflexives/intensifiers, and agreement affixes that attach to predi-
cates (including verbs and adjectives), and they can also be realized non-manually
through head tilt and eye gaze.
In contrast, number features are not among those features that exhibit concord
within noun phrases. The base form of most nouns is unmarked for number. Certain
nouns allow for plurality to be overtly marked morphologically, through a regular in-
flectional process that involves reduplication. However, multiple markings of plurality
within a noun phrase are strongly dispreferred.

Acknowledgements: We are grateful to the many individuals who have participated in
the American Sign Language Linguistic Research Project involving research on ASL
syntax and in collection and annotation of video data for this research. The research
reported here has benefitted enormously from the contributions of Ben Bahan, Lana
Cook, Robert G. Lee, Dawn MacLaughlin, Deborah Perry, Michael Schlang, and
Norma Bowers Tourangeau. Further information is available from http://www.bu.edu/
asllrp/. This work has been supported in part by grants from the National Science
Foundation (IIS-0705749, IIS-0964385, and CNS-04279883).

7. Literature
Abney, Steven P.
1987 The English Noun Phrase in Its Sentential Aspect. PhD Dissertation, MIT. Cambridge,
MA.
Alibašić Ciciliani, Tamara/Wilbur, Ronnie B.
2006 Pronominal System in Croatian Sign Language. In: Sign Language & Linguistics 9,
95⫺132.
Bahan, Benjamin
1996 Non-manual Realization of Agreement in American Sign Language. PhD Dissertation,
Boston University. Boston, MA.
Bahan, Benjamin/Kegl, Judy/MacLaughlin, Dawn/Neidle, Carol
1995 Convergent Evidence for the Structure of Determiner Phrases in American Sign Lan-
guage. In: Leslie, Gabriele/Hardison, Debra/Westmoreland, Robert (eds.), FLSM VI:
Proceedings of the Sixth Annual Meeting of the Formal Linguistics Society of Mid-
America. Bloomington, Indiana: Indiana University Linguistics Club, 1⫺12.
Baker, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver
Spring, MD: T. J. Publishers.
Baker, Charlotte/Padden, Carol A.
1978 Focusing on the Nonmanual Components of American Sign Language. In: Siple, Patri-
cia (ed.), Understanding Language through Sign Language Research. New York: Aca-
demic Press, 27⫺57.

Baker-Shenk, Charlotte
1983 A Micro-analysis of the Nonmanual Components of Questions in American Sign Lan-
guage. PhD Dissertation, University of California, Berkeley, CA.
Bertone, Carmela
2010 The Syntax of Noun Modification in Italian Sign language (LIS). In: Working Papers
in Linguistics 2009, Venezia, Dipartimento di Scienze del Linguaggio. Università Ca’
Foscari, 7⫺28.
Bos, Heleen
1989 Person and Location Marking in Sign Language of the Netherlands: Some Implications
of a Spatially Expressed Syntactic System. In: Prillwitz, Siegmund/Vollhaber, Tomas
(eds.), Current Trends in European Sign Language Research: Proceedings of the 3 rd
European Congress on Sign Language Research. Hamburg: Signum, 231⫺246.
Boster, Carole Tenny
1996 On the Quantifier-Noun Phrase Split in American Sign Language and the Structure of
Quantified Noun Phrases. In: Edmondson, William H./Wilbur, Ronnie B. (eds.), Interna-
tional Review of Sign Linguistics. Mahwah, NJ: Lawrence Erlbaum, 159⫺208.
Cardinaletti, Anna
1994 On the Internal Structure of Pronominal DPs. In: The Linguistic Review 11, 195⫺219.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro
2006 Strategies of Relativization in Italian Sign Language. In: Natural Language and Linguis-
tic Theory 24, 945⫺975.
Cinque, Guglielmo
1994 On the Evidence for Partial N-Movement in the Romance DP. In: Cinque, Guglielmo/
Koster, Jan/Pollock, Jean-Yves/Rizzi, Luigi/Zanuttini, Raffaella (eds.), Paths Towards
Universal Grammar: Studies in Honor of Richard S. Kayne. Washington, DC: George-
town University Press, 85⫺110.
Cinque, Guglielmo
2005 Deriving Greenberg’s Universal 20 and Its Exceptions. In: Linguistic Inquiry 36, 315⫺
332.
Cooper, William E./Paccia-Cooper, Jeanne
1980 Syntax and Speech. Cambridge, MA: Harvard University Press.
Coulter, Geoffrey R.
1978 Raised Eyebrows and Wrinkled Noses: The Grammatical Function of Facial Expression
in Relative Clauses and Related Constructions. In: Caccamise, Frank/Hicks, Doin (eds.),
American Sign Language in a Bilingual, Bicultural Context: Proceedings of the Second
National Symposium on Sign Language Research and Teaching. Coronado, CA: Na-
tional Association of the Deaf, 65⫺74.
Coulter, Geoffrey R.
1990 Emphatic Stress in ASL. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical Issues
in Sign Language Research, Volume 1: Linguistics. Chicago: University of Chicago Press,
109⫺125.
Coulter, Geoffrey R.
1993 Phrase-level Prosody in ASL: Final Lengthening and Phrasal Contours. In: Coulter,
Geoffrey R. (ed.), Phonetics and Phonology: Current Issues in ASL Phonology. New
York: Academic Press, 263⫺272.
DawnSignPress (ed.)
1998 Numbering in American Sign Language: Number Signs for Everyone. San Diego, CA:
DawnSignPress.
De Vriendt, Sera/Rasquinet, Max
1989 The Expression of Genericity in Sign Language. In: Prillwitz, Siegmund/Vollhaber, To-
mas (eds.), Current Trends in European Sign Language Research: Proceedings of the 3 rd
European Congress on Sign Language Research. Hamburg: Signum, 249⫺255.

Déchaine, Rose-Marie/Wiltschko, Martina
2002 On pro-nouns and other “Pronouns”. In: Coene, Martine/D’Hulst, Yves (eds.), From
NP to DP, Volume 1: The Syntax and Semantics of Noun Phrases. Amsterdam: Benja-
mins, 71⫺89.
Engberg-Pedersen, Elisabeth
2003 From Pointing to Reference and Predication: Pointing Signs, Eyegaze, and Head and
Body Orientation in Danish Sign Language. In: Kita, Sotaro (ed.), Pointing: Where
Language, Culture, and Cognition Meet. Hillsdale, NJ: Lawrence Erlbaum, 269⫺292.
Fretheim, Thorstein
1995 Why Norwegian Right-dislocated Phrases are not Afterthoughts. In: Nordic Journal of
Linguistics 18, 31⫺54.
Frishberg, Nancy
1978 The Case of the Missing Length. In: Communication and Cognition 11, 57⫺68.
Fuentes, Mariana/Massone, María Ignacia/Pilar Fernández-Viader, María del/Makotrinsky, Ale-
jandro/Pulgarín, Francisca
2010 Numeral-incorporating Roots in Numeral Systems: A Comparative Analysis of Two
Sign Languages. In: Sign Language Studies 11, 55⫺75.
Gee, James Paul/Kegl, Judy
1983 Performance Structures, Discourse Structures, and ASL. Manuscript, Hampshire Col-
lege and Northeastern University.
Gelderen, Elly van
2007 The Definiteness Cycle in Germanic. In: Journal of Germanic Linguistics 19, 275⫺308.
Giusti, Giuliana
2002 The Functional Structure of Noun Phrases: A Bare Phrase Structure Approach. In:
Cinque, Guglielmo (ed.), Functional Structure in the DP and IP: The Cartography of
Syntactic Structures. Oxford: Oxford University Press, 54⫺90.
Greenberg, Joseph
1963 Some Universals of Grammar with Particular Reference to the Order of Meaningful
Elements. In: Greenberg, Joseph (ed.), Universals of Language. Cambridge, MA: MIT
Press, 73⫺113.
Grosjean, François
1979 A Study of Timing in a Manual and a Spoken Language: American Sign Language and
English. In: Journal of Psycholinguistic Research 8, 379⫺405.
Gundel, Jeanette K./Fretheim, Thorstein
2004 Topic and Focus. In: Horn, Laurence R./Ward, Gregory L. (eds.), The Handbook of
Pragmatics. Oxford: Blackwell, 174⫺196.
Hatzopoulou, Marianna
2008 Acquisition of Reference to Self and Others in Greek Sign Language: From Pointing
Gesture to Pronominal Pointing Signs. PhD Dissertation, Stockholm University.
Hawkins, John
1983 Word Order Universals. New York: Academic Press.
Heim, Irene
2008 Features on Bound Pronouns. In: Harbour, Daniel/Adger, David/Béjar, Susana (eds.),
Phi Theory: Phi-Features across Modules and Interfaces. Oxford: Oxford University
Press, 35⫺56.
Hendriks, Bernadet
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD
Dissertation, University of Amsterdam. Utrecht: LOT.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge:
Cambridge University Press.

Jones, N./Mohr, K.
1975 A Working Paper on Plurals in ASL. Manuscript, University of California, Berkeley.
Kegl, Judy
1976 Pronominalization in American Sign Language. Manuscript, MIT. [Reissued 2003, Sign
Language & Linguistics 6, 245⫺265].
König, Ekkehard/Siemund, Peter
1999 Intensifiers and Reflexives: A Typological Perspective. In: Frajzyngier, Zygmunt/Curl,
Traci S. (eds.), Reflexives: Forms and Functions. Amsterdam: Benjamins, 41⫺74.
Lee, Robert G./Neidle, Carol/MacLaughlin, Dawn/Bahan, Benjamin/Kegl, Judy
1997 Role Shift in ASL: A Syntactic Look at Direct Speech. In: Neidle, Carol/MacLaughlin,
Dawn/Lee, Robert G. (eds.), Syntactic Structure and Discourse Function: An Examina-
tion of Two Constructions in ASL, Report Number 4. Boston, MA: American Sign
Language Linguistic Research Project, Boston University, 24⫺45.
Liddell, Scott K.
1977 An Investigation into the Syntax of American Sign Language. PhD Dissertation, Univer-
sity of California, San Diego.
Liddell, Scott K.
1978 Nonmanual Signals and Relative Clauses in American Sign Language. In: Siple, Patricia
(ed.), Understanding Language through Sign Language Research. New York: Academic
Press, 59⫺100.
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Lillo-Martin, Diane
1995 The Point of View Predicate in American Sign Language. In: Emmorey, Karen/Reilly,
Judy S. (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 155⫺
170.
MacLaughlin, Dawn
1997 The Structure of Determiner Phrases: Evidence from American Sign Language. PhD
Dissertation, Boston University, Boston, MA.
MacLaughlin, Dawn/Neidle, Carol/Bahan, Benjamin/Lee, Robert G.
2000 Morphological Inflections and Syntactic Representations of Person and Number in
ASL. In: Recherches linguistiques de Vincennes 29, 73⫺100.
Neidle, Carol
2003 Language Across Modalities: ASL Focus and Question Constructions. In: Linguistic
Variation Yearbook 2, 71⫺98.
Neidle, Carol
2009 Now We See It, Now We Don’t: Agreement Puzzles in ASL. In: Uyechi, Linda/Wee,
Lian Hee (eds.), Reality Exploration and Discovery: Pattern Interaction in Language &
Life. Stanford, CA: CSLI Publications.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G.
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Neidle, Carol/Lee, Robert G.
2006 Syntactic Agreement across Language Modalities. In: Costa, João/Figueiredo Silva, Ma-
ria Cristina (eds.), Studies on Agreement. Amsterdam: Benjamins, 203⫺222.
Vega Obando, Ivonne Elena
2000 Lip Pointing in Idioma de Señas de Nicaragua (Nicaraguan Sign Language). Paper
presented at the 7th International Conference on Theoretical Issues in Sign Language
Research, July 23rd⫺27th, Amsterdam.

Padden, Carol A.
1988 Interaction of Morphology and Syntax in American Sign Language. New York: Gar-
land Publishing.
Perry, Deborah
2005 The Use of Reduplication in ASL Plurals. MA Thesis, Boston University, Boston, MA.
Pfau, Roland/Steinbach, Markus
2005 Plural Formation in German Sign Language: Constraints and Strategies. In: Leuninger,
Helen/Happ, Daniela (eds.), Gebärdensprache. (Linguistische Berichte Sonderheft 13.)
Hamburg: Buske, 111⫺144.
Pfau, Roland/Steinbach, Markus
2006 Pluralization in Sign and in Speech: A Cross-modal Typological Study. In: Linguistic
Typology 10, 135⫺182.
Postal, Paul
1966 On So-called “Pronouns” in English. In: Dinneen, Francis P. (ed.), Report of the Seven-
teenth Annual Round Table Meeting on Linguistics and Language Studies. Washington,
D.C.: Georgetown University Press, 177⫺206.
Rinfret, Julie
2010 The Spatial Association of Nouns in Langue des Signes Québécoise: Form, Function
and Meaning. In: Sign Language & Linguistics 13(1), 92⫺97.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Selkirk, Elisabeth O.
1984 Phonology and Syntax: The Relation between Sound and Structure. Cambridge, MA:
MIT Press.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language. Cambridge: Cambridge University Press.
Tang, Gladys/Sze, Felix Y. B.
2002 Nominal Expressions in Hong Kong Sign Language: Does Modality Make a Differ-
ence? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and
Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press,
296⫺319.
Thompson, Robin/Emmorey, Karen/Kluender, Robert
2006 The Relationship between Eye Gaze and Verb Agreement in American Sign Language:
An Eye-Tracking Study. In: Natural Language and Linguistics Theory 24, 571⫺604.
Uriagereka, Juan
1992 Aspects of the Syntax of Clitic Placement in Western Romance. In: Linguistic Inquiry
26, 79⫺123.
Wilbur, Ronnie B.
1979 American Sign Language and Sign Systems: Research and Application. Baltimore, MD:
University Park Press.
Wilbur, Ronnie B.
1987 American Sign Language: Linguistic and Applied Dimensions. Boston, MA: College-
Hill Press.
Wilbur, Ronnie B.
1994 Foregrounding Structures in American Sign Language. In: Journal of Pragmatics 22,
647⫺672.
Wilbur, Ronnie B.
1999 Stress in ASL: Empirical Evidence and Linguistic Issues. In: Language and Speech 42,
229⫺250.
Zeshan, Ulrike/Perniss, Pamela (eds.)
2008 Possessive and Existential Constructions in Sign Languages. Nijmegen: Ishara Press.

Zhang, Niina Ning
2007 Universal 20 and Taiwan Sign Language. In: Sign Language & Linguistics 10, 55⫺81.
Zimmer, June/Patschke, Cynthia
1990 A Class of Determiners in ASL. In: Lucas, Ceil (ed.), Sign Language Research: Theo-
retical Issues. Washington, DC: Gallaudet University Press, 201⫺210.

Carol Neidle and Joan Nash, Boston, Massachusetts (USA)

14. Sentence types


1. Introduction
2. Polar (yes-no) questions
3. Content (wh) questions
4. Other constructions with wh-phrases
5. Conclusion
6. Literature

Abstract
Although sentence types include declaratives, interrogatives, imperatives, and exclama-
tives, this chapter focuses on declaratives and interrogatives, since imperatives and ex-
clamatives have not yet been systematically studied in sign languages. Polar (yes/no)
questions in all known sign languages are invariably marked by a special non-manual
marker (NMM), although in some sign languages sentence-final question particles can
mark them as well.
Content (wh) questions are an area of possible macrotypological variation between
spoken and sign languages. In the overwhelming majority of spoken languages, wh-
phrases either occur at the left edge of the sentence or remain in situ. However, the
possible occurrence of wh-phrases at the right periphery is reported in most of the sign
languages for which a description of content questions is available, although, for many
of them, occurrence of wh-phrases at the left periphery or in situ is also possible. In
some analyses, wh-phrases in sign languages access positions not available to wh-
phrases in spoken languages, while other analyses deny or minimize this macrotypolog-
ical difference. One area in which these analyses make different predictions is wh-
NMM. Finally, this chapter also reports on some constructions other than content
questions in which wh-signs nonetheless occur.

1. Introduction
‘Sentence types’ is a traditional linguistic category that refers to the pairing of
grammatical form and conversational use (cf. Sadock/Zwicky 1985). Well-estab-
lished sentence types in spoken language are declaratives, interrogatives, and im-
peratives. Another less established sentence type is exclamatives (cf. Zanuttini/
Portner 2003).
Since sign languages can be used to make an assertion, to ask a question, or to give
an order, it is no surprise that they have developed grammaticalized forms associated
with these conversational uses. However, while the sign language literature contains a consider-
able body of work on declaratives and interrogatives, research on other sentence types
is extremely limited. In fact, no study has been exclusively dedicated to imperatives or
exclamatives in any sign language. Sparse and unsystematic information is scattered in
works that are devoted to other topics. Baker and Cokely (1980) mention that com-
mands in American Sign Language (ASL) are usually indicated by stress (emphasis)
on the verb and direct eye gaze at the addressee. This stress usually involves making
the sign faster and sharper. De Quadros (2006) reports work (in Brazilian Portuguese)
by Ferreira-Brito (1995) on questions that are marked by a special non-manual mark-
ing (NMM) and function as polite command in Brazilian Sign Language (LSB). Zeshan
(2003) mentions that Indo-Pakistani Sign Language (IPSL) uses positive and negative
particles to express imperatives. Spolaore (2006), a work in Italian, identifies a sign
(glossed as ‘hand(s) forward’) that tends to appear in sentence-final position in impera-
tive sentences in Italian Sign Language (LIS). Johnston and Schembri (2007) claim
that in Australian Sign Language (Auslan) imperatives the actor noun phrase is often
omitted and signs are produced with a special stress, direct eye gaze at the addressee
and frowning.
While this information indicates that (some) sign languages have developed gram-
maticalized forms for imperatives, the limited amount of research does not justify
a review of the literature. For this reason, this chapter will be devoted to interrogatives.
The properties of declarative sentences in a given language (the unmarked word order,
the presence of functional signs, etc.) will be discussed only when this is necessary to
show how interrogatives are distinguished from declaratives, for example by a change
in the order of signs or in the distribution of NMM. Declarative sentences are also
discussed in the chapters devoted to word order (chapter 12) and complex sentences
(chapter 16).
All three approaches to the study of sign languages that the handbook explores,
namely the comparability of sign and spoken languages, the influence of modality on
language, and typological variation between sign languages, strongly interact in this
chapter. In particular, in the discussion of content questions, conclusions emerging
from the typological literature will be reported along with more theoretically oriented
analyses concerning specific sign languages.

2. Polar (yes/no) questions

To a notable degree, sign languages tend to employ the same strategy to mark polar
(yes/no) questions. In fact, polar questions in all known sign languages are invariably
marked by a special NMM (for a detailed discussion of NMM, see chapter 4, Prosody).
According to Zeshan (2004), the NMM associated with yes/no questions typically in-
volves a combination of several of the following features:
– eyebrow raise
– eyes wide open
– eye contact with the addressee
– head forward position
– forward body posture

Tab. 14.1: Research on polar questions in sign languages


American Sign Language (ASL), cf. Wilbur and Patschke (1999)
Australian Sign Language (Auslan), cf. Johnston and Schembri (2007)
Austrian Sign Language (ÖGS), cf. Šarac et al. (2007)
Brazilian Sign Language (LSB), cf. de Quadros (2006)
British Sign Language (BSL), cf. Sutton-Spence and Woll (1999)
Catalan Sign Language (LSC), cf. Quer et al. (2005)
Croatian Sign Language (HZJ), cf. Šarac and Wilbur (2006)
Flemish Sign Language (VGT), cf. Van Herreweghe and Vermeerbergen (2006)
Finnish Sign Language (FinSL), cf. Savolainen (2006)
Hong Kong Sign Language (HKSL), cf. Tang (2006)
Israeli Sign Language (Israeli SL), cf. Meir (2004)
Indo-Pakistani Sign Language (IPSL), cf. Zeshan (2004)
Japanese Sign Language (NS), cf. Morgan (2006)
Quebec Sign Language (LSQ), cf. Dubuisson et al. (1991)
New Zealand Sign Language (NZSL), cf. McKee (2006)
Sign Language of the Netherlands (NGT), cf. Coerts (1992)
Spanish Sign Language (LSE), cf. Herrero (2009)
Turkish Sign Language (TİD), cf. Zeshan (2006)

In many cases, only NMM can differentiate polar questions from declarative sentences.
For example, Morgan (2006) reports that in NS a declarative sentence and the corre-
sponding polar question may be distinguished only by the occurrence of a special
NMM, namely eyebrow raise, slight head nod and chin tuck on the last word. However,
he notes that the index sign may be moved to the sentence-final position in polar
questions, as in (2):

(1) index2 book buy [NS]
     'You bought a book.'
(2) pol-q
book buy index2
‘Did you buy the book?’

The importance of the eyebrow raise feature should be stressed, since it also discrimi-
nates polar questions from content (wh) questions in the many sign languages in which,
as we will see in section 3, content questions are marked by eyebrow lowering. Al-
though in other grammatical constructions (like negative sentences and content ques-
tions), the scope of non-manual marking can vary significantly both crosslinguistically
and language internally, non-manual marking in polar questions shows relatively minor
variation. In fact, it typically extends over the whole clause except for signs that are marked by a different non-manual marking (for example, topicalized constituents).
In many sign languages, eyebrow raise marking is shared by polar questions and
other grammatical constructions. ASL is a well-documented case. Coulter (1979) ob-
serves that eyebrow raise marks any material in left peripheral position. This includes,
as further discussed by Wilbur and Patschke (1999), diverse constructions like topics,
left dislocated phrases, relative clauses, conditionals, and focused phrases (MacFarlane
(1998) contains crosslinguistic data confirming the occurrence of eyebrow raise in a
subset of these constructions). After excluding alternative analyses, Wilbur and
Patschke conclude that the commonality among all the ASL structures that show eye-
brow raise is that this NMM shows up in A-bar positions which are associated with
operator features that are [-wh]. So, the three distinctive brow positions, raised, furrowed, and neutral, would each be associated with a different operator situation: [-wh], [+wh], and none, respectively.
The fact that eyebrow raise is shared by polar questions and the protasis of condi-
tionals introduces a possible complication. In sign languages in which a functional sign
corresponding to ‘if’ is not required, distinguishing a question-answer pair introduced
by a polar question and a conditional may be difficult. This is so because a question-
answer pair may express the same information as a conditional (cf. the similar meaning
of (3a) and (3b)):

(3) a. Does it rain? I go to the cinema.
     b. If it rains, I go to the cinema.

This raises the possibility that some sign languages might lack conditionals altogether,
since they might be functionally replaced by a question-answer pair introduced by a
polar question. However, this is unlikely. For one thing, eyebrow raise might be part of a cluster of NMMs rather than being a single independent feature. Therefore,
closer examination might reveal that NMMs associated to hypotheticals and to ques-
tion-answer pairs are different.
Furthermore, Barattieri (2006) identified some devices that can distinguish question-answer pairs like (3a) from genuine conditionals in LIS, a language in which the sign
corresponding to if can be easily omitted and eyebrow raise marks both polar questions
and (alleged) protases of conditionals. For example, in LIS (as in English) counterfac-
tual conditionals like ‘Had Germany won, Europe would be now controlled by Nazis’
cannot felicitously be replaced by the corresponding question-answer pair ‘Did Ger-
many win? Now Europe is controlled by Nazis’. By using this and similar devices, a
polar question and the protasis of a conditional can be distinguished even in languages
in which they are marked by the same (or by a similar) non-manual marking.
If NMM is the sign language counterpart of intonation (cf. Sandler 1989, among many others, for this proposal), sign and spoken languages do not seem to pattern very
differently as far as polar questions are concerned, since intonation (for example, rising
intonation at the end of questions) can mark polar questions in spoken languages as
well (colloquial English is an example, and Italian is a more extreme one, since a rising
intonation is the only feature that can distinguish an affirmative sentence from the
corresponding polar question). However, a difference between spoken and sign lan-
guages might be at stake here as well. According to the most comprehensive typologi-
cal source available at the time of writing (Dryer 2009a), in spoken languages the
use of strategies distinct from intonation to mark polar questions is extremely common.
These strategies include a special interrogative morphology on the verb, the use of a
question particle and a change in word order. Sign languages might use strategies other
than intonation to a significantly lesser extent than spoken languages do. The only
notable exception is the use of sentence-final question particles to mark polar questions
in languages like ASL, HKSL, and HZJ. However, even in these languages, question
particles complement NMMs as a way to mark questions, rather than fully replacing
them. More specifically, ASL eyebrow raise is obligatory on the question particle and
may optionally spread over the entire clause (Neidle et al. 2000, 122–124). In HKSL, eyebrow raise occurs only on the question particle and cannot spread (Tang 2006, 206). In HZJ, the NMM associated with polar questions spreads over the entire sentence (Šarac/Wilbur 2006, 154–156).
This notwithstanding, it cannot be excluded that the difference between spoken and
sign languages is not a real one but is due to our current limited knowledge of the
grammar of the latter. It is possible that there are sign languages which do not use
intonation to mark polar questions, but, if so, these have been poorly studied. Similarly,
a closer examination of word order and morphology of sign languages that are thought
to mark polar questions only with NMM might reveal that they use other strategies as
well. Only future research can determine this.

3. Content (wh) questions

Content (wh) questions have been investigated in detail in various sign languages, and some controversy has arisen both about the data and about the possible analyses. A
reason why content questions attract much attention is that they might be an area of
macrotypological variation between spoken and sign languages. In the overwhelming
majority of spoken languages, wh-phrases either occur at the left edge of the sentence
or remain in situ. Cases of spoken languages in which wh-phrases systematically occur
at the right edge of the sentence are virtually unattested. In WALS Online (cf. Dryer
2009b) only one language (Tennet) is indicated as a potential exception. Considering
that the WALS Online database covers more than 1200 spoken languages, this generaliza-
tion is very robust.
However, a possible occurrence of wh-phrases at the right periphery is reported in
most of the sign languages for which a description of content questions is available,
although, for many of them, occurrence of wh-phrases at the left periphery or in situ
is also possible. Based on this pattern, various authors have proposed that wh-phrases
in sign languages may access positions not available to wh-phrases in spoken languages.
Since content questions in ASL were the first to be analyzed in detail, and the subsequent investigation of wh-interrogatives has been influenced by this debate, two competing analyses of ASL questions will be described first. Later in this chap-
ter, other sign languages will be considered. The leftward movement analysis, mostly
due to work by Karen Petronio and Diane Lillo-Martin, is presented in section 3.1.
Section 3.2 summarizes the rightward movement analysis, which is systematically de-
fended in Neidle et al. (2000) (from now on, NKMBL). In section 3.3 content questions
in LIS are discussed, while section 3.4 summarizes the remnant movement analysis,
which is a device that can explain the occurrence of wh-signs in the right periphery
without assuming rightward movement. Section 3.5 is devoted to the analysis of dupli-
cation of the wh-phrase. Section 3.6 concludes the discussion of content questions by
drawing a provisional conclusion on the issue of the (alleged) macrotypological variation
between spoken and sign languages concerning the position of wh-items.

3.1. The leftward movement analysis for ASL content questions

One reason that makes content questions in ASL difficult to interpret is that wh-signs
may appear in many different positions, namely in situ, sentence-finally, or doubled in
the left and in the right periphery. In (4) this is illustrated with a wh-object, but there
is consensus in the literature (Petronio/Lillo-Martin 1997; NKMBL) that the same hap-
pens with wh-signs playing other grammatical roles. (4a) indicates the unmarked SVO
word order of ASL. It is important to stress that adverbs like yesterday are clause-
final in ASL. This allows us to check if the direct object is in situ (namely, it precedes
yesterday) or has moved to the right periphery of the sentence (namely, it follows
yesterday). (4b) illustrates a case of doubling of the wh-sign, which surfaces both in
the left and in the right periphery. In (4c) the wh-phrase is in situ and, finally, in (4d)
the wh-phrase surfaces only in the right periphery. Content questions are accompanied
by a specific non-manual marking (wh-NMM), namely a cluster of expressions of the
face and upper body, consisting most notably of furrowed eyebrows:

(4) a. john buy book yesterday [ASL]
        'Yesterday John bought a book.'
wh
b. what john buy yesterday what
‘What did John buy yesterday?’
wh
c. john buy what yesterday
‘What did John buy yesterday?’
wh
d. john buy yesterday what
‘What did John buy yesterday?’

Since rightward movement of wh-elements is crosslinguistically very rare, if it exists at all, Petronio and Lillo-Martin assume that wh-movement is universally leftward and
explain the pattern in (4) as follows. In (4b) a wh-sign is found in the left periphery,
as expected if wh-movement is leftward. As for the fact that the wh-sign is doubled at
the right edge, they assume that the wh-double is a clause-final complementizer which
occupies the COMP position, much like interrogative complementizers that are found
in many SOV languages. Although ASL is SVO, it has been proposed that it was formerly SOV
(cf. Fischer 1975), so the placement of the interrogative complementizer at the right
edge might be a residue of this earlier stage. Furthermore, Petronio and Lillo-Martin
observe that the doubling in (4b) is an instance of a more general phenomenon which
occurs with non-wh-signs as well. For example, modals, lexical verbs, and quantifiers can
be doubled in the right periphery for focus or emphasis (the phenomenon of doubling
will be discussed in section 3.5). Since they take wh-doubling in the right periphery to
be a case of focalization on a par with other cases of doubling, Petronio and Lillo-Martin
claim that wh-NMM expresses the combination of wh and Focus features that are
hosted in the COMP node of all direct questions. Spreading occurs over the c-com-
mand domain of COMP (namely the entire sentence).
Cases of in situ wh-signs like (4c) are not surprising since it is not uncommon to
find languages displaying both the leftward movement option and the in situ option.
The order in (4d) is more difficult to explain if the right peripheral wh-sign is a comple-
mentizer, since this question would lack an argument wh-phrase altogether. However,
Petronio and Lillo-Martin (following Lillo-Martin/Fischer 1992) observe that ASL al-
lows null wh-words, as in examples like (5):

wh
(5) time [ASL]
‘What time is it?’

Therefore, they explain the pattern in (4d) by arguing that this sentence contains a
null wh-phrase in the object position.
A natural question concerns sentences like (6), in which the wh-phrase is found
where it is expected if wh-movement is leftward and no doubling is observed (the
symbol ‘#’ indicates that the grammaticality status of this sentence is controversial):

wh
(6) #who john hate [ASL]
‘Who does John hate?’

Unfortunately, there is no consensus on the grammatical status of sentences of this type. For example, Petronio and Lillo-Martin say that they elicited varying judgments
from their informants, while NKMBL claim that their informants rejected this type of
sentence altogether. Note that, if wh-movement is leftward, at least under the simplest
scenario, a question like (6) should be plainly grammatical, much like its translation in
English. So, its dubious status is a potential challenge for Petronio and Lillo-Martin’s
account. They deal with this issue by arguing that, for stylistic reasons, some signers
prefer the position of the head final complementizer to be filled with overt material. So,
(6) is disliked or rejected in favor of the much more common structure with doubling
exemplified in (4b) above. They support this conjecture by observing that judgments
become much sharper when the question with an initial wh-sign and no doubling is
embedded under a predicate like wonder, as in (7). They interpret (7) as an indirect
question with the order that is just expected under the assumption that wh-movement
is leftward:

ponder
(7) i wonder what john buy [ASL]
‘I wonder what John bought.’

As indicated, sentences like (7) are reported by Petronio and Lillo-Martin not to occur
with the familiar wh-NMM, but with an NMM consisting of a puzzled, pondering facial
expression. Partly for this reason, Neidle et al. (1998) deny that embedded structures
marked by this type of NMM are genuine indirect questions.
Petronio and Lillo-Martin observe that another advantage of their analysis is that it
can explain why a full phrase cannot occupy the right peripheral position. For example,
structures like (8) are reported by them to be ungrammatical ((8) is marked here with
the symbol ‘#’, because this data has been contested as well, as we will see shortly).
The ungrammaticality of (8) straightforwardly follows if the clause-final wh-sign is
indeed a complementizer (phrases cannot sit in the position of heads, under any stand-
ard version of phrase structure theory, like X-bar theory):

wh
(8) #which computer john buy which computer [ASL]

Summarizing, Petronio and Lillo-Martin, confronted with the complex pattern of ASL
wh-questions, give an account that aims at explaining the data by minimizing the differ-
ence with spoken languages, in which rightward wh-movement is virtually unattested.

3.2. The rightward movement analysis for ASL content questions

Proponents of the rightward movement analysis take the rightward placement of wh-
signs at face value and claim that wh-movement is rightward in ASL. This analysis has
been systematically defended by NKMBL. Of course, the rightward movement analysis
straightforwardly explains the grammaticality of examples like (4d), in which the
wh-item is clause-final. NKMBL also report examples in which the wh category in the
right periphery is a phrase, not a single wh-sign, although this data has been contested
by Petronio and Lillo-Martin. For example, informants of NKMBL find a sentence like
(8) above fully acceptable.
Examples in which the wh-phrase is in situ (cf. (4c)) are also not surprising, since,
as already mentioned, many languages with overt wh-movement admit the in situ strat-
egy as well. The hardest cases for the rightward movement analysis are those in which
the wh-category is in the left periphery. Setting aside sentences like (6), which have a dubi-
ous status, the only uncontroversial case of left placement of the wh-phrase is in cases
of doubling like (4b). NKMBL deal with these cases by assuming that the wh-phrase
in the left periphery is a wh-topic. They support this conjecture by observing that
wh-topics display the same distributional properties as base generated topics and that
their NMM results from the interaction of wh-NMM and of the NMM that marks
topics. This proposal faces the potential challenge that not many languages allow wh-
phrases in topic positions. However, NKMBL list some languages that do, so ASL
would not be a real exception.
One piece of evidence advocated by NKMBL in favor of the hypothesis that the
category that sits at the right edge is a wh-phrase (and not a wh complementizer) is
the fact that their informants accept questions like (9), in which a complex phrase is
rightward moved. As usual, the symbol ‘#’ indicates a disagreement, since Petronio
and Lillo-Martin would mark questions with a right peripheral wh-phrase as ungram-
matical:

wh
(9) #john buy yesterday which computer [ASL]

NKMBL claim that spreading of wh-NMM over the entire sentence is optional when
the wh-phrase occupies the clause-final position (Spec,CP in their account), while it is
mandatory when the wh-phrase is in situ. They analyze this distribution as an instance
of a more general pattern, which is found with other types of grammatical NMMs
(such as the NMMs associated with negation, yes-no questions, and syntactic agree-
ment). NMMs are linked to syntactic features postulated to occur in the heads of
functional projections. In all these cases, the domain of NMM is the c-command do-
main of the node with which NMM is associated. Spreading of the relevant NMM is
optional, unless it is required for the purpose of providing manual material with which
the NMM can be articulated. Since the node with which the wh-NMM is associated is
the head of the CP position, the domain of wh-NMM is the c-command domain of
COMP, which corresponds to the entire sentence.
The distribution of NMM has been used as an argument both for and against the rightward movement analysis. NKMBL claim that the rightward movement analysis
is supported by the fact that the intensity of the wh-NMM increases as the question is
signed. This is expected if the source of the wh feature occurs at the right edge, as the
intensity of wh-NMM is greatest nearest the source of the wh feature and it diminishes
as the distance from that node increases.
On the other hand, Petronio and Lillo-Martin observe that the generalization that
spreading of wh-NMM is optional when the wh-phrase has moved to its dedicated
position at the right edge makes a wrong prediction in cases of sentences like (10),
which should be acceptable, but are not (the structure is grammatical if the wh-NMM
occurs over the entire sentence as in (4b)):

wh wh
(10) *what john buy yesterday what [ASL]
‘What did John buy yesterday?’

NKMBL account for the ungrammaticality of (10) by capitalizing on the notion of perseveration, namely the fact that, if the same articulatory configuration is used
multiple times in a single sentence, it tends to remain in place between those articula-
tions (if this is possible). Perseveration, according to NKMBL, is a general phenom-
enon which is found in other domains as well (for example, in classifier constructions,
as discussed by Kegl (1985)). The problem with (10) would be a lack of perseveration,
so the sentence would contain a phonological violation.
A revised version of the rightward movement analysis has been proposed by Neidle
(2002), who claims that the wh-phrase passes through a focus position in the left pe-
riphery in its movement towards the Spec,CP position in the right periphery. She shows
that this focus position houses not only focused DPs, but also ‘if’, ‘when’, and relative
clauses. Neidle supports her analysis by showing that wh-phrases (including non-fo-
cused wh-phrases) remain in situ when the focus position in the left periphery, being
already filled, cannot be used as an intermediate step. This pattern can be straightforwardly reduced to a case of Relativized Minimality, in the sense of Rizzi (1990): an element cannot move across an intervening element of the same type, so a filled focus position blocks the wh-phrase from passing through it.
The disagreement found in the literature extends to data that are crucial to the
choice between the leftward or the rightward movement analysis for ASL content
questions. It is not entirely clear if the source of disagreement is a dialectal variation
between consultants of NKMBL and consultants of Petronio and Lillo-Martin (for
example, a different behavior of native and non-native signers) or some misinterpretation of the data occurred. At the time of writing, only NKMBL have made available a
large sample of videos at the website http://www.bu.edu/asllrp/book/, so a direct inspec-
tion of all the controversial data is not possible. Given this situation, it seems fair to
conclude that the choice between the leftward and the rightward analysis for ASL
content questions is still contentious.

3.3. Content questions in LIS

The pattern of content questions in LIS, which has been discussed by Cecchetto et al.
(2009) (from now on CGZ), bears on the question of the choice between the leftward
and the rightward movement analysis. Although, as other sign languages do, LIS has
a relatively free word order due to scrambling possibilities, CGZ note that LIS is a
head final language. The verb (the head of the VP) follows the direct object, and signs such as modal verbs (cf. (11)), aspectual markers (cf. (12)), and negation (cf. (13)) follow
the verb. If these signs sit in the head of dedicated functional projections, this word
order confirms that LIS is head final. (Following CGZ, LIS signs are glossed here
directly in English. Videos of LIS examples are available at the web site http://
www.filosofia.unimi.it/~zucchi/ricerca.html.)

(11) gianni apply can [LIS]
'Gianni can apply.'
(12) gianni house buy done [LIS]
‘Gianni bought a house.’
neg
(13) gianni maria love not [LIS]
‘Gianni doesn’t love Maria.’

In LIS a wh-sign sits in the rightmost position in the postverbal area, following any
functional sign (the same happens for wh-phrases composed of a wh-determiner and its restriction, as CGZ show):

wh
(14) cake eat not who [LIS]
‘Who did not eat the cake?’
wh
(15) house build done who [LIS]
‘Who built the house?’

Although wh-words in LIS can remain in situ under a restricted set of circumstances,
namely if they are discourse-linked, they cannot sit in the left periphery under any
condition. In this sense, the pattern of wh-items is sharper in LIS than in ASL.
CGZ adopt a version of the rightward movement analysis inspired by NKMBL’s
analysis of ASL and explicitly ask why sign languages, unlike spoken languages, should
allow rightward wh-movement. Their answer to this question capitalizes on the pattern
of wh-NMM in LIS. In both ASL and LIS the main feature of wh-NMM is furrowing
of the eyebrows (incidentally, although this type of NMM for wh-questions is crosslin-
guistically very common, it is not a sign language universal, since in languages like
HZJ and ÖGS the main wh-NMM is not eyebrow position but 'chin up', which may be accompanied by a head thrust forward (cf. Šarac et al. 2007)).
There is an important difference in the distribution of wh-NMM between ASL and
LIS, though. In ASL, if wh-NMM spreads, it does so over the entire sentence. In LIS
the extent of spreading depends on the grammatical function of the wh-phrase (this is
a slight simplification, see CGZ for a more complete description). If the wh-phrase is
the subject, wh-NMM spreads over the entire sentence (cf. (16)). However, if the wh-phrase is the object, wh-NMM spreads over the object and the verb, but it is not co-articu-
lated with the subject (cf. (17)):

wh
(16) t gianni see who [LIS]
‘Who saw Gianni?’
wh
(17) gianni t eat what [LIS]
‘What does Gianni eat?’

CGZ interpret this pattern as an indication that wh-NMM in LIS marks the depend-
ency between the base position of the wh-phrase and the sentence-final COMP posi-
tion (this is indicated in (16)⫺(17) by the fact that wh-NMM starts being articulated
in the position of the trace/copy). In this respect, wh-NMM would be similar to wh-
movement, since both unambiguously connect two discontinuous positions. While wh-
movement would be the manual strategy to indicate a wh-dependency, wh-NMM would
be the non-manual strategy to do the same.
Under the assumption that NMM is a prosodic cue that realizes the [+wh] feature, CGZ relate the LIS pattern to the pattern found in various spoken languages, in which
wh-dependencies are prosodically marked (this happens in Japanese, as discussed by
Deguchi/Kitagawa (2002) and Ishihara (2002), but also in other spoken languages,
which are discussed by Richards (2006)). However, one difference remains between
LIS and spoken languages in which wh-dependencies are phonologically marked.
Wh-movement and the prosodic strategy of wh-marking do not normally co-occur in
spoken languages that prosodically mark wh-dependencies, as wh-phrases remain in
situ in these languages (this holds for Japanese and for other languages discussed by
Richards). CGZ explain the lack of co-occurrence of prosodic marking and overt
movement in spoken languages by saying that this would introduce a redundancy, since
two strategies would be applied to mark the very same wh-dependency. As for the fact
that wh-NMM and wh-movement do co-occur in LIS, CGZ propose that LIS might be
more tolerant of the redundancy between movement and NMM because sign lan-
guages, unlike spoken languages, are inherently multidimensional. So, ultimately they
explain the possibility of rightward wh-movement as an effect of the different modality.
CGZ extend their analysis to ASL. This extension is based on the revised version of
the rightward movement analysis proposed by Neidle (2002), according to which the
wh-phrase passes through a focus position in the left periphery in its movement to-
wards Spec,CP in the right periphery. CGZ claim that this intermediate step in the left
periphery can explain the different distribution of wh-NMM in LIS and ASL.
To date, CGZ’s account is the only attempt to explain the difference between spo-
ken and sign languages in the availability of a position for wh-phrases in the right
periphery. However, the hypothesis that NMM can mark discontinuous dependencies
is controversial, since it is not supported in sign languages other than LIS. Typically, NMMs are associated with lexical material or with the c-command domain of a func-
tional head. So CGZ’s analysis requires a significant revision of the theory of grammat-
ical markers. It remains to be seen if this revision is supported by evidence coming
from NMM in sign languages other than LIS.

3.4. Remnant movement analyses

If wh-movement is rightward in sign languages, as argued by NKMBL and by CGZ, the problem arises of explaining the difference from spoken languages, in which it is
leftward. CGZ tackle this issue, as already mentioned, but another possible approach
is that in both sign and spoken languages wh-movement is leftward, but in sign lan-
guages it appears to be rightward, due to the systematic occurrence of remnant move-
ment.
According to a standard version of the remnant movement analysis, first, the wh-
phrase moves to a dedicated position in the left periphery, say Spec,CP (as in spoken
languages). Then the constituent out of which the wh-phrase has moved (the remnant)
is moved to its left. This is schematically represented in Figure 14.1.

Fig. 14.1: Schematic representation of the remnant movement analysis for right peripheral wh-phrases.

The result is that the location of the wh-phrase on the right side is only apparent
because, structurally speaking, the wh-phrase sits in the left periphery. If one adopts
the remnant movement analysis, the gap between spoken and sign languages is partially filled, since this analysis has been systematically applied to many constructions in spo-
ken languages by supporters of the antisymmetric framework (cf. Kayne 1994, 1998).
The antisymmetric framework bans rightward movement and rightward adjunction al-
together, whence the widespread use of the remnant movement option to explain the
right placement of various categories. For example, Poletto and Pollock (2004) propose
a remnant movement analysis for wh-constructions in some Romance dialects that
display instances of in situ wh-phrases.
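For concreteness, the derivation sketched in Figure 14.1 can be spelled out with hypothetical glosses as follows (an illustrative reconstruction, not a claim about any particular sign language):
(i) [CP [IP john buy what]] (base structure)
(ii) [CP what_i [IP john buy t_i]] (leftward wh-movement to Spec,CP)
(iii) [[IP john buy t_i]_j [CP what_i t_j]] (the remnant IP moves to the left of the wh-phrase)
The resulting surface order, john buy what, shows the wh-sign at the right edge even though, structurally, it occupies the left peripheral Spec,CP position.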
The standard version of the remnant movement analysis has been criticized by
NKMBL, who claim that it runs into difficulties accounting for the distribution of wh-
NMM in ASL.
A modified version of the remnant movement analysis is applied to content ques-
tions in Indo-Pakistani Sign Language (IPSL) by Aboh, Pfau, and Zeshan (2005) and
to content questions in LSB by de Quadros (1999). Aboh and Pfau (2011) extend this
type of analysis to content questions in the Sign Language of the Netherlands (NGT).
All these analyses are compatible with the antisymmetric framework. According to the
modified version, the sentence-final wh-sign is a head in the complementizer system.
Since this head sits in the left periphery of the structure, its right placement is derived
by moving the entire clause to a structural position to its left. In this account, as in
more standard remnant movement analyses, the wh-sign does not move rightward, and
its right placement is a by-product of the fact that other constituents move to its left.
This version can apply to sign languages in which the right peripheral wh-element is a single sign (not a phrase). IPSL, LSB, and NGT all share this property. IPSL content
questions will be described here, since they have been used as an argument for a
specific theory of clause typing by Aboh and Pfau (2011).
Aboh, Pfau, and Zeshan (2005) report that IPSL is an SOV language in which a
single wh-sign (glossed as g-wh) covers the whole range of question words in other
languages. Its interpretation depends on the context and, if this does not suffice, g-wh
may combine with other non-interrogative signs to express more specific meanings.
Crucially, g-wh must occur sentence-finally. Examples (18) and (19) are from Aboh
and Pfau (2011) (subscripts refer to points in the signing space, i.e. localizations of
present referents or localizations that have been established for non-present referents).

wh
(18) father index3 search g-wh [IPSL]
'What is/was father searching for?'
wh
(19) index3 come g-wh [IPSL]
‘Who is coming?’

Wh-NMM (raised eyebrows and backward head position with the chin raised) mini-
mally scopes over g-wh but can extend to successively bigger constituents, with the
exclusion of topics. A consequence of this scope pattern is that the whole proposition
(or clause) may (but does not need to) be within the scope of wh-NMM.
Assuming the modified version of the remnant movement analysis summarized
above, g-wh is a complementizer, so content questions in IPSL never surface with a
wh-phrase (the object position in (18) and the subject position in (19) would be occu-
pied by a silent phrase that is unselectively bound, following a proposal by Cheng
(1991)). Aboh and Pfau (2011) stress the theoretical implications of the IPSL pattern:
even if wh-phrases typically participate in the meaning of questions cross-linguistically,
IPSL would show that they are not necessary to type a content question as interroga-
tive, since there are content questions with no wh-phrase. They discuss the consequence
of this implication for the general theory of clause-typing.
A complication with Aboh et al.’s (2005) account is that g-wh may (although it
does not need to) combine with non-interrogative signs to express more specific mean-
ings. This is illustrated in (20) and (21), in which the sign place is associated with g-wh
to express the meaning ‘where’:

(20) index2 friend place sleep g-wh [IPSL]
(21) index2 friend sleep place g-wh [IPSL]
'Where does your friend sleep?'

As (20) and (21) indicate, the sign optionally associated with g-wh, namely place, may
either appear at the right periphery, where it is adjacent to g-wh, or in situ. Since,
under Aboh et al.’s (2005) account, place and g-wh do not form a constituent, deriving
the word order in (21) is not straightforward. In fact, Aboh, Pfau, and Zeshan must
assume that remnant movement applies within the clausal constituent which in turn
moves to the left of the head that hosts g-wh. A rough simplification of this derivation
is illustrated in (22). Presumably, a similar (complicated) derivation would be needed for sign languages displaying interrogative phrases in the right periphery, should Aboh et
al.’s (2005) account be extended to them.

(22) [[[index2 friend t_z sleep]_i place_z t_i]_j g-wh t_j] [IPSL]
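Unpacked step by step (an informal paraphrase of this derivation): first, place moves out of the clause, leaving the trace t_z; second, the remnant clause [index2 friend t_z sleep] moves to the left of place, leaving t_i; finally, the constituent formed in this way moves to the left of g-wh, leaving t_j. This yields the surface order of (21), index2 friend sleep place g-wh.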

Summarizing, remnant movement analyses can explain the right placement of wh-items
in sign languages and can reduce the gap with spoken languages, in which remnant
movement analyses have been systematically exploited. A possible concern is that it is
not always clear which features trigger the movement of the remnant. If movement of
the remnant is not independently motivated, the remnant movement analysis can de-
rive the correct word order but it runs the risk of being an ad hoc device.

3.5. Wh-duplication

A feature that often surfaces in content questions in the sign languages analyzed in
the literature is that the wh-sign may be duplicated. This phenomenon has been de-
scribed in ASL, LSB, LIS, HZJ, ÖGS, and NGT (see references for these languages
listed above) but has been reported, although less systematically, in other sign lan-
guages as well. Although cases of duplication of a wh-word are not unheard of in
spoken languages (cf. Felser 2004), the scope of the phenomenon in sign languages
seems much wider. From a theoretical point of view, it is tempting to analyze duplica-
tion of a wh category by adopting the copy theory of traces, proposed by Chomsky
(1993) and much following work. This theory takes traces left by movement to be
perfect copies of the moved category, apart from the fact that (in a typical case) they
are phonologically empty. Assuming the copy theory of traces, duplication is the null
hypothesis and what must be explained is the absence of duplication, namely cancella-
tion of one copy (typically, the lower one).
Given their pervasive pattern of duplication, sign languages are a good testing
ground for the copy theory of traces. Nunes’s (2004) theory on copy cancellation will
be summarized, since it is extended by Nunes and de Quadros (2008) to cases of
wh-duplication in sign languages (see also Cecchetto (2006) for a speculation on why
copies are more easily spelled-out in sign languages than in spoken languages).
Nunes (2004) claims that, in the normal case, only one copy can survive because, if
two identical copies were present, the resulting structure could not be linearized under
Kayne’s (1994) Linear Correspondence Axiom (LCA), which maps asymmetric c-com-
mand into linear precedence. This is so because LCA would be required to assign
different positions to the ‘same’ element. For example, in a structure like (23), the
subject ‘John’ would asymmetrically c-command and would be asymmetrically c-com-
manded by the same element, namely ‘what’. This would result in a contradiction,
since ‘what’ should both precede and be preceded by ‘John’. Cancellation of the lower
copy of ‘what’ fixes the problem.

(23) What did John buy what?
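To make the contradiction explicit (a schematic reconstruction of the reasoning): with both copies present, the higher copy of 'what' asymmetrically c-commands 'John', which requires the order what < John; 'John' in turn asymmetrically c-commands the lower copy, which requires John < what. Since the two copies count as the same element for the LCA, the two requirements cannot be satisfied simultaneously, and no linear order can be computed unless one copy is deleted.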

In Kayne’s framework, LCA is a condition determining word order inside the sentence,
while LCA does not determine the order of morphemes inside the word. In other
terms, LCA cannot see the internal structure of the word. Nunes and de Quadros
capitalize on the word-internal 'blindness' of the LCA to explain wh-reduplication in LSB
and ASL. They assume that multiple copies of the same category can survive only if
one of these copies undergoes a process of morphological fusion with another word
from which it becomes indistinguishable as far as LCA is concerned. More specifically,
they claim that the duplicated sign becomes fused with the silent head of a focus
projection. This explains why reduplication is a focus marking strategy. Since only a
head can be fused with another head, Nunes and de Quadros can explain why phrases
(including wh-phrases) can never be duplicated in LSB (and, possibly, in ASL as well).
This approach naturally extends to other cases in which duplication is a focus marking
device, namely lexical verbs, modals, etc.
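Schematically (a simplified sketch of this proposal, abstracting away from the details of Nunes and de Quadros's trees): in a doubling structure like (4b), the right peripheral copy of what is morphologically fused with the silent Focus head, roughly [Foc what+Foc0]. After fusion, this copy counts as word-internal material that the LCA cannot inspect, so it no longer conflicts with the clause-internal copy, and both can be pronounced. Since only heads fuse with heads, a phrase like which computer has no copy that could undergo fusion, and doubling of wh-phrases is correctly excluded.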

3.6. Conclusion on content questions

At the beginning of this section it was pointed out that content questions might be an
area of macrotypological variation between spoken and sign languages. It is time to
evaluate the plausibility of that hypothesis on the basis of the evidence that I presented
and of other information present in the literature. Table 14.2 summarizes the informa-
tion on the position of wh-signs in sign languages for which the literature reports
enough data. For sign languages that have not already been mentioned, the source
is indicated.
Tab. 14.2: Position of wh-signs in sign languages


American Sign Language (ASL)
Brazilian Sign Language (LSB)
Wh-items may occur at the left periphery, at the right periphery, and in situ. The extent to which
these options are available in ASL remains controversial.
Croatian Sign Language (HZJ), cf. Šarac and Wilbur (2006)
Finnish Sign Language (FinSL), cf. Savolainen (2006)
New Zealand Sign Language (NZSL), cf. McKee (2006)
Wh-items can appear sentence-initially, sentence-finally or doubled in both positions.
Australian Sign Language (Auslan), cf. Johnston and Schembri (2007)
Wh-items can appear in situ, in sentence-initial position or doubled in sentence-initial and in
sentence-final position.
Austrian Sign Language (ÖGS), cf. Šarac et al. (2007)
The most 'neutral' position for wh-items is at the left edge.
Israeli Sign Language (Israeli SL), cf. Meir (2004)
Sign Language of the Netherlands (NGT), cf. Aboh and Pfau (2011)
Catalan Sign Language (LSC), cf. Quer et al. (2005)
Spanish Sign Language (LSE), cf. Herrero (2009)
The natural position of wh-phrases is at the right edge.
Japanese Sign Language (NS), cf. Morgan (2006)
Wh-signs are typically, but not necessarily, clause-final. Wh-phrases can also occur in situ and on
the left, in which case placement of a copy at the end of the sentence is not unusual.
Hong Kong Sign Language (HKSL), cf. Tang (2006)
The wh-signs for argument questions are either in situ or in clause-final position. Wh-signs for
adjuncts are generally clause-final. Movement of the wh-sign in clause-initial position is not allowed.
Italian Sign Language (LIS)
Indo-Pakistani Sign Language (IPSL)
Wh-phrases move to the right periphery, while movement to the left periphery is altogether
banned.

Finally, Zeshan (2004), in a study that includes data from 35 different sign languages, claims that "across the sign languages in the data, the most common syntactic positions for question words are clause-initial, clause-final, or both of these, that is, a construc-
tion with a doubling of the question word […]. In situ placement of question words
occurs much less often across sign languages and may be subject to particular restric-
tions”.
One should be very cautious when drawing a generalization from these data, since
the set of sign languages for which the relevant information is available is still very
restricted, not to mention the fact that much controversy remains even for better stud-
ied sign languages, such as ASL. However, it is clear that there are some languages
(LIS, IPSL, and HKSL being the clearest cases and Israeli SL, LSC, LSE, NGT, and
NS being other plausible candidates) in which the right periphery of the clause is the
only natural position for wh-items. In other sign languages the pattern is more compli-
cated, since other positions for wh-signs are available as well. Finally, in only one sign
language in this group (ÖGS), the right periphery might not be accessible at all. There-
fore, it seems that the best guess based on the available knowledge is that the macrotypo-
logical variation between sign and spoken languages in the positioning of wh-items is
real. This is not necessarily an argument in favor of the rightward movement analysis,
since there are other possible explanations for the right peripheral position of wh-
phrases, i.e. remnant movement accounts. Still, even if some form of the remnant move-
ment proposals is right, it remains to be understood why remnant movement is more
widespread in content questions in sign languages than in spoken languages. All in all,
it seems fair to conclude that one argument originally used against the rightward move-
ment analysis for ASL by Petronio and Lillo-Martin, namely that it would introduce a
type of movement unattested in other languages, has been somewhat weakened by
later research on other sign languages.
There is another tentative generalization that future research should evaluate. Sign
languages for which a formal account has been proposed seem to come in two main
groups. On the one side, one finds languages like ASL, LSB, and HZJ. In these lan-
guages, both the left and the right periphery are accessed by the wh-sign, although the
extent to which this can happen remains controversial (at least in ASL). On the other
side, IPSL and LIS are clearly distinct, since wh-words are not allowed to sit in the left
periphery under any condition (this is a pre-theoretical description; if remnant move-
ment analyses are right, wh-phrases access the left periphery in LIS and IPSL as well).
Interestingly, ASL, LSB, and HZJ are SVO, while IPSL and LIS are SOV. It has been
proposed that the position of wh-phrases may be correlated to word order. In particu-
lar, Bach (1971), with leftward movement in spoken languages in mind, claimed that wh-movement is confined to languages that are not inherently SOV. The status of
Bach’s generalization is not entirely clear. An automatic search using the tools made
available by the World Atlas of Language Structures Online reveals that, out of 497
languages listed as SOV, 52 (roughly 10%) display sentence-initial interrogatives (this search was made
by combining “Feature 81: Order of Subject, Object and Verb” (Dryer 2009c) and
“Feature 93: Position of Interrogative Phrases in Content Questions” (Dryer 2009b)).
However, Bach’s generalization is taken for granted in much theoretically oriented
work (for example, Kayne (1994) tries to capture it in his antisymmetric framework)
and it is rather clear that it holds for better-studied SOV languages (Basque, Japanese,
Turkish, or Hindi, among others).
Assuming that Bach’s generalization is on the right track, it should be qualified
once sign languages enter into the picture. The qualified generalization would state
that in both sign and spoken languages wh-phrases can access the left periphery only
if the language is not SOV. However, while wh-phrases remain in situ in SOV spoken
languages, they can surface in the right periphery in SOV sign languages. It should be
stressed that at present this is a very tentative generalization and only further crosslin-
guistic research on sign (and spoken) languages can confirm or reject it.
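Schematically, the qualified generalization amounts to the following (a tentative summary of the pattern just described):
– non-SOV languages, spoken and signed: wh-phrases can access the left periphery;
– SOV spoken languages: wh-phrases remain in situ;
– SOV sign languages: wh-phrases can surface in the right periphery.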

4. Other constructions with wh-phrases

In spoken languages, wh-phrases are found in constructions distinct from content ques-
tions. These include full relative clauses, free relatives, exclamatives, rhetorical ques-
tions, and pseudoclefts. It is interesting to ask whether the occurrence of wh-movement
is also observed in the corresponding constructions in sign languages. This issue is
relevant for the debate concerning the role of wh-phrases in content questions (cf.
Aboh and Pfau’s (2011) claim, based on IPSL, that wh-phrases, not being inherently
interrogative, are not the crucial factor that makes a sentence interrogative).
The first observation is that in no known sign language are (full) relative clauses
formed by wh-movement, notwithstanding the fact that relative constructions in sign
languages replicate all the major strategies of relativization identified in spoken lan-
guages, namely internally headed relatives, externally headed relatives, and correla-
tives. Detailed descriptions of relative constructions are available for three sign lan-
guages: ASL, LIS, and DGS. LIS relative constructions have been analyzed as either
internally headed relatives (Branchini 2006; Branchini/Donati 2009) or as correlatives
(Cecchetto et al. 2006). Pfau and Steinbach (2005) claim that DGS displays externally
headed relative clauses. According to Liddell (1978, 1980), in ASL both internally and
externally headed relative clauses can be identified (cf. Wilbur/Patschke (1999) for
further discussion on ASL relatives; also see chapter 16, Complex Sentences, for discus-
sion of relative clauses). Interestingly, although relative markers have been identified in
all these languages, they are morphologically derived from demonstrative or personal
pronouns, not from wh-signs. The lack of use of wh-items in full relative clauses (if
confirmed for other sign languages) is an issue that deserves further analysis.
A related question is whether wh-NMM, intended as the non-manual marking nor-
mally found in content questions, is intrinsically associated with wh-signs. The answer
to this question must be negative, since it is clear that there are various constructions
in which wh-signs occur with a NMM different from wh-NMM. We already mentioned
structures like (7) above, which are analyzed as indirect questions by Petronio and
Lillo-Martin (1997) and do not display the wh-NMM normally found in ASL.
However, the best-studied case of a wh-construction occurring without wh-NMM
is the ASL construction illustrated in (25) (Branchini (2006) notes a similar construc-
tion in LIS):

re
(25) john buy what, book [ASL]
‘The thing/What John bought is a book.’

Superficially, the construction in (25) resembles a question-answer pair at the discourse level, but there is evidence that it must be analyzed as a sentential unit. The first
obvious observation is that, if the sequence john buy what were an independent ques-
tion, we would expect the canonical wh-NMM to occur. However, eyebrow raise (in-
stead of furrowing) occurs. Davidson et al. (2008, in press) discuss further evidence
that structures like (25) are declarative sentences. For example, they show that these
structures can be embedded under predicates which take declarative clauses as comple-
ments (hope, think, or be-afraid), but not under predicates that take interrogative
clauses as complements, such as ask (see also Wilbur 1994):

re
(26) those girls hope [their father buy what, car] [ASL]
‘Those girls hope that the thing/what their father bought is a car.’

(27) *those girls ask [their father buy what, car]

A natural analysis takes the ASL sentence (25) to be the counterpart of the English
pseudocleft sentence ‘What John bought is a book’ (cf. Petronio (1991) and Wilbur
(1996) for this type of account). Under this analysis, the wh-constituent in (25) would
be taken to be a free relative (but see Ross (1972), den Dikken et al. (2000), Schlenker
(2003) for analyses that reject the idea that a pseudocleft contains a free relative).
However, Davidson et al. (2008, in press) object to a pseudocleft analysis, based on
various facts, including the observation that, unlike free relatives in English, any wh-
word (who, where, why, which, etc.) can appear in structures like (25). As a result,
they conclude that the wh-constituent in (25) is an embedded question, not a free rela-
tive.
The proper characterization of the wh-constituent in sentences like (25) bears on
the controversy concerning the position of wh-items in ASL, since there seems to be
a consensus that, at least in this construction, wh-items must be clause-final. So, if the
wh-constituent in (25) were a question, it would be an undisputed case of a question
in which the wh-item must be right peripheral.
One question that arises is what can explain the distribution of wh-NMM, since it
is clear that wh-items are not intrinsically equipped with it. There is consensus that the
distribution of wh-NMM is largely determined by syntactic factors, although different
authors may disagree on the specifics of their proposal (NKMBL and Wilbur and
Patschke 1999 claim that wh-NMM is a manifestation of the wh feature in COMP,
Petronio and Lillo-Martin (1997) argue that wh-NMM expresses the combination of
wh and Focus features in COMP, and CGZ claim that wh-NMM marks the wh-depend-
ency). However, it has been proposed that non-syntactic factors play an important role
as well. For example, Sandler and Lillo-Martin (2006), reporting work in Hebrew by
Meir and Sandler (2004), remark that the facial expression associated with content
questions in Israeli SL (furrowed brow) is replaced by a different expression if the
question does not require an answer but involves reproach (as in the Israeli SL ver-
sion of the question “Why did you just walk out of my store with that shirt without
paying?”). Sandler and Lillo-Martin conclude that the pragmatic condition of a content
question is crucial in determining the type of NMM that surfaces: when the speaker
desires an answer involving content, wh-NMM is typically used, but when the informa-
tion being questioned is already known, wh-NMM is replaced with a different expres-
sion.
Since it is commonly assumed that wh-NMM has the characteristics of a prosodic
element (intonation), it is not surprising that prosodic considerations play a role in its
distribution. In particular, Sandler and Lillo-Martin discuss some cases in which
wh-NMM is determined by Intonation Phrasing (for example, if a parenthetical inter-
rupts a wh-question, wh-NMM stops being articulated over the parenthetical and resumes over the portion of the clause that follows it).
All in all, wh-NMM is a phenomenon at the interface between syntax and phonol-
ogy with important consequences for the pragmatic uses of content questions. Whereas
its syntactic role is not in question, only a combined account can explain its precise
distribution.

5. Conclusion
Results emerging from the research on questions in sign languages have proved impor-
tant both for linguists interested in formal accounts and for those interested in language
typology. On the one hand, some well-established cross-linguistic generalizations about
the position of interrogative elements in content questions need some revision or quali-
fication once sign languages are considered. On the other, pieces of the formal appara-
tus of analysis, like the position of specifiers in the syntactic structure, the notion of
chain and that of copy/trace, may need refining, since the sign language pattern is
partially different from that emerging from spoken languages.
Thus, the formal theory of grammar may be considerably enriched and modified by
the study of sign languages. The opposite holds as well, however. The pattern observed
with sign languages is so rich and complex that no adequate description could be
reached without a set of elaborate working hypotheses that can guide the research.
Eventually, these working hypotheses can be revised or even rejected, but they are
crucial in order to orientate the research.
It is unfortunate that the same virtuous interaction between empirical observation
and theoretical approaches has not been observed in the study of other sentence types.
In particular, an in-depth investigation of imperatives (and exclamatives) in sign languages has yet to be undertaken, and one must hope that this gap will soon be filled.

6. Literature
Aboh, Enoch/Pfau, Roland/Zeshan, Ulrike
2005 When a Wh-Word Is Not a Wh-Word: The Case of Indian Sign Language. In: Bhatta-
charya, Tanmoy (ed.), The Yearbook of South Asian Languages and Linguistics 2005.
Berlin: Mouton de Gruyter, 11–43.
Aboh, Enoch/Pfau, Roland
2011 What’s a Wh-Word Got to Do with It? In: Benincà, Paola/Munaro, Nicola (eds.), Map-
ping the Left Periphery: The Cartography of Syntactic Structures, Vol. 5. Oxford: Oxford
University Press, 91–124.
Bach, Emmon
1971 Questions. In: Linguistic Inquiry 2, 153–166.
Baker, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver
Spring, MD: T.J. Publishers.
Barattieri, Chiara
2006 Il periodo ipotetico nella Lingua dei Segni Italiana (LIS). MA Thesis, University of
Siena.
Branchini, Chiara
2006 On Relativization and Clefting in Italian Sign Language (LIS). PhD Dissertation, Uni-
versity of Urbino.
Branchini, Chiara/Donati, Caterina
2009 Relatively Different: Italian Sign Language Relative Clauses in a Typological Perspec-
tive. In: Liptàk, Anikó (ed.), Correlatives Cross-Linguistically. Amsterdam: Benjamins,
157–194.
Cecchetto, Carlo
2006 Reconstruction in Relative Clauses and the Copy Theory of Traces. In: Pica, Pierre/
Rooryck, Johan (eds.), Linguistic Variation Yearbook 5. Amsterdam: Benjamins, 73–103.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro
2006 Strategies of Relativization in Italian Sign Language. In: Natural Language and Linguis-
tic Theory 24, 945–975.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro
2009 Another Way to Mark Syntactic Dependencies. The Case for Right Peripheral Specifi-
ers in Sign Languages. In: Language 85(2), 278–320.
Cheng, Lisa
1991 On the Typology of Wh-questions. PhD Dissertation, MIT.
Chomsky, Noam
1993 A Minimalist Program for Linguistic Theory. In: Hale, Kenneth/Keyser, Samuel Jay
(eds.), The View from Building 20. Cambridge, MA: MIT Press, 1–52.
Coerts, Jane
1992 Nonmanual Grammatical Markers. An Analysis of Interrogatives, Negations and Topi-
calisations in Sign Language of the Netherlands. PhD Dissertation, University of Am-
sterdam.
Coulter, Geoffrey R.
1979 American Sign Language Typology. PhD Dissertation, University of California, San
Diego.
Davidson, Kathryn/Caponigro, Ivano/Mayberry, Rachel
2008 Clausal Question-answer Pairs: Evidence from ASL. In: Abner, Natasha/Bishop, Jason
(eds.), Proceedings of the 27 th West Coast Conference on Formal Linguistics. Somerville,
MA: Cascadilla Press, 108–115.
Davidson, Kathryn/Caponigro, Ivano/Mayberry, Rachel
in press The Semantics and Pragmatics of Clausal Question-Answer Pairs in American Sign
Language. To appear in Proceedings of SALT XVIII.
Deguchi, Masanori/Kitagawa, Yoshihisa
2002 Prosody and Wh-Questions. In: Hirotani, Masako (ed.), Proceedings of the Thirty-Sec-
ond Annual Meeting of the North East Linguistic Society. Amherst, MA: GLSA, 73–92.
Dikken, Marcel den/Meinunger, André/Wilder, Chris
2000 Pseudoclefts and Ellipses. In: Studia Linguistica 54, 41–89.
Dryer, Matthew S.
2009a Polar Questions. In: Haspelmath, Martin/Dryer, Matthew S./Gil, David/Comrie, Ber-
nard (eds.), The World Atlas of Language Structures Online. Munich: Max Planck Digi-
tal Library, Chapter 116. [http://wals.info/feature/116]
Dryer, Matthew S.
2009b Position of Interrogative Phrases in Content Questions. In: Haspelmath, Martin/Dryer,
Matthew S./Gil, David/Comrie, Bernard (eds.), The World Atlas of Language Structures
Online. Munich: Max Planck Digital Library, Chapter 92. [http://wals.info/feature/92]
Dryer, Matthew S.
2009c Order of Subject, Object and Verb. In: Haspelmath, Martin/Dryer, Matthew S./Gil,
David/Comrie, Bernard (eds.), The World Atlas of Language Structures Online. Munich:
Max Planck Digital Library, Chapter 81. [http://wals.info/feature/81]
Dubuisson, Colette/Boulanger, Johanne/Desrosiers, Jules/Lelièvre, Linda
1991 Les mouvements de tête dans les interrogatives en langue des signes québécoise. In:
Revue québécoise de linguistique 20(2), 93⫺122.
Dubuisson, Colette/Miller, Christopher/Pinsonneault, Dominique
1994 Question Sign Position in LSQ (Québec Sign Language). In: Ahlgren, Inger/Bergman,
Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure. Papers from the
Fifth International Symposium on Sign Language Research (Vol. 1). Durham: Interna-
tional Sign Linguistics Association and Deaf Studies Research Unit, University of Dur-
ham, 89⫺104.
Felser, Claudia
2004 Wh-copying, Phases and Successive Cyclicity. In: Lingua 114, 543⫺574.
Ferreira-Brito, Lucinda
1995 Por uma gramática das línguas de sinais. Rio de Janeiro: Tempo Brasileiro, UFRJ.
Fischer, Susan D.
1975 Influences on Word-order Change in American Sign Language. In: Li, Charles (ed.),
Word Order and Word Order Change. Austin: University of Texas Press, 1⫺25.
Geraci, Carlo
2006 Negation in LIS. In: Bateman, Leah/Ussery, Cherlon (eds.), Proceedings of the Thirty-
Fifth Annual Meeting of the North East Linguistic Society, Vol. 2. Amherst, MA: GLSA,
217⫺230.
Herrero, Ángel
2009 Gramática didáctica de la lengua de signos española. Madrid: Ediciones SM-CNSE.
Ishihara, Shinichiro
2002 Invisible but Audible Wh-Scope Marking: Wh-Constructions and Deaccenting in Japa-
nese. In: Mikkelsen, Line/Potts, Christopher (eds.), Proceedings of the 21st West Coast
Conference on Formal Linguistics (WCCFL 21). Somerville, MA: Cascadilla Press,
180⫺193.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language: An Introduction to Australian Sign Language Linguistics.
Cambridge: Cambridge University Press.
Kayne, Richard
1994 The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
Kayne, Richard
1998 Overt vs. Covert Movement. In: Syntax 1(2), 128⫺191.
Liddell, Scott K.
1978 Nonmanual Signals and Relative Clauses in American Sign Language. In: Siple, Patricia
(ed.), Understanding Language Through Sign Language Research. New York: Academic
Press, 59⫺90.
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Lillo-Martin, Diane/Fischer, Susan D.
1992 Overt and Covert Wh-Questions in American Sign Language. Paper Presented at the
Fifth International Symposium on Sign Language Research, Salamanca, Spain.
MacFarlane, James
1998 From Affect to Grammar: Ritualization of Facial Affect in Signed Languages. Paper
Presented at the Theoretical Issues in Sign Language Research Conference (TISLR),
Gallaudet University. [http://www.unm.edu/~jmacfarl/eyebrow.html]
McKee, Rachel
2006 Aspects of Interrogatives and Negation in New Zealand Sign Language. In: Zeshan,
Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen:
Ishara Press, 70⫺90.
Meir, Irit
2004 Question and Negation in Israeli Sign Language. In: Sign Language & Linguistics 7,
97⫺124.
Meir, Irit/Sandler, Wendy
2004 Safa bamerxav: Eshnav le-sfat hasimanim hayisraelit (Language in Space: A Window
on Israeli Sign Language). Haifa: University of Haifa Press.
Morgan, Michael
2006 Interrogatives and Negatives in Japanese Sign Language (JSL). In: Zeshan, Ulrike
(ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara
Press, 91⫺127.
Neidle, Carol/MacLaughlin, Dawn/Lee, Robert/Bahan, Benjamin/Kegl, Judy
1998 Wh-Questions in ASL: A Case for Rightward Movement. American Sign Language
Linguistic Research Project Reports, Report 6. [http://www.bu.edu/asllrp/reports.html]
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Neidle, Carol
2002 Language Across Modalities: ASL Focus and Question Constructions. In: Pica, Pierre/
Rooryck, Johan (eds.), Linguistic Variation Yearbook 2. Amsterdam: Benjamins, 71⫺93.
Nunes, Jairo
2004 Linearization of Chains and Sideward Movement. Cambridge, MA: MIT Press.
Nunes, Jairo/Quadros, Ronice M. de
2008 Phonetically Realized Traces in American Sign Language and Brazilian Sign Language.
In: Quer, Josep (ed.), Signs of the Time, Selected Papers from TISLR 2004. Hamburg:
Signum, 177⫺190.
Petronio, Karen
1991 A Focus Position in ASL. In: Bobaljik, Jonathan D./Bures, Tony (eds.), Papers from
the Third Student Conference in Linguistics. (MIT Working Papers in Linguistics 14.)
Cambridge, MA: MIT, 211⫺225.
Petronio, Karen/Lillo-Martin, Diane
1997 Wh-Movement and the Position of Spec-CP: Evidence from American Sign Language.
In: Language 73, 18⫺57.
Pfau, Roland/Steinbach, Markus
2005 Relative Clauses in German Sign Language: Extraposition and Reconstruction. In: Ba-
teman, Leah/Ussery, Cherlon (eds.), Proceedings of the Thirty-Fifth Annual Meeting of
the North East Linguistic Society, Vol. 2. Amherst, MA: GLSA, 507⫺521.
Poletto, Cecilia/Pollock, Jean-Yves
2004 On the Left Periphery of Some Romance Wh-questions. In: Rizzi, Luigi (ed.), The
Structure of CP and IP: The Cartography of Syntactic Structures. Oxford: Oxford Uni-
versity Press, 251⫺296.
Quadros, Ronice M. de
1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifícia Universidade
Católica, Rio Grande do Sul.
Quadros, Ronice M. de
2006 Questions in Brazilian Sign Language (LSB). In: Zeshan, Ulrike (ed.), Interrogative
and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 270⫺283.
Quer, Josep et al.
2005 Gramàtica bàsica LSC. Barcelona: DOMAD-FESOCA.
Richards, Norvin
2006 Beyond Strength and Weakness. Manuscript, MIT.
Rizzi, Luigi
1990 Relativized Minimality. Cambridge, MA: MIT Press.
Ross, John R.
1972 Act. In: Davidson, Donald/Harman, Gilbert (eds.), Semantics of Natural Language.
Dordrecht: Reidel, 70⫺126.
Sadock, Jerrold M./Zwicky, Arnold M.
1985 Speech Act Distinctions in Syntax. In: Shopen, Timothy (ed.), Language Typology and
Syntactic Description. Cambridge: Cambridge University Press, 155⫺196.
Sandler, Wendy
1999 Prosody in Two Natural Language Modalities. In: Language and Speech 42, 127⫺142.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Schlenker, Philippe
2003 Clausal Equations (A Note on the Connectivity Problem). In: Natural Language and
Linguistic Theory 21, 157⫺214.
Šarac Kuhn, Ninoslava/Wilbur, Ronnie
2006 Interrogative Structures in Croatian Sign Language: Polar and Content Questions. In:
Sign Language & Linguistics 9, 151⫺167.
Šarac, Ninoslava/Schalber, Katharina/Alibašić, Tamara/Wilbur, Ronnie
2007 Crosslinguistic Comparison of Interrogatives in Croatian, Austrian, and American Sign
Languages. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Varia-
tion: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter,
207⫺244.
Savolainen, Leena
2006 Interrogatives and Negatives in Finnish Sign Language: An Overview. In: Zeshan, Ul-
rike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara
Press, 284⫺302.
Spolaore, Chiara
2006 Italiano e Lingua dei Segni Italiana a confronto: l’imperativo. MA Thesis, University
of Venice.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Tang, Gladys
2006 Questions and Negation in Hong Kong Sign Language. In: Zeshan, Ulrike (ed.), Inter-
rogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 198⫺
224.
Van Herreweghe, Mieke/Vermeerbergen, Myriam
2006 Interrogatives and Negatives in Flemish Sign Language. In: Zeshan, Ulrike (ed.), Inter-
rogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 225⫺
257.
Wilbur, Ronnie
1994 Foregrounding Structures in American Sign Language. In: Journal of Pragmatics 22,
647⫺672.
Wilbur, Ronnie
1996 Evidence for the Function and Structure of Wh-Clefts in American Sign Language. In:
Edmondson, William/Wilbur, Ronnie (eds.), International Review of Sign Linguistics.
Hillsdale, NJ: Lawrence Erlbaum Associates, 209⫺256.
Wilbur, Ronnie/Patschke, Cynthia
1999 Syntactic Correlates of Brow Raise in ASL. In: Sign Language & Linguistics 2(3), 3⫺41.
Zanuttini, Raffaella/Portner, Paul
2003 Exclamative Clauses: At the Syntax-semantics Interface. In: Language 79(1), 39⫺81.
Zeshan, Ulrike
2003 Indo-Pakistani Sign Language Grammar: A Typological Outline. In: Sign Language
Studies 3, 157⫺212.
Zeshan, Ulrike
2004 Interrogative Constructions in Sign Languages ⫺ Cross-linguistic Perspectives. In: Lan-
guage 80, 7⫺39.
Zeshan, Ulrike
2006 Negative and Interrogative Structures in Turkish Sign Language (TİD). In: Zeshan,
Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen:
Ishara Press, 128⫺164.

Carlo Cecchetto, Milan (Italy)


15. Negation
1. Introduction
2. Manual negation vs. non-manual marking of negation
3. Syntactic patterns of negation
4. Negative concord
5. Lexical negation and morphological idiosyncrasies of negatives
6. Concluding remarks
7. Literature

Abstract
The expression of sentential negation in sign languages features many of the morphologi-
cal and syntactic properties attested for spoken languages. However, non-manual mark-
ers of negation such as headshake or facial expression have been shown to play a central
role in this type of language, and they interact in various interesting ways with manual
negatives and with the syntactic structure of negative clauses, thus introducing modality-
specific features. Particular sign language grammars are parametrized as to whether sen-
tential negation can be encoded solely with a manual or a non-manual element, or with
both. Multiple expression of negation at the manual level is another point of variation.
Pending further detailed descriptions and syntactic analyses of negation in a larger pool
of sign languages, it can be safely concluded that negation systems in the visual-gestural
modality show the richness and complexities attested for natural languages in general.

1. Introduction
Within the still limited body of research on the grammar of sign languages, the expres-
sion of negation is one of the few phenomena that has received a considerable amount
of attention. Apart from quite a number of descriptions and analyses of negative struc-
tures in individual sign languages, negation has been the object of a crosslinguistic
project which investigated selected aspects of the grammar of a wide sample of sign
languages from a typological perspective (Zeshan 2004, 2006a,b). A reason for the
special attention devoted to the grammar of negation might lie in the fact that it consti-
tutes a domain of grammar where manual and non-manual elements interact in very
rich and intricate ways: beyond the superficial first impression that all sign languages
negate by resorting to similar mechanisms, their negation systems display remarkably
diverse constraints that interact in complex ways with the different components of
each individual grammar. The main manual and non-manual ingredients of linguistic
negation can be traced back to affective and conventionalized gestures of the hearing
community the languages are embedded in, and it is precisely for this reason that the
results of the research carried out in this domain provide strong evidence for the lin-
guistic properties that recruited those gestures and integrated them into sophisticated
linguistic systems. At the same time, the origin of many negative markers reinforces
the hypothesis that contemporary sign languages, as a consequence of their relative
youth and the medium in which they are articulated and perceived, systematically
feature gestural and spatial resources which have been available during their genesis
period and subsequent (re)creolization phases.
Looking into the properties of sign language negation systems is motivated by the
need to offer a more accurate characterization of the role of the different non-manual
markers that are used. It has been argued that non-manuals play different roles at each
linguistic level (lexical marking, morphology, syntax, prosody; for an overview, see
Pfau/Quer 2010), and detailed analyses of negatives in different languages strongly
suggest that non-manual markers can be recruited for different functions at different
levels across languages. This result is of utmost importance in order to tease apart
linguistic vs. gestural non-manuals, which systematically co-occur within the same me-
dium in visual-gestural languages.
This chapter offers an overview of the most representative traits of the sentential
negation systems of the sign languages reported upon so far and highlights general
tendencies as well as interesting language-specific particularities. As Zeshan (2004)
points out, it might be too early to offer comprehensive typological analyses of sign
languages, given the insufficient number of studied sign languages for statistical analy-
sis as well as their unbalanced geographical distribution. Still, crosslinguistic compari-
son already yields quite a robust picture of the existing variation and it also allows for
analyzing the attested variation against the background of spoken language negation.
At the same time, theoretical syntax can also benefit from in-depth analyses of sign
language negation systems, as they constitute the testing ground for existing accounts
of the syntactic representation of functional elements.
The focus of section 2 is on the main types of manual and non-manual compo-
nents of negation. The form of manual sentence negators is reviewed and regular
and irregular negative signs are characterized. Next, the different head movements
and facial expressions that encode negation are described. Section 3 focuses on
certain syntactic properties attested in sign language negation: the interaction with
other syntactic categories, manual negation doubling, and spreading of non-manual
markers. Section 4 addresses the multiple expression of negation in patterns of split
negation and negative concord from a syntactic point of view. In section 5, non-
sentential manual negation is discussed, together with some morphological idiosyn-
crasies associated with it.

2. Manual negation vs. non-manual marking of negation

For almost all sign languages described to date, sentential negation has been found
to rely on two basic components: manual signs that encode negative meanings
ranging from the basic negative operator to very specific ones, as well as different
types of non-manual markers that can be either co-articulated with manual negative
signs or, in some cases, with other lexical signs in order to convey negation on
their own. With respect to these two components, we find a first parameter of
crosslinguistic variation: while some sign languages appear to be able to encode
sentential negation by means of a non-manual marker alone which is obligatory
(e.g., American Sign Language (ASL), German Sign Language (DGS), and Catalan
Sign Language (LSC)), in other languages, the presence of a non-manual marker
is insufficient to negate the sentence and thus, a manual negator is required for
that function (e.g., Italian Sign Language (LIS), Jordanian Sign Language (LIU),
and Turkish Sign Language (TİD)). Zeshan (2006b, 46) labels languages of the
former type “non-manual dominant” and languages of the latter type “manual
dominant” languages. On the basis of her language sample, she establishes that
non-manual dominant languages are a majority. In (1) and (2), examples of the
two types of language with respect to this parameter illustrate the combinatorial
possibilities of headshake with and without manual negation in LSC (Quer 2007)
and LIS (Geraci 2005), which are non-manual dominant and manual dominant, re-
spectively.

( ( )) hs
(1) a. santi meat eat not [LSC]
( ) hs
b. santi meat eat
‘Santi doesn’t eat meat.’
hs
(2) a. paolo contract sign non [LIS]
‘Paolo didn’t sign the contract.’
( ( ( )))
b. * paolo contract sign

As we observe in the LIS example in (2), it is not the case that in manual dominant
languages non-manual markings are totally absent. Rather, they are generally co-
articulated with the manual negation and tend not to spread over other manual
material. When there are several negative markers, the choice of non-manual is
usually determined by the lexical negation, unlike what happens in non-manual
dominant languages.
It is important to notice that the function of non-manual marking of negation in
non-manual dominant languages is almost exclusively to convey sentential negation
(although see section 2.2 for some data that qualify this generalization). This is in
contrast to manual negations, which often include more specific signs encoding
negation and some other functional category such as aspect or modality, or a
portmanteau sign conveying the negation of existence.

2.1. Manual negation

2.1.1. Negative particles

Standard sentential negation is realized in many sign languages by a manual sign
that simply negates the truth of the proposition, such as the one found in LIU and
LSC consisting of an index handshape with the palm facing outwards and slightly
moving from side to side, as illustrated in Figure 15.1 and exemplified in (3) for
LIU (Hendriks 2007, 107).
Fig. 15.1: Neutral sentential negation neg in LIU. Copyright © 2007 by Bernadet Hendriks.
Reprinted with permission.

(3) father mother deaf index1 neg // speak [LIU]
‘My father and mother aren’t Deaf, they speak.’

However, basic sentential negation can occasionally carry an extra layer of pragmatic
meaning: in a few instances, sentential negation signs have been claimed to convey
some presupposition, as neg-contr in Indo-Pakistani Sign Language (IPSL) (Zeshan
2004, 34 f.) or no-no in TİD (Zeshan 2006c, 154 f.). In such cases, the negative
particle explicitly counters a conversational presupposition, which may be implicit,
as in (4), or explicit in the preceding discourse, as in (5).

(4) problem neg-contr [IPSL]
‘There is no problem (contrary to what has been said/what is usually as-
sumed/what you may be expecting).’

(5) village good / city neg-contr [IPSL]
‘Villages are nice, but cities are not.’

A further nuance that is often added to basic sentential negation is emphasis,
normally expressed through dedicated non-manual markings accompanying the man-
ual negator, as reported in McKee (2006, 82) for New Zealand Sign Language
(NZSL). Nevertheless, some languages have specialized manual negations that have
been characterized as emphatic, with the meaning ‘not at all’ or ‘absolutely not’.
An example of this is the Finnish Sign Language (FinSL) sign no (‘absolutely not’)
illustrated in (6) (Savolainen 2006, 296).

re head turn+back/neg mouthing/squint
(6) index1 come no [FinSL]
‘I’m definitely not coming!’

As we will see in section 4, doubling of a negative sign or negative concord results
in emphasis on the negation as well.
As syntactic markers of negation, negative signs occasionally display features
that are normally relevant in other domains of syntax. One such example might be
what has been characterized as person inflection for the NZSL sign nothing, which
can be articulated in locations associated with person features (McKee 2006, 85).

2.1.2. Irregular negatives

Manual sentential negation across sign languages usually features a number of
lexical signs that incorporate negation either in a transparent way or opaquely in
suppletive forms. Both types are usually referred to as instances of negation incorpo-
ration. Zeshan (2004) calls this group of items irregular negatives and points out
that sign languages tend to display some such items crosslinguistically. The majority
of such signs belong to recognizable semantic classes of predicates such as those
expressing cognition (‘know’, ‘understand’), emotion or volition (‘like’, ‘want’), a
modal meaning (‘can’, ‘need’, ‘must’), or possession/existence (‘have’, ‘there-be’).
See, for instance, the minimal LSC pair can vs. cannot in Figure 15.2.

can cannot
Fig. 15.2: LSC pair can vs. cannot

In addition, evaluative predicates (‘good’, ‘enough’) and grammatical tense/aspect
notions such as perfect or future tend to amalgamate with negation lexically as
well, as in the Hong Kong Sign Language (HKSL) negated future sign won’t and
the negated perfect sign not-yet (Tang 2006, 219):

neg
(7) kenny february fly taiwan won’t [HKSL]
‘Kenny won’t fly to Taiwan in February.’
neg
(8) (kenny) participate research not-yet [HKSL]
‘Kenny has not yet participated in the research.’
Some items belonging in this category can also have an emphatic nuance, as never-
past or never-future in Israeli Sign Language (Israeli SL, Meir 2004, 110).
Among the set of irregular negative signs, two different types can be distin-
guished from the point of view of morphology: on the one hand, transparent forms
where the negative has been concatenated or cliticized onto a lexical sign or else
a negative morpheme (simultaneous or sequential) has been added to the root
(Zeshan 2004, 45⫺49); on the other hand, suppletive negatives, that is, totally
opaque negative counterparts of existing non-negated signs. An example of the
latter group has been illustrated in Figure 15.2 (above) for LSC. In the case of
negative cliticization, a negative sign existing independently is concatenated with
another sign but the resulting form remains recognizable and both signs retain their
underlying movement, albeit more compressed, and no handshape assimilation oc-
curs. The interpretation of both signs together is fully compositional. An illustration
of such a case is shown in Figure 15.3 for TİD, where the cliticized form of not
can be compared to the non-cliticized one (Zeshan 2004, 46).

a. know^not b. not
Fig. 15.3: TİD cliticized (a) vs. non-cliticized (b) negation. Copyright © 2004 by Ulrike Zeshan.
Reprinted with permission.

a. need b. need-not
Fig. 15.4: Irregular simultaneous affixal negation with the verb need in FinSL. Copyright ©
2004 by Ulrike Zeshan. Reprinted with permission.
The other process found in the formation of irregular negatives is affixation,
which can be simultaneous or sequential. An illustrative case of simultaneous nega-
tive affixation found in FinSL (Zeshan 2004, 47⫺49; Savolainen 2006, 299⫺301) is
illustrated in Figure 15.4: the affix consists of a change in palm orientation that,
depending on the root it combines with, can result in a negative with a horizontal
upwards or vertical inwards oriented open handshape. The simultaneous morpheme
does not have an independent movement and in some cases it assimilates its
handshape to the one of the root (e.g., see-not). In addition, the resulting negative
sign can have a more specific or idiosyncratic meaning (e.g., perfective/resultative
in see-not with the interpretation ‘have not seen, did not see’; hear-not meaning
‘have not heard, do not know’). The negative morpheme has no free-occurring
counterpart and combines with a restricted set of lexical items, thus displaying
limited productivity.
Occasionally, affixation involves a specific handshape, as the extended pinky
handshape in HKSL (see section 5 for further discussion). This handshape is derived
from a sign meaning bad/wrong and is affixed to certain items, giving rise to
negative signs such as know-bad (‘don’t know’) or understand-bad (‘don’t under-
stand’), as illustrated in Figure 15.5 (Tang 2006, 223).

know know-bad
Fig. 15.5: Irregular simultaneous affixal negation by means of handshape with the verb know
in HKSL. Copyright © 2006 by Ishara Press. Reprinted with permission.

Sequential affixation has been shown most clearly to be at stake in the ASL
suffix ^zero, which is formationally related to the sign nothing. Aronoff et al.
(2005, 328⫺330) point out that ^zero shows the selectivity and behavior typical of
morphological affixes: it only combines with one-handed plain verbs; the path
movements get compressed or coalesce; the non-manuals span the two constituent
parts of the sign; no handshape assimilation occurs; and some of the negative forms
yield particular meanings. These phenomena clearly distinguish this process from
compound formation. A similar derivational process is observed in Israeli SL, where
the relevant suffix ^not-exist can give rise to negative predicates with idiosyncratic
meanings such as surprise+not-exist (‘doesn’t interest me at all’) or enthusiasm+
not-exist (‘doesn’t care about it’) (Meir 2004, 116; for more details, see section
5 below).
When an irregular negative is available in the language, it normally blocks the
option of combining the non-negative predicate with an independent manual negator
or with a non-manual marker, if this non-manual can convey sentential negation
on its own. Compare the ungrammatical LSC example (9) with Figure 15.2 above,
which shows the suppletive form cannot:

( ) hs
(9) * can (not) [LSC]

Nevertheless, this is not always the case and sometimes both options co-exist, as
reported for LIS in Geraci (2005).

2.1.3. Negation in the nominal and adverbial domain

Apart from negative marking related to the predicate, negation is often encoded in
the nominal domain and in adverbials as well. Negative determiners glossed as no
and negative quantifiers (pronouns, in some descriptions) such as none, nothing,
or no one occur in many of the sign languages for which a description of the
negation system exists. The LIS example in (10) illustrates the use of a nominal
negative (Geraci 2005):

hs
(10) contract sign nobody [LIS]
‘Nobody signed the contract.’

Two distinct negative determiners have been identified for ASL: nothing and noº,
illustrated in (11) (Wood 1999, 40).

(11) john break fan nothing/noº [ASL]
‘John did not break any (part of the) fan.’

Negative adverbials such as never are also very common. For ASL, Wood (1999)
has argued that different interpretations result from different syntactic positions of
never: when preverbal, it negates the perfect (12a), while in postverbal position, it
yields a negative modal reading (12b).

(12) a. bob never eat fish [ASL]
‘Bob has never eaten fish.’
b. bob eat fish never
‘Bob won’t eat fish.’
Fig. 15.6: LSC negative imperative sign don’t!

2.1.4. Other pragmatically specialized negators

Beyond the strict domain of sentential negation, other occurrences of negatives
must be mentioned. Negative imperatives (or prohibitives) are usually expressed
non-manually in combination with a more general negative particle, but some lan-
guages have a specialized sign for this type of speech act, as in the LSC negative
imperative shown in Figure 15.6 (Quer/Boldú 2006).
In the domain of pragmatically specialized negations, a whole range of signs is
found, including negative responses, refusal of an offer, or denial. Some of these
signs constitute one-word utterances and have been classified as interjections. Meta-
linguistic negation can be included in this set of special uses. Although it is not
common, Japanese Sign Language (NS) has a specialized manual negation to refute
a specific aspect of a previous utterance: differ (Morgan 2006, 114).

2.2. Non-manual markers of negation

It has already been pointed out at the beginning of this section that negation is not only
realized at the manual level, but also at the non-manual one, and that languages
vary as to how these two types of markers combine and to what extent they are
able to convey sentential negation independently of each other (for non-manual
markers, cf. also chapter 4, Prosody). It seems clear that such markers have their
origin in gestures and facial expressions that occur in association with negative
meanings in human interaction. In sign languages, however, these markers have
evolved into fully grammaticalized elements constrained by language-specific gram-
matical rules (see Pfau/Steinbach 2006 and chapter 34). Beyond the actual restric-
tions to be discussed below, especially in section 3, there is psycholinguistic and
neurolinguistic evidence indicating that non-manuals typically show linguistic pat-
terns in acquisition and processing and can be clearly distinguished from affective
communicative behavior (Reilly/Anderson 2002; Corina/Bellugi/Reilly 1999; Atkin-
son et al. 2004). Moreover, unlike gestures, linguistic non-manuals in production
have a discrete onset and offset, are constant, and have a clear and linguistically
defined scope (Baker-Shenk 1983).

2.2.1. Head movements

The main non-manual markers of negation involve some sort of head movement.
The most pervasive one is headshake, a side-to-side movement of the head which
is found in virtually all sign languages studied to date (Zeshan 2004, 11). The
headshake normally associates with the negative sign, if present, but it commonly
spreads over other constituents in the clause. The spreading of negative headshake
is determined by language-specific grammar constraints, as will be discussed in
section 3. In principle, it must be co-articulated with manual material, but some
cases of freestanding headshake have been described, for instance, for Chinese Sign
Language (CSL). Example (13) illustrates that co-articulation of the negative head-
shake with the manual sign leads to ungrammaticality, the only option being articula-
tion after the lexical sign (Yang/Fischer 2002, 176).

(* hs) hs
(13) understand [CSL]
‘I don’t understand.’

Although other examples involving free-standing negative headshakes have been
documented, they can be reduced to instances of negative answers to (real or
rhetorical) questions, as shown in (14a) for NZSL (McKee 2006, 84), or to structures
with contrastive topics, where the predicate is elided, as in (14b) from CSL (Yang/
Fischer 2002, 178).

rhet-q hs
(14) a. worth go conference [NZSL]
‘Is it worth going to the conference? I don’t think so.’
t hs
b. hearing teachers [CSL]
‘(but some) hearing teachers do not [take care of deaf students].’

A different use of a free-standing negative headshake is the one described for
Flemish Sign Language (VGT) (Van Herreweghe/Vermeerbergen 2006, 241), where
it functions as a tag question after an affirmative sentence, as shown in (15).

hs+yn
(15) can also saturday morning / [VGT]
‘It is also possible on Saturday morning, isn’t it?’

Headturn, a non-manual negative marker that is much less widespread than head-
shake, could be interpreted as a reduced form of the latter. It has been described
for British Sign Language (BSL), CSL, Greek Sign Language (GSL), Irish Sign
Language (Irish SL), LIU, Quebec Sign Language (LSQ), Russian Sign Language
(Zeshan 2006b, 11), and VGT.
A third type of non-manual negative marker that has been reported because of
its singularity is head-tilt, which is attested in some sign languages of the Eastern
Mediterranean such as GSL, Lebanese Sign Language (LIL), LIU, and TİD. Just
like the headshake, this non-manual is rooted in the negative gesture used in the
surrounding hearing societies, but as part of the relevant sign language grammars,
it obeys the particular constraints of each one of them. Although it tends to co-
occur with a single negative sign, it can sometimes spread further, even over the
whole clause, in which case it yields an emphatic reading of negation in GSL (cf.
(16)). It can also appear on its own in GSL (unlike in LIL or LIU) (Antzakas
2006, 265).

ht
(16) index1 again go want-not [GSL]
‘I don’t want to go (there) again.’

It is worth noting that when two manual negatives co-occur in the same sentence
and are inherently associated with the same non-manual marker, the latter tends
to spread between the two. This behavior reflects a more general phenomenon
described for ASL as perseveration of articulation of several non-manuals (Neidle
et al. 2000, 45⫺48): both at the manual and non-manual levels, “if the same
articulatory configuration will be used multiple times, it tends to remain in place
between those articulations (if this is possible)”. Spreading of a negative non-manual
is common in sign languages where two negative signs can co-occur in the same
clause, as described for TİD (Zeshan 2006c, 158 f.). If both manual negators are
specified for the same non-manual, it spreads over the intervening sign (17a); if
the non-manuals are different, they either remain distinct (17b) or one takes over
and spreads over the whole domain, as in (17c).

hs
(17) a. none(2) appear no-no [TİD]
hs ht
b. none(2) go^not
hs
c. none(2) go^not

Non-manual markers are recruited in sign language grammars for a wide range of
purposes in the lexicon and in the different grammatical subcomponents (for an
overview, see Pfau/Quer 2010). Given the types of distribution restrictions reported
here and in the next section, the negative non-manuals appear to perform clear
grammatical functions and cannot just be seen as intonational contours typical of
negative sentences. Building on Pfau (2002), it has been proposed that in some sign
languages (e.g., DGS and LSC), the negative headshake should be analyzed as a
featural affix that modifies the prosodic properties of a base form, in a parallel
fashion to tonal prosodies in tonal languages (Pfau/Quer 2007, 133; also cf. Pfau
2008). As a consequence of this characterization, its spreading patterns follow
naturally and mirror the basic behavior of tone spreading in some spoken languages.
It is worth mentioning that another non-manual, headnod, is reported to system-
atically mark affirmation ⫺ be it in affirmative responses to questions or for
emphasis. The LIS example in (18) illustrates the latter use of headnod. Geraci
(2005) interprets it as the positive counterpart of negative headshake, both being
the manifestation of the same syntactic projection encoding clausal polarity, in line
with Laka (1990).

hn
(18) arrive someone [LIS]
‘Someone did arrive.’

2.2.2. Facial expression

Beyond head movements, other non-manuals are associated with the expression of
negation. Among the lexically specified non-manuals, the ones that are more wide-
spread crosslinguistically include frowning, squinted eyes, nose wrinkling, and lips
spread, pursed or with the corners down. Other markers are more language-specific
or even sign-specific, such as puffed cheeks, air puff, tongue protruding, and other
mouth gestures. The more interesting cases are probably those in which negative
facial non-manuals clearly have the grammatical function of negating the clause.
Brazilian Sign Language (LSB) features both headshake and negative facial expres-
sion (lowered corners of the mouth or O-like mouth gesture), which can co-occur
in negative sentences. However, it is negative facial expression (nfe) and not head-
shake that functions as the obligatory grammatical marker of negation, as the
following contrast illustrates (Arrotéia 2005, 63).

nfe
(19) a. ix1 1seea joãoa ix1 (not) [LSB]
‘I didn’t see João.’
hs
b. * ix1 1seea joãoa ix1 (not)

Other facial non-manuals have also been described as sole markers of sentential
negation for Israeli SL (mouthing lo ‘no, not’: Meir 2004, 111 f.), for LIU (negative
facial expression: Hendriks 2007, 118 f.), and for TİD (puffed cheeks: Zeshan
2003, 58 f.).

3. Syntactic patterns of negation

It has been observed across sign languages that negative signs show a tendency to
occur sentence-finally, although this is by no means an absolute surface property.
Unsurprisingly, negation, as a functional category, interacts with other functional
elements and with lexical items as well (see section 2.1) and it lexicalizes as either
a syntactic head or a phrase. Moreover, both the manual and non-manual compo-
nents of negation must be taken into account in the analysis of negative clauses.
As expected, the range of actual variation in the syntactic realization of negation
is greater than a superficial examination might reveal. In this section, we will look
at the syntactic encoding of sentential negation, concentrating on some aspects of
structural variation that have been documented and accounted for within the genera-
tive tradition. It should be mentioned, however, that the structural analyses of sign
language negation are still limited and that many of the existing descriptions of
negative systems do not offer the amount of detail required for a proper syntactic
characterization.

3.1. Interaction of negation with other syntactic categories

An interesting syntactic fact documented for Israeli SL is that different negators
are selective as to the syntactic categories they can combine with (Meir 2004, 114 f.).
As illustrated in (20), not, neg-exist(1), and neg-past can only co-occur with an
adjective, a noun, or a verb, respectively.

(20) a. chair indexa comfortable not/*neg-past/*neg-exist(1/2) [Israeli SL]
‘The chair is/was not comfortable.’
b. index1 computer neg-exist(1/2)/*neg-past/*not
‘I don’t have a computer.’
c. index3 sleep neg-past/*neg-exist(1/2)
‘He didn’t sleep at all.’

This is an important observation that deserves further exploration, also in other
languages for which it has been noted that certain negators exhibit similar combina-
torial restrictions.
A clear case of an impact of negation on clausal structure has been documented
for LSB. De Quadros (1999) describes and analyzes the distributional patterns of
sentential negation with both agreeing and plain verbs and shows that only the
former allow for preverbal negation (21a) while the latter bar preverbal negation
and induce clause-final negation, as the examples in (21b) and (21c) show. De
Quadros accounts for this difference in distribution within the framework of Genera-
tive Grammar. In particular, she derives this basic fact from the assumption that
preverbal negation blocks the movement required for the lexical verb to pick up
abstract agreement features in a higher syntactic position, thus resulting in clause-
final negation. Agreeing verbs, by virtue of carrying inflection overtly, do not need
to undergo this type of movement and allow for a preverbal negator.

neg
(21) a. ix johna no agiveb book [LSB]
‘John does not give the book to her/him.’
neg
b. * ix johna no desire car
(‘John does not like the car.’)
neg
c. ix johna desire car no
‘John does not like the car.’
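
Schematically, and using labeled bracketing only as expository shorthand (the structural details beyond what is stated above are an approximation, not a reproduction of de Quadros’s actual trees), the contrast in (21) can be summarized as follows:

   agreeing verb (21a): ix johna no [VP agiveb book]
   → agreement is overtly realized on the verb, so no verb raising is required and the
   preverbal negator is licit
   plain verb (21b/c): * ix johna no [VP desire car]
   → the plain verb must raise to a higher position to pick up abstract agreement
   features; the preverbal negator blocks this movement, and negation is instead
   realized clause-finally, as in (21c)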

Interestingly, although both LSB and ASL are SVO languages, ASL does allow for
the pattern excluded in LSB (21b), as is illustrated in (22). Such fine-grained
crosslinguistic comparisons make it clear that surface properties require detailed
analyses for each language, given that other factors in the particular grammars at
hand are likely to play a role and lead to diverging patterns.

neg
(22) john not eat meat [ASL]
‘John does not eat meat.’

3.2. Doubling

Another interesting fact concerning the syntactic realization of negation has been
noted for several languages (ASL, Petronio 1993; CSL, Yang/Fischer 2002; LSB, de
Quadros 1999; NZSL, McKee 2006): negative markers are doubled in structures in
which an emphatic interpretation ⫺ in some cases identified as focus ⫺ is at play.
In this sense, negation resembles other categories that enter the same doubling
pattern (modals, wh-words, quantifiers, lexical verbs, or adverbials). An example
from CSL is displayed in (23), taken from Yang and Fischer (2002, 180).

nfe nfe
(23) none/nothing master big-shape none/nothing [CSL]
‘There is nothing to show that you master the whole shape first.’

There is no unified account of such doubling structures, which feature several other
categories beyond negation. At least for ASL and LSB, however, analyses of double
negatives have been proposed that interpret the clause-final instance of negation as
a copy of the sentence-internal one occupying a functional head high up in the
clausal structure. For ASL, it has been proposed that this position is the Cº head
occurring on the right branch and endowed with a [+focus] feature (Petronio 1993).
For LSB, de Quadros (1999) argues that the clause-final double is in fact base-
generated in the head of a Focus Phrase under CP; moving everything below Focusº
to the specifier of FocusP results in the attested linear order. For both analyses, it
is crucial that doubling structures always feature heads and never phrases (for an
opposing view on this leading to a different analysis in ASL, see Neidle et al. 2000;
cf. the discussion about the proper characterization of wh-movement in ASL in
chapter 14, on which the analysis of doubling structures also hinges). Interestingly,
the categories that are susceptible to undergoing doubling can merge together,
showing the same behavior as a single head, as exemplified for the modal can and
negation in (24) from ASL (Petronio 1993, 134).
neg
(24) ann can’t read can’t [ASL]
‘Ann CAN’T read.’

Only a single double can appear per clause, be it matrix or embedded. This
restriction follows naturally from an interpretation as emphatic focus, which gener-
ally displays such a constraint. This line of analysis builds on doubling data that
do not feature a pause before the sentence-final double. Petronio and Lillo-Martin
(1997) distinguish these cases from other possible structures in which the repeated
constituent at the end of the clause is preceded by a pause. As Neidle et al. (2000)
propose, the latter cases are amenable to an analysis as tags in ASL.

3.3. Spreading

A general property of double negative structures is that the non-manual feature
can spread between the two negative elements. Therefore, next to cases like (23)
above, CSL also provides structures where spreading is at play, such as (25) (Yang/
Fischer 2002, 181).

nfe
(25) start time not-need grab details not-need [CSL]
‘Don’t pay attention to a detail at the beginning.’

This is an instance of what Neidle et al. (2000, 45) have dubbed perseveration
of a non-manual articulation (see section 2.2). In this case, perseveration of the
non-manual takes place between two identical manual signs, a situation different
from the one described in (17) above.
Spreading patterns of non-manual negation are subject to restrictions. It is clear,
for instance, that if a topic or an adjunct clause is present sentence-initially, the
negative marker cannot spread over it and supersede other non-manuals associated
with that constituent, as noted, for instance, in Liddell (1980, 81) for the ASL
example in (26).

t neg
(26) dog chase cat [ASL]
‘As for the dog, it didn’t chase the cat.’

It has been argued that spreading of negative non-manuals is clearly restricted by
syntactic factors. Specifically, in ASL, headshake can co-occur with the manual
negator not alone, or it can optionally spread over the manual material within the verb
phrase (VP) that linearly follows the negator, as exemplified in (27a). However, if
the manual negation is absent, spreading is obligatory, as the contrast in (27b⫺c)
shows. Neidle et al. (2000) interpret this paradigm as evidence that headshake is
the overt realization of a syntactic feature [+neg] residing in Negº, the head of
NegP, which needs to associate with manual material. Generally, non-manuals must
spread whenever there is no lexical material occupying the relevant functional head,
the spreading domain being the c-command domain of that head. In the case of
the headshake, the relevant head is Negº and the c-command domain is the VP.

( neg)
(27) a. john not buy house [ASL]
neg
b. john buy house
‘John didn’t buy the house.’
neg
c. * john buy house

Another piece of evidence in favor of the syntactic nature of negative non-manual
spreading is offered in Pfau (2002, 287) for DGS, where it is shown that, unlike
ASL, spreading of the headshake over the VP material is optional, even if the
manual negator is absent from the structure. However, the spreading must target
whole constituents (28a) and is barred otherwise, as in (28b), where it is articulated
only on the adjectival sign of the object NP.

neg
(28) a. man flower buy [DGS]
‘The man is not buying a flower.’
neg
b. * man flower red buy

In some sign languages at least, non-manual spreading can have interpretive effects,
which strictly speaking renders it non-optional. This is the case in LSC, where
spreading over the object NP results in a contrastive corrective reading of negation
(Quer 2007, 44).

hs hn
(29) santi vegetables eat, fruit [LSC]
‘Santi doesn’t eat vegetables, but fruit (he does).’

Spreading of the headshake over the whole sentence gives rise to an interpretation
as a denial of a previous utterance, as in (30).

hs
(30) santi summer u.s. go [LSC]
‘It is not true/It is not the case that Santi is going to the U.S. in the summer.’

A parameter of variation between DGS and LSC has been detected in the expres-
sion of negation: while in LSC, the non-manual marker can co-appear with the
manual negator only (31), in DGS, it must extend at least over the predicate as
well, as is evident from the sentence pair in (32).

hs
(31) santi meat eat not [LSC]
‘Santi does not eat meat.’
neg
(32) a. mother flower buy not [DGS]
‘Mother is not buying a flower.’
neg
b. * mother flower buy not

Pfau and Quer (2002, 2007) interpret this asymmetry as a reflection of the well-
known fact that negative markers can have head or phrasal status syntactically (for
an overview, see Zanuttini 2001). They further assume that headshake is the realiza-
tion of a featural affix. This affix must be co-articulated with manual material, on
which it imposes a prosodic contour consisting of a headshake. In LSC, the manual
marker not is a syntactic head residing in Negº and [+neg] naturally affixes to it,
giving rise to structures such as (31). If the structure does not feature not, then
the predicate will have to raise to Negº, where [+neg] will combine with it and
trigger headshake on the predicate sign, as in (1b). The essentials of both types of
derivations are depicted in Figure 15.7.

Fig. 15.7: LSC negative structures, with and without negative marker not.

In contrast, DGS not is a phrasal category that occupies the (right-branching)


Specifier of NegP (the headshake it carries is lexically marked). Since [+neg] needs
to combine with manual material, it always attracts the predicate to Negº (cf. Figure
15.8), thus explaining the ungrammaticality of (32b) with headshake only on not.
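
In rough labeled-bracketing terms, the two structures can be rendered as follows; this is a schematic approximation based solely on the description above, not a faithful reproduction of the trees in Figures 15.7 and 15.8:

   LSC: [NegP [VP santi meat eat] [Negº not[+neg]]]
   → if not is absent, the predicate raises to Negº and surfaces with the headshake,
   as in (1b)
   DGS: [NegP [Neg′ [VP mother flower buy] Negº[+neg]] [Spec not]]
   → the predicate always raises to Negº, deriving the obligatory headshake on the
   predicate in (32a)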
This type of analysis is able to account for the negation patterns found in other
languages too, if further specific properties of the language are taken into account.
This is the case for ASL, as shown in Pfau and Quer (2002). It also provides a
natural explanation for the availability of (manual) negative concord in LSC and
its absence in DGS, as will be discussed in the next section. The idiosyncratic
behavior of negative modals (and semi-modals) also follows naturally from this line
of analysis: being generated in the Tense head, they are always forced to raise to
Negº in order to support the [+neg] affix and surface as forms with a cliticized
negation or as suppletive negative counterparts of the positive verb (cf. section
2.1 above).

Fig. 15.8: DGS negative structure, with negative marker not and obligatory V-movement to
Neg.

4. Negative concord
It is a well-known fact that in many languages two or more negative items can
appear in the same clause without changing its polarity, which remains negative.
This phenomenon is known as split negation or negative concord (for a recent
overview see Giannakidou 2006). In non-manual dominant sign languages (i.e. those
languages where a clause can be negated by non-manual negation only), the non-
manual negative marker (the [+neg] affix in Pfau/Quer’s (2002) proposal) must be
taken to be the main negator. Since in this type of language, manual sentential
negation often co-occurs with non-manual negation yielding a single negation read-
ing, it must be concluded that negative concord is at play between the manual and
non-manual component, as in the following LSC example (Quer 2002/2007, 45).

hs
(33) ix1 smoke no [LSC]
‘I do not smoke.’
In addition, a second type of negative concord has been attested at the manual
level, namely when two or more negative manual signs co-appear in a clause but
do not contribute independent negations to the interpretation, as exemplified in the
LSC example in (34). Crucially, the interpretation of this sentence is not ‘Your
friend never doesn’t come (i.e. he always comes)’.

hs hs
(34) friend ix2come no never [LSC]
‘Your friend never comes.’

With a few exceptions (Arrotéia 2005 on LSB; Hendriks 2007 on LIU; Pfau/Quer
2002, 2007 and Quer 2002/2007 on LSC; Wood 1999 on ASL), the phenomenon of
negative concord has not received much attention in descriptions of sign language
negation systems. However, scattered cases of negative concord examples are re-
ported for languages such as BSL (Sutton-Spence/Woll 1999, 77), CSL (Yang/Fischer
2002, 181), TİD (Zeshan 2006c, 157) and VGT (Van Herreweghe/Vermeerbergen
2006, 248). Some of the examples are characterized as encoding emphatic or strong
negation. See (35) for a CSL example in which a lexically negative verb co-occurs
with sentential negation. Again, the combination of two negative signs does not
yield a positive reading.

nfe
(35) index dislike see no [CSL]
‘I don’t like to watch it.’

In the LIU example in (36), cliticized negation is duplicated in the same clause by
the basic clause negator neg (Hendriks 2007, 124).

y/n hs
(36) maths, like^neg index1 neg [LIU]
‘I don’t like maths.’

As is the case in the better-known instances of spoken languages, negative concord
is not a uniform phenomenon but rather shows parameters of variation. Comparable
variation has also been documented across structurally very similar languages like
LSC and DGS (Pfau/Quer 2002, 2007). While both display negative concord between
manual and non-manual components of negation, only the former features negative
concord among manual signs. This follows partly from the fact that the basic manual
clause negator in LSC is a head category sitting in Negº, whereas in DGS, it is a
phrase occupying the Specifier of NegP, as depicted in Figures 15.7 and 15.8 above.
It might therefore be argued that the difference follows from the availability of a
Specifier position for further phrasal negative signs in LSC, which is standardly
occupied in DGS. Nevertheless, this cannot be the whole explanation, because LSC
allows for two phrasal negatives under the same negative concord reading (cf. (37)),
a situation that could be expected in DGS, contrary to fact.
hs hs hs
(37) ix1 smoke neg2 never [LSC]
‘I never ever smoke.’

The difference must be attributed to the inherent properties of negative signs, which
may or may not give rise to concord readings depending on the language. DGS
can thus be characterized as a non-negative concord language at the manual level,
despite having split negation (i.e., the non-manual negative affix and the manual
sentential negator not jointly yield a single sentential negation reading). The pres-
ence of further negative signs leads to marked negation readings or simply to
ungrammaticality. LIS is another language that does not display negative concord
structures (Geraci 2005).

5. Lexical negation and morphological idiosyncrasies of negatives

In section 2.1, some processes of negative affixation were mentioned that yield
complex signs conveying sentential negation. A number of those processes have
also been shown to result in the formation of nouns and adjectives with negative
meaning, normally the antonym of a positive lexical counterpart, with the important
difference that in these items, the negation does not have sentential scope. As is
common in processes of lexical formation, however, the output can have an idiosyn-
cratic meaning that does not correspond transparently to its antonym. In accordance
with the lack of sentential scope, no negative non-manuals co-occur and if they do
as a consequence of lexical marking, they never spread. Occasionally, negative
affixes are grammaticalized from negative predicates. For Israeli SL, for instance,
Meir (2004, 15) argues that not-exist is a suffix which originates from the negative
existential predicate not-exist(1). Independent of the category of the root, the
suffix invariably gives an adjective as a result, as can be observed in (38).

(38) a. interesting+not-exist ‘uninteresting’ [Israeli SL]
b. shame+not-exist ‘shameless’
c. strength+not-exist ‘exhausted’

A negative handshape characterized by pinkie extension has been shown to be
operative in some East Asian sign languages in the formation of positive-negative
pairs of lexical items. In HKSL, the negative handshape, which as a stand-alone
sign means bad/wrong, can replace the handshape of the sign it is affixed to, thus
functioning as a simultaneous affix, or else be added sequentially after the lexical
sign, as illustrated for the sign taste/mouth in Figure 15.9 (Zeshan 2004, 45). In
this case, the resulting form mouth^bad (‘dumb’) has a non-transparent meaning that
is not compositionally derived from its parts. Next to transparent derivations such as
reasonable/unreasonable, appealing/unappealing, and lucky/unlucky, some opa-
que ones are also attested, such as mouth^bad ‘dumb’ (see Figure 15.9), ear^bad
‘deaf’, and eye^bad ‘blind’ (Tang 2006, 223).
mouth^bad

Fig. 15.9: Example of sequential negative handshape in deriving an adjective in HKSL. Copy-
right © 2004 by Ulrike Zeshan. Reprinted with permission.

In a number of CSL signs, such as those in (39), the positive-negative pattern
is marked by an actual opposition of handshapes: the positive member of a pair
has a thumb-up handshape, the negative one an extended pinkie (Yang/
Fischer 2002, 187).

(39) a. correct/right wrong [CSL]
b. neat dirty
c. skillful unskillful
d. fortunate unfortunate

Some formational features that occur in sentential negatives can appear in irregular
lexically negative signs as well, such as the diagonal inward-outward movement that
occurs in DGS modals but also in the sign not^valid, or a change in hand orienta-
tion in FinSL (for an overview, see Zeshan 2004, 41 ff.). Sometimes the contrasts
are not really productive, like the orientation change in the pair legal/illegal in
LIU (Hendriks 2007, 114).
Beyond lexically marked negation, it is worth mentioning that for some sign
languages, certain peculiar features have been noted in the morphology associated
with negation. One of these features is person inflection of the sign nothing in
NZSL (McKee 2006, 85). The sign, which is standardly used to negate predicates,
can be articulated at person loci and is interpreted in context. For instance, when
inflected for second person, it will be interpreted as ‘You don’t have/You aren’t/
Not you.’ It can also show multiple inflection through a lateral arc displacement,
much in the same way as in plural verb agreement (see chapter 7). Although it
has not been interpreted as such in the original source, this might be a case of
verb ellipsis where the negative sign acquires the properties of a negative auxiliary.
Another interesting morphological idiosyncrasy is reported for negated existentials
in NS (Morgan 2006, 123): the language has lexicalized animacy in the domain of
existential verbs and possesses a specific item restricted to the expression of exis-
tence with animate arguments, exist-animate, as opposed to an unrestricted exist-
unmarked. While the former can co-occur with bimanual not, exist-unmarked can-
not and negation of existence is conveyed by not, nothing, zero or from-scratch.

6. Concluding remarks

This overview of the grammatical and lexical encoding of negation and negative
structures across sign languages has documented the linguistic variation existing in
this domain despite the still limited range of descriptions and analyses available.
Even what might be considered a modality-dependent feature, namely the non-
manual encoding of negation, turns out not to function uniformly in the expression
of negation across the sign languages studied. Rather, its properties and distribution
are constrained by the language-particular grammars they are part of. At the same
time, however, it is also striking to notice how recurrent and widespread some
morphological features are in the negation systems described. These recurrent pat-
terns offer a unique window into grammaticalization pathways of relatively young
languages in the visual-gestural modality. In any case, the scholarship reported here should have made it clear that much more detailed work on a broader range of sign languages is needed to gain better insight into the many issues that have already been raised for linguistic theory and description.

7. Literature
Antzakas, Klimis
2006 The Use of Negative Head Movements in Greek Sign Language. In: Zeshan, Ulrike
(ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara
Press, 258⫺269.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81, 301⫺344.
Arrotéia, Jéssica
2005 O Papel da Marcação Não Manual nas Sentenças Negativas em Língua de Sinais
Brasileira (LSB). MA Thesis, Universidade Estadual de Campinas.
Atkinson, Joan/Campbell, Ruth/Marshall, Jane/Thacker, Alice/Woll, Bencie
2004 Understanding ‘Not’: Neuropsychological Dissociations Between Hand and Head
Markers of Negation in BSL. In: Neuropsychologia 42, 214⫺229.
Baker-Shenk, Charlotte
1983 A Micro-analysis of the Nonmanual Components of Questions in American Sign
Language. PhD Dissertation, University of California, Berkeley.
Corina, David P./Bellugi, Ursula/Reilly, Judy
1999 Neuropsychological Studies of Linguistic and Affective Facial Expressions in Deaf
Signers. In: Language and Speech 42, 307⫺331.
Geraci, Carlo
2005 Negation in LIS (Italian Sign Language). In: Bateman, Leah/Ussery, Cheron (eds.),
Proceedings of the North East Linguistic Society (NELS 35). Amherst, MA: GLSA,
217⫺230.
Giannakidou, Anastasia
2006 N-words and Negative Concord. In: Everaert, Martin/Riemsdijk, Henk van (eds.),
The Blackwell Companion to Syntax, Volume III. Oxford: Blackwell, 327⫺391.
Hendriks, Bernadet
2007 Negation in Jordanian Sign Language: A Cross-linguistic Perspective. In: Perniss,
Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation. Comparative Studies
on Sign Language Structure. Berlin: Mouton de Gruyter, 103⫺128.
Hendriks, Bernadet
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective.
PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Laka, Itziar
1990 Negation in Syntax: The Nature of Functional Categories and Projections. PhD Disser-
tation, MIT, Cambridge, MA.
Meir, Irit
2004 Question and Negation in Israeli Sign Language. In: Sign Language & Linguistics
7(2), 97⫺124.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert G.
2000 The Syntax of American Sign Language. Functional Categories and Hierarchical
Structure. Cambridge, MA: MIT Press.
Petronio, Karen
1993 Clause Structure in American Sign Language. PhD Dissertation, University of Wash-
ington.
Petronio, Karen/Lillo-Martin, Diane
1997 WH-movement and the Position of Spec-CP: Evidence from American Sign Lan-
guage. In: Language 73(1), 18⫺57.
Pfau, Roland
2002 Applying Morphosyntactic and Phonological Readjustment Rules in Natural Lan-
guage Negation. In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.),
Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge
University Press, 263⫺295.
Pfau, Roland
2008 The Grammar of Headshake: A Typological Perspective on German Sign Language
Negation. In: Linguistics in Amsterdam 2008(1), 37⫺74.
[http://www.linguisticsinamsterdam.nl/]
Pfau, Roland/Quer, Josep
2002 V-to-Neg Raising and Negative Concord in Three Sign Languages. In: Rivista di
Grammatica Generativa 27, 73⫺86.
Pfau, Roland/Quer, Josep
2007 On the Syntax of Negation and Modals in German Sign Language (DGS) and
Catalan Sign Language (LSC). In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus
(eds.), Visible Variation. Comparative Studies on Sign Language Structure. Berlin:
Mouton de Gruyter, 129⫺161.
Pfau, Roland/Quer, Josep
2010 Nonmanuals: Their Grammatical and Prosodic Roles. In: Brentari, Diane (ed.), Sign
Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press,
381⫺402.
Pfau, Roland/Steinbach, Markus
2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 3⫺98. [http://www.ling.uni-potsdam.de/lip/]
Quadros, Ronice M. de
1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre.
Quer, Josep
2002/2007 Operadores Negativos en Lengua de Signos Catalana. In: Cvejanov, Sandra B. (ed.), Lenguas de Señas: Estudios de Lingüística Teórica y Aplicada. Neuquén: Editorial de la Universidad Nacional del Comahue, Argentina, 39⫺54.
Quer, Josep/Boldú, Rosa Ma.
2006 Lexical and Morphological Resources in the Expression of Sentential Negation in
Catalan Sign Language (LSC). In: Actes del 7è Congrés de Lingüística General,
Universitat de Barcelona. CD-ROM.
Reilly, Judy/Anderson, Diane
2002 FACES: The Acquisition of Non-Manual Morphology in ASL. In: Morgan, Gary/
Woll, Bencie (eds.), Directions in Sign Language Acquisition. Amsterdam: Benjamins,
159⫺181.
Savolainen, Leena
2006 Interrogatives and Negatives in Finnish Sign Language: An Overview. In: Zeshan,
Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen:
Ishara Press, 284⫺302.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language. An Introduction. Cambridge: Cambridge
University Press.
Tang, Gladys
2006 Questions and Negation in Hong Kong Sign Language. In: Zeshan, Ulrike (ed.),
Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press,
198⫺224.
Van Herreweghe, Mieke/Vermeerbergen, Myriam
2006 Interrogatives and Negatives in Flemish Sign Language. In: Zeshan, Ulrike (ed.),
Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press,
225⫺256.
Yang, Jun Hui/Fischer, Susan D.
2002 Expressing Negation in Chinese Sign Language. In: Sign Language & Linguistics
5(2), 167⫺202.
Zanuttini, Raffaella
2001 Sentential Negation. In: Baltin, Mark/Collins, Chris (eds.), The Handbook of Contem-
porary Syntactic Theory. Oxford: Blackwell, 511⫺535.
Zeshan, Ulrike
2003 Aspects of Türk İşaret Dili (Turkish Sign Language). In: Sign Language & Linguis-
tics 6(1), 43⫺75.
Zeshan, Ulrike
2004 Hand, Head, and Face: Negative Constructions in Sign Languages. In: Linguistic
Typology 8(1), 1⫺58.
Zeshan, Ulrike (ed.)
2006a Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press.
Zeshan, Ulrike
2006b Negative and Interrogative Constructions in Sign Languages: A Case Study in Sign
Language Typology. In: Zeshan, Ulrike (ed.), Interrogative and Negative Construc-
tions in Sign Languages. Nijmegen: Ishara Press, 28⫺68.
Zeshan, Ulrike
2006c Negative and Interrogative Structures in Turkish Sign Language (TİD). In: Zeshan,
Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen:
Ishara Press, 128⫺164.

Josep Quer, Barcelona (Spain)


16. Coordination and subordination


1. Introduction
2. Coordination
3. Subordination
4. Conclusion
5. Literature

Abstract
Identifying coordination and subordination in sign languages is not easy because mor-
phosyntactic devices which mark clause boundaries, such as conjunctions or complemen-
tizers, are generally not obligatory. Sometimes, however, non-manuals and certain syntac-
tic diagnostics may offer a solution. Constituent boundaries can be delineated through
eye blinks, and syntactic domains involved in coordination can be identified through
head nods and body turns. In addition to these modality-specific cues for delineating coordination and subordination, diagnostics of grammatical dependency, defined in terms of constraints on syntactic operations, are often useful. The island constraints on wh-extraction from coordinate and subordinate structures also hold in some sign languages, and the scope of the negator and of the Q-morpheme imposes syntactic constraints on these constructions. Lastly, cross-linguistic variation is observed among sign languages, as revealed, for instance, by gapping in coordinate structures, subject pronoun copy in sentential complements, and the choice of relativization strategy.

1. Introduction
In all natural languages, clauses can be combined to form complex sentences. Clause
combining may generally involve like categories, a characteristic of coordination, or
unlike categories, as in subordination. In his typological study on spoken languages,
Lehmann (1988) defines coordination and subordination in terms of grammatical de-
pendency. According to him, dependency is observed with subordination only and co-
ordination is analyzed as involving only sister relations between the conjuncts. More recently, syntactic analyses within the generative framework have assumed that natural languages realize hierarchical syntactic structure, with grammatical dependencies expressed at different levels of the grammar. However, spoken language research has
demonstrated that this quest for evidence for dependency is not so straightforward. As
Haspelmath (2004) puts it, sometimes it is difficult to distinguish coordination from
subordination as mismatches may occur where two clausal constituents are semanti-
cally coordinated but syntactically subordinated to one another, or vice versa. It is
equally difficult, if not more so, in the case of sign languages, which are relatively young languages. They lack a written form, which in spoken languages encourages the evolution of conjunctions and complementizers as morphosyntactic devices for clause combination
(Mithun 1988). In this chapter, we assume that bi-clausal constructions as involved in coordination and subordination show dependency relations between constituents X and Y. Such dependency manifests itself in some abstract grammatical operations such
as extraction and gapping. We will provide an overview of current research on coordi-
nate and subordinate structures in sign languages and examine whether these grammat-
ical operations in coordination and subordination are also operative in sign languages.
At this juncture, a crucial question to ask is what marks clause boundaries in sign
languages, or precisely what linguistic or prosodic cues are there to signal coordination
and subordination. Morphosyntactic devices like case marking, complementizers, con-
junctions, or word order are common cues for identifying coordinate and subordinate
structures in spoken languages. On the sign language front, however, there is no stand-
ardized methodology for identifying clause boundaries, as pointed out in Johnston/
Schembri (2007). We shall see that it is not obligatory for sign languages to incorporate
conjunctions or complementizers. Before we go into the analysis, we will briefly discuss
some recent attempts to delineate clause boundaries in sign language research.
Research on spoken language prosody attempts to study the interface properties of
phonology and syntax based on prosodic cues like tone variation or pauses to mark
clause boundaries. Although results show that there is no isomorphic relationship between prosodic and syntactic constituents, syntactic structures are generally associated with Intonational Phrases (IPs) in the prosodic domain. Emonds (1976) claimed that the bound-
ary of a root sentence delimits an IP. Nespor and Vogel (2007) and Selkirk (2005),
however, found that certain non-root clauses also form IP domains; these are paren-
theticals, non-restrictive relative clauses, vocatives, certain moved elements, and tags.
In sign language research, there is a growing interest in examining the roles of non-
manuals in sign languages. Pfau and Quer (2010) categorize them into (i) phonological,
(ii) morphological, (iii) syntactic, and (iv) pragmatic. In this chapter, we will examine
some of these functions of non-manuals. Crucial to the current analysis is the identifica-
tion of non-manuals that mark clause boundaries within which we can examine gram-
matical dependency in coordination and subordination. Recently, non-manuals like eye
blinks have been identified as prosodic cues for clause boundaries (Wilbur 1994; Herr-
mann 2010). Sze (2008) and subsequently Tang et al. (2010) found that while eye blinks
generally mark intonational phrases in many sign languages, Hong Kong Sign Lan-
guage (HKSL) uses them to mark phonological phrases as well. Sandler (1999) also
observed that sentence-final boundaries are further marked by an across-the-board
change of facial expression, head position, eye gaze direction, or eye blinks. These
studies on prosodic cues lay the foundation for our analysis of coordination (section 2)
and subordination (section 3) in this chapter.
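Since boundary blinks of this kind recur throughout the examples below, it may help to make the segmentation logic concrete. The following Python fragment is purely illustrative: the flat gloss/non-manual encoding and the tier label 'bl' are our own simplifications, not a claim about any annotation standard. It segments a gloss stream at boundary eye blinks, in the spirit of the findings cited above.

```python
# Minimal sketch: segmenting an annotated gloss stream at boundary eye blinks.
# The encoding (a list of (gloss, non-manual set) pairs) is hypothetical.

def segment_at_blinks(glosses):
    """Split (gloss, nonmanuals) pairs into candidate constituents.

    A gloss whose non-manual set contains 'bl' (eye blink) is treated as the
    final sign of its constituent, following the observation that boundary
    blinks align with the right edges of prosodic/syntactic constituents.
    """
    constituents, current = [], []
    for gloss, nonmanuals in glosses:
        current.append(gloss)
        if 'bl' in nonmanuals:       # boundary blink closes the constituent
            constituents.append(current)
            current = []
    if current:                      # trailing material without a final blink
        constituents.append(current)
    return constituents

# Loosely modeled on HKSL (8a) below: three conjuncts, each ending in a blink.
utterance = [
    ('MOTHER', set()), ('DOOR', set()), ('CL:unlock-door', {'hn', 'bl'}),
    ('CL:push-open', {'hn', 'bl'}),
    ('CL:enter', set()), ('HOUSE', {'hn', 'bl'}),
]
print(segment_at_blinks(utterance))
# [['MOTHER', 'DOOR', 'CL:unlock-door'], ['CL:push-open'], ['CL:enter', 'HOUSE']]
```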

2. Coordination

2.1. Types of coordination

Coordination generally involves the combining of at least two constituents of like categories, either through juxtaposition or through conjunctions. Pacoh, a Mon-Khmer moun-
tain language of Vietnam, for instance, juxtaposes two verb phrases (VPs) without a
conjunction (1) (Watson 1966, 176).
(1) Do [cho t'ôq cayâq, cho t'ôq apây] [Pacoh]
    she return to husband return to grandmother
    'She returns to (her) husband and returns to her grandmother.'

Wilder (1997) proposes to analyze conjuncts as either determiner phrases (DPs) or complementizer phrases (CPs), with ellipsis of terminal phonological material and not
as deletion of syntactic structure as part of the derivation. Here we focus on VPs and
CPs. We leave it open whether the structure of the conjuncts remains ‘small’ (i.e., only
VPs) or ‘large’ (i.e., CPs) at this stage of analysis.
In many languages, conjunctions are used in different ways to combine constituents.
Frequently, a conjunction is assigned to the last conjunct, as shown by the Cantonese
example in (2), but some languages require one for each conjunct, either before or
after it. Also, some languages use different conjunctions for different grammatical cat-
egories. In Upper Kuskokwim Athabaskan, for instance, ʔił is used for noun phrase
(NP) conjuncts and ts’eʔ for clausal conjuncts (Kibrik 2004). (3) provides a conjoined
sentence with the conjunction ts’eʔ for every clausal conjunct (Kibrik 2004, 544).

(2) ngo3 kam4-maan3 VP[ VP[ jam2-zo2 tong1] tung4 VP[ sik6-zo2 min6-baau1] ]
pro-1 last-evening drink-asp soup and eat-asp bread
‘I drank soup and ate bread last night.’ [Cantonese]
(3) nongw donaʔ totis łeka [Upper Kuskokwim Athabaskan]
from.river upriver portage dog
ʔisdlal ts’eʔ ch’itsan’ ch’itey nichoh ts’eʔ <.....>
I.did.not.take and grass too.much tall and …
‘I did not take the dogs to the upriver portage because the grass was too tall,
and …’

There have been few reports on conjunctions in sign languages (see e.g., Waters/Sutton-
Spence (2005) for British Sign Language). American Sign Language (ASL) has overt
lexical markers such as and or but, as in (4) (Padden 1988, 95). Padden does not
specifically claim these overt lexical markers to be conjunctions or discourse markers.
According to her, they may be true conjunctions in coordinate structures if a pause
appears between the two clausal conjuncts and the second conjunct is accompanied by
a sharp headshake (hs).

hs
(4) 1persuadei but change mind [ASL]
‘I persuaded her to do it but I/she/he changed my mind.’

Although manual signs like and, but, and or are used by some Deaf people in Hong
Kong, they normally occur in signing that follows the Chinese word order. In Austral-
ian Sign Language (Auslan), and does not exist, but but does, as shown in (5) (John-
ston/Schembri 2007, 213).

(5) k-i-m like cat but p-a-t prefer dog [Auslan]
    'Kim likes cats but Pat prefers dogs.'
Instead of using an overt conjunction, juxtaposition is primarily adopted, especially in conjunctive coordination ('and') for simultaneous and sequential events (e.g., Johnston/
Schembri (2007) for Auslan; Padden (1988) for ASL; van Gijn (2004) for Sign Lan-
guage of the Netherlands (NGT); Vermeerbergen/Leeson/Crasborn (2007) for various
sign languages). In the ASL examples in (6) and (7), two clauses are juxtaposed for
sequential and simultaneous events (Padden 1988, 85).

Sequential events:
(6) igive1 money, 1index get ticket [ASL]
‘He’ll give me the money, then I’ll get the tickets.’
Simultaneous events:
(7) house blow-up, car icl:3-flip-over [ASL]
‘The house blew up and the car flipped over.’

HKSL examples showing juxtaposition for conjunctive coordination (8a,b), disjunction (8c), and adversative coordination (8d) are presented below. (8a) and (8b) encode
sequential and simultaneous events, respectively. (8b) confirms the observation made
by Tang, Sze, and Lam (2007) that juxtaposing two VP conjuncts as simultaneous
events is done by assigning each event to a manual articulator. In this example, eat-
chips is encoded by the signer’s right hand, and drink-soda by his left hand. As for
(8c), if it turns out that either is a conjunction, this sign conforms to a distribution of
conjunctions discussed in Haspelmath (2004), according to which it occurs obligatorily
after the last conjunct (bl = blink, hn = head nod, ht = head turn, bt = body turn).

bl bl
hn
(8) a. mother door cl:unlock-door, cl:push-open, [HKSL]
bl
hn
cl:enter house
‘Mother unlocked the door, pushed it open (and) went inside.’
bl
hn
b. boy ix3 sita, chips, soda,
ht right ht left ht right
eat-chips, drink-soda, eat-chips, ….
‘The boy is sitting here, he is eating chips (and) drinking soda.’
bl bl
hn+bt right
c. ix1 go-to beijing, (pro1) take-a-plane,
bl bl bl
hn+bt left
take-a-train, either doesn’t-matter
‘I am going to Beijing. I will take a plane or take a train. Either way, it
doesn’t matter.’
bl bl
hn+ht+bt forward
d. exam come-close, ruth diligent do-homework,
bl
hn+ht+bt backward
hannah lazy watch-tv
‘The exam is coming close; Ruth is diligently doing her homework (but)
Hannah is lazy and watches TV.’

There is little discussion about non-manuals for coordination in the sign language lit-
erature. However, it seems that non-manuals are adopted when lexical conjunctions
are absent in HKSL. In a great majority of cases, we observe an extended head nod
that is coterminous with the conjunct, and the clause boundaries are marked by an eye
blink. Liddell (1980, 2003) observes that syntactic head nods, which are adopted to
assert the existence of a state or a process, are larger, deeper, and slower in articulation.
In his analysis, a syntactic head nod obligatorily shows up when the verb is gapped or
absent. However, in HKSL, this type of head nod occurs whether or not the verb is
absent. In (8a) and (8b), the head nods are adopted to assert a proposition. In a neutral
context, conjunctive coordination has head nods only. Disjunction requires both head
nods and body turn to the left and right for different conjuncts, obligatorily followed
by a manual sign either (8c). Adversative conjunction may involve either head turn
or forward and backward body leans for the conjuncts, in addition to head nods (8d).
In sum, we observe three common types of coordination in sign languages. Juxtapo-
sition appears to be more common than coordination involving manual conjunctions.
In HKSL, these conjuncts are usually coterminous with a head nod and end with an
eye blink, indicating a constituent boundary of some kind. Also, non-manuals like head
turn or body leans interact with the types of coordination.

2.2. Diagnostics for coordination


In this section, we will briefly summarize three diagnostics which are said to be associ-
ated with coordination in spoken languages: extraction, gapping, and negation. We will
investigate whether coordination in sign languages is also sensitive to the constraints
involved in these grammatical operations.

2.2.1. Extraction

It has been commonly observed in spoken languages that movement out of a coordi-
nate structure is subject to the Coordinate Structure Constraint given in (9).

(9) Coordinate Structure Constraint (CSC)
    In a coordinate structure, no conjunct can be moved, nor may any element contained in a conjunct be moved out of that conjunct. (Ross 1967, 98 f.)

The CSC prevents movement of an entire conjunct (10a) or a constituent within a conjunct (10b) out of a coordinate structure.
(10) a. *Whati did Michael eat and ti?
     b. *Whati did Michael play golf and read ti?

Padden (1988) claimed that ASL also obeys the CSC. In (11), for instance, topicalizing
an NP object out of a coordinate structure is prohibited (Padden 1988, 93; t = non-
manual topic marking; subscripts appear as in the original example).

t
(11) *flower, 2give1 money, jgivei [ASL]
‘Flowers, he gave me money but she gave me.’

A’-movement such as topicalization and wh-question formation in HKSL also leads to


similar results. Topics in HKSL occupy a position in the left periphery, whereas the
wh-arguments are either in-situ or occupy a clause-final position (Tang 2004). The
following examples show that extraction of an object NP from either the first or second
VP conjunct in topicalization (12b,c) or wh-question (13b,c) is disallowed.

(12) a. first group responsible cooking, second group [HKSL]
        responsible design game
‘The first group is responsible for cooking and the second group is
responsible for designing games.’
t
b. *cookingi first group responsible ti, second group
responsible design game
t
c. *design gamei, first group responsible cooking,
second group responsible ti
(13) a. yesterday dad play speedboat, [HKSL]
eat cow^cl:cut-with-fork-and-knife
‘Daddy played speedboat and ate steak yesterday.’
b. *yesterday dad play ti, eat cow^cl:cut-with-fork-and-knife whati
Lit. ‘*What did daddy play and eat steak?’
c. *yesterday dad play speedboat, eat whati?
Lit. ‘*What did daddy play speedboat and eat?’

Following Ross (1967), Williams (1978) argues that the CSC can be voided if the gram-
matical operation is in ‘across-the-board’ (ATB) fashion. In the current analysis, this
means that an identical constituent is extracted from each conjunct in the coordinate
structure. In (14a) and (14b), a DP that bears an identical grammatical relation in both
conjuncts has been extracted. Under these circumstances, no CSC violation obtains.

(14) a. John wondered whati [Peter bought ti] and [the hawker sold ti]
b. The mani who ti loves cats and ti hates dogs …

However, ATB movement fails if the extracted argument does not bear the same gram-
matical relation in both conjuncts. In (15), the DP a man cannot be extracted because
it is the subject of the first conjunct but the object of the second conjunct.
(15) *A mani who ti loves cats and the woman hates ti …

ATB movement also applies to coordinate structures in ASL and HKSL, as shown in
(16a), from Lillo-Martin (1991, 60), and in (16b). In these examples, topicalization is possible because the topic is the grammatical object of both conjuncts and encodes the same generic referent. However, just as in (15),
ATB movement is disallowed in the HKSL example in (16c) because the fronted DP
[ixa boy] does not share the same grammatical relation with the verb in the two TP con-
juncts.

t
(16) a. athat moviei, bsteve like ei but cjulie dislike ei [ASL]
‘That moviei, Steve likes ei but Julie dislikes ei.’
t
b. orangei, mother like ti, father dislike ti [HKSL]
‘Orange, mother likes (and) father dislikes.’
top
c. *ixa boyi, ti eat chips, girl like ti [HKSL]
Lit. ‘As for the boy, (he) eats chips (and) the girl likes (him).’

However, while topicalization in ATB fashion works in HKSL, it fails with wh-question
formation even if the extracted wh-element bears the same grammatical relation in
both TP conjuncts, as shown in (17). Obviously, the wh-operator cannot be co-indexed
with the two wh-traces in (17). Instead, each clause requires its own wh-operator,
implying that they are two independent clauses (18).

wh
(17) *mother like ti, father dislike ti, whati? [HKSL]
Lit. ‘What does mother like and father dislike?’
wh wh
(18) mother like tj whatj? father dislike ti, whati?
Lit. ‘What does mother like? What does father dislike?’

In sum, the data from ASL and HKSL indicate that extraction out of a coordinate structure is ruled out by the CSC. However, it is still not clear why topicalization in ATB
fashion yields a licit structure while this A’-movement fails in wh-question formation ⫺
at least in HKSL. Assuming a split-CP analysis with different levels for interrogation
and topicalization, one might argue that the difference is due to the directionality of
SpecCP in HKSL. As the data show, the specifier position for interrogation is in the
right periphery (18) while that for topicalization is on the left (16b) (see chapter 14
for further discussion on wh-questions and the position of SpecCP). Possibly, the direc-
tion of SpecCP interacts with ATB movement. Further research is required to verify
this issue.
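The division of labor between the CSC in (9) and its ATB exception illustrated in (14) and (15) can also be stated procedurally. The toy Python sketch below is our own illustrative rendering, not a formal analysis: each conjunct is reduced to a mapping from grammatical relations to constituent labels, and extraction is licensed only if the target occurs in every conjunct under the same relation.

```python
# Toy sketch of the Coordinate Structure Constraint (CSC) and its
# across-the-board (ATB) exception. Each conjunct is simplified to a dict
# mapping a grammatical relation ('subj', 'obj', ...) to a constituent label.

def extraction_licit(conjuncts, target):
    """Return True iff `target` may be extracted from the coordination.

    CSC: extraction from a single conjunct is blocked.
    ATB exception: extraction succeeds only if `target` occurs in every
    conjunct, and under the same grammatical relation (cf. (14) vs. (15)).
    """
    relations = []
    for conjunct in conjuncts:
        hits = {rel for rel, label in conjunct.items() if label == target}
        if not hits:
            return False      # absent from some conjunct: plain CSC violation
        relations.append(hits)
    return bool(set.intersection(*relations))

# (14a): 'what' is the object of both conjuncts -> ATB extraction is licit.
ok = [{'subj': 'Peter', 'obj': 'what'}, {'subj': 'the hawker', 'obj': 'what'}]
# (15): 'a man' is subject of one conjunct but object of the other -> blocked.
bad = [{'subj': 'a man', 'obj': 'cats'}, {'subj': 'the woman', 'obj': 'a man'}]
print(extraction_licit(ok, 'what'))    # True
print(extraction_licit(bad, 'a man'))  # False
```

Such a sketch deliberately abstracts away from the topicalization/wh-question asymmetry just discussed, which hinges on the directionality of SpecCP rather than on the CSC itself.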

2.2.2. Gapping
In spoken languages, coordinate structures may yield a reduction of the syntactic structure, and ellipsis has been put forward to account for this phenomenon. One instance of ellipsis is gapping. In English, the verb in the second clausal conjunct can be
‘gapped’ under conditions of identity with the verb in the first conjunct (19a). In fact,
cross-linguistic studies show that the direction of gapping in coordinate structures is
dependent upon word order (Ross 1970, 251). In particular, SVO languages like Eng-
lish show forward gapping in the form of SVO and SO (i.e., deletion of the identical
verb in the second conjunct); hence (19b) is ungrammatical because the verb from the
first conjunct is gapped. In contrast, SOV languages show backward gapping in the
form of SO and SOV (i.e., deletion of the identical verb in the first conjunct), as data
from Japanese shows (20a). If the verb of the second conjunct is gapped, the sentence
is ungrammatical (20b).

(19) a. [Sally eats an apple] and [Paul Ø a candy].
     b. *[Sally Ø an apple] and [Paul eats a candy].
(20) a. [Sally-wa lingo-o Ø], [Paul-wa ame-o tabe-da] [Japanese]
Sally-top apple-acc Paul-top candy-acc eat-past
Lit. ‘Sally an apple and Paul ate a candy.’
b. *[Sally-wa lingo-o tabe-te], [Paul-wa ame-o Ø]
Sally-top apple-acc eat-ger Paul-top candy-acc
‘Sally ate an apple and Paul a candy.’

Little research has been conducted on gapping in sign languages. Liddell (1980) observes that gapping exists in ASL and that a head nod accompanying the remnant object NP is necessary, as shown in (21), which lists a number of subject-object pairs. A
reanalysis of this example shows that the constraint on gapping mentioned above also
applies: (21) displays an SVO pattern, hence forward gapping is expected (Liddell
1980, 31).

hn
(21) have wonderful picnic. pro.1 bring salad, john beer [ASL]
hn hn
sandy chicken, ted hamburger
‘We had a wonderful picnic. I brought the salad, John (brought) the beer, Sandy
(brought) the chicken and Ted (brought) the hamburger.’

Forward gapping for SVO sentences is also observed in HKSL, as shown in (22a).
While head nod occurs on the object of the gapped verb in ASL, HKSL involves an
additional forward body lean (bl). However, it seems that gapping in HKSL interacts
not only with word order, but also with verb types, in the sense that plain verbs but
not agreeing or classifier verbs allow gapping; compare (22a) with (22b) and (22c).

bl forward+hn
(22) a. tomorrow picnic, ix1 bring chicken wing, [HKSL]
bl forward+hn bl forward+hn
pippen sandwiches, kenny cola,
bl forward+hn
connie chocolate
‘(We) will have a picnic tomorrow. I will bring chicken wings, Pippen (brings)
sandwiches, Kenny (brings) cola, (and) Connie (brings) chocolate.’
b. *kenny 0scold3 brenda, pippen Ø connie
‘Kenny scolds Brenda (and) Pippen Ø Connie.’
c. *ix1 head wall Ø, brenda head window
cl:head-bang-against-flat-surface
‘I banged my head against the wall and Brenda against the window.’

One possible explanation why HKSL disallows agreeing and classifier verbs to be
gapped in coordinate structures is that these verbs express grammatical relations of
their arguments through space. In sign languages, the path and the spatial loci encode
grammatical relations between the subject and the object (see chapter 7, Verb Agree-
ment, for discussion). Thus, gapping the spatially marked agreeing verb scold (22b)
or the classifier predicate cl:head-bang-against-flat-surface (22c) results in the vio-
lation of constraints on identification. We assume that the gapped element lacks phonetic content but still needs to be interpreted, since syntactic derivations feed the interpretive components. However, contrary to English, where agreement effects can be voided
in identification (Wilder 1997), agreement effects, such as overt spatial locative or
person marking, are obligatory in HKSL, or probably in sign languages in general.
Otherwise, the ‘gapped verb’ will result in the failure of identifying the spatial loci for
which the referents or their associated person features are necessarily encoded. This
leads not only to ambiguity of referents, but also to ungrammaticality of the structure.
Note that word order is not an issue here; even if classifier predicates in HKSL nor-
mally yield a SOV order and one should expect backward gapping, (22b) and (22c)
show that both forward and backward gapping are unacceptable so far as agreeing and
classifier verbs are concerned. In fact, it has been observed in ASL that verb types in
sign languages yield differences in grammatical operations. Lillo-Martin (1986, 1991)
found that topicalizing an object of a plain verb in ASL requires a resumptive pronoun
while it can be null in the case of agreeing verbs (see section 3.1.2). The analysis of
the constraints on gapping and topicalization in HKSL opens up a new avenue of
research for testing modality effects in syntactic structure.
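The two generalizations at play in this section, Ross's (1970) word-order-dependent direction of gapping and the HKSL restriction of gapping to plain verbs, can be summarized in a few lines of Python. This is a deliberate simplification under exactly those two assumptions, not a grammar fragment; the feature labels are hypothetical.

```python
# Toy summary of the gapping facts discussed above:
# (i) direction of gapping follows word order (Ross 1970): SVO languages gap
#     the verb of the second conjunct, SOV languages that of the first;
# (ii) in HKSL, only plain verbs gap; agreeing and classifier verbs must stay
#     overt because their spatial marking identifies the arguments (22b,c).

def gap_site(word_order, verb_type, plain_verbs_only=True):
    """Return the conjunct whose verb may be elided, or None if gapping fails."""
    if plain_verbs_only and verb_type != 'plain':
        return None                # agreeing/classifier verbs resist gapping
    if word_order == 'SVO':
        return 'second conjunct'   # forward gapping: English (19a), HKSL (22a)
    if word_order == 'SOV':
        return 'first conjunct'    # backward gapping: Japanese (20a)
    return None

print(gap_site('SVO', 'plain'))     # second conjunct
print(gap_site('SOV', 'plain'))     # first conjunct
print(gap_site('SVO', 'agreeing'))  # None
```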

2.2.3. Scope of yes/no-questions and negation

Scope of yes/no-questions and negation is another diagnostic of coordination. Manual operators like the negator and the Q-morpheme in HKSL can scope over the coordinate structure, as in (23a) and (23b) (re = raised eyebrows).

(23) a. pippen brenda they-both go horse-betting. [HKSL]
     hn+bt left hn+bt backward right re
brenda win, pippen lose, right-wrong?
Lit. ‘Pippen and Brenda both went horse-betting. Did Brenda win and
Pippen lose?’
b. teacher play speedboat
eat cow^cl:cut-with-fork-and-knife not-have
‘The teacher did not ride the speedboat and did not eat beef steak.’
(23a) offers a further example of adversative coordination in HKSL with both con-
juncts being scoped over by the clause-final Q-morpheme right-wrong accompanied by brow-raise. In fact, both conjuncts must be true for the question to receive an affirmative answer; if one or both of the conjuncts are false, the answer will be negative. In (23b), the negator not-have scopes over both conjuncts.
The fact that an element takes scope over the conjuncts in ATB fashion is similar to
the Cantonese example in (2) above, where the two VP conjuncts coordinated by the conjunction tung4 ('and') are scoped over by the temporal adverbial kam4-maan3 ('last night') and marked by the same perfective marker -zo2.
Where a non-manual operator is used, some data from ASL and HKSL indicate
that it is possible to have just one conjunct under the scope of negation. In the ASL
example (24a), the non-manual negation (i.e., headshake) only scopes over the first
conjunct but not the second, which has a head nod instead (Padden 1988, 90). In the
HKSL example (24b), the first conjunct is affirmative, as indicated by the occurrence
of small but repeated head nods, but the second conjunct is negative and ends with
the sentential negator not, which is accompanied by a set of various non-manual mark-
ers (i.e., head tilted backward, headshake, and pursed lips). Note that both (24a) and (24b)
concern adversative conjunction but not conjunctive coordination. In HKSL, the non-
manual marking has to scope over both conjuncts in conjunctive coordination; scoping
over just one conjunct, as in (24c), leads to ungrammaticality. In other words, scope of
yes/no-questions or negation is a better diagnostic for conjunctive coordination than
for other types of coordination. As our informants suggest, (24b) behaves more like a
juxtaposition of two independent clauses, hence failing to serve as a good diagnostic
for coordinate structures (n = negative headshake).

n hn
(24) a. iindex telephone, jindex mail letter [ASL]
‘I didn’t telephone but she sent a letter.’
hn+++ ht backward+hs+pursed lips
b. felix come gladys come not [HKSL]
‘Felix will come (but) Gladys will not come.’
yn
c. *felix come gladys go [HKSL]
Lit. ‘*Will Felix come? (and) Gladys will leave.’

In this section, we have summarized the findings on coordination in sign languages reported so far; specifically, we have examined the constraints involved in wh-extrac-
tion, gapping, and the scope of some morphosyntactic devices for yes/no-questions and
negation over the coordinate structure. We found that topicalization obeys the CSC and permits ATB movement more readily than wh-question formation does in these languages. As for gap-
ping, we suggest that verb types in sign languages may have an effect on gapping.
Lastly, using the scope properties of the Q-morpheme in yes/no-questions and the
negator not in conjunctive coordination allows us to identify the constraints on coordi-
nate structures. As we have shown, using negation in disjunctive coordination may lead
to different syntactic behaviors. As for the use of non-manuals, we suggest that head
nods and body turns are crucial cues for the different types of coordination if no
manual conjunctions are present. In the following section, we will explore another
process of clause combining ⫺ subordination ⫺ which typically results in asymmetri-
cal structure.

3. Subordination

Compared with coordination, subordination has received relatively more attention in sign language research. Thompson's (1977) claim that ASL does not have grammatical
means for subordination has sparked off a quest for tests for syntactic dependencies.
Subsequent research on ASL has convincingly shown that looking for manual markers
of subordination misses the point because certain subordinate structures are marked
only non-manually (Liddell 1980). Padden (1988) also suggests some syntactic diagnos-
tics for embedded sentential complements in ASL, namely subject pronoun copies for
matrix subjects, spread of non-manual negation into subordinate but not coordinate
structures, as well as wh-extraction from the embedded clauses. However, subsequent research on NGT yielded different results (van Gijn 2004).
In this section, we will first focus on sentential complements and their associated
diagnostics (section 3.1). Typologically, sentential complements are situated towards
the higher end of clause integration with complementizers as formal morphosyntactic
devices to mark the grammatical relations. Where these devices are absent in sign
languages, we argue that the spread of non-manuals might offer a clue to syntactic
dependencies, similar to the observations in coordinate structures. In section 3.2, we
turn our attention to relative clauses, that is, embedding within DP, and provide a
typological sketch of relativization strategies in different sign languages. Note that, due
to space limitations, we will not discuss adverbial clauses in this chapter (see Coulter
(1979) and Wilbur/Patschke (1999) for ASL; Dachkovsky (2008) for Israeli Sign Lan-
guage).

3.1. Sentential complements

Sentential complements function as subject or object arguments subcategorized for, usually by a verb, a noun, or an adjective. In Generative Grammar, sentential complements are usually analyzed as CPs. Depending on the features of the head, the embedded clause may be finite or non-finite, and its force may be interrogative or declarative.
Typologically, not all languages have overt complementizers to mark sentential comple-
ments. Complementizers derive historically from pronouns, conjunctions, adpositions
or case markers, and rarely verbs (Noonan 1985). Cantonese has no complementizers for either declarative or interrogative complement clauses, as exemplified in (25a) and
(25b). The default force of the embedded clause is usually declarative unless the matrix
verb subcategorizes for an embedded interrogative signaled by an ‘A-not-A’ construc-
tion like sik-m-sik ('eat-not-eat') in (25b), which is a type of yes/no-question (int = intensifier).

(25) a. ngo3 lam2 CP[ Ø TP [tiu3 fu3 taai3 song1]TP ]CP [Cantonese]
pro-1 think cl pants int loose
‘I think the pants are too loose.’
b. ngo3 man4 CP[ Ø TP [keoi3 sik6-m4-sik6 faan6]TP ]CP
pro-1 ask pro-3 eat-not-eat rice
‘I ask if he eats rice.’

In English, null complementizers are sometimes allowed in sentential complements; compare (26a) with (26b). However, a complementizer is required when the force is
interrogative, as the ungrammaticality of (26c) shows.

(26) a. Kenny thinks CP[ Ø TP[Brenda likes Connie]TP ]CP.
     b. Kenny thinks CP[ that TP[Brenda likes Connie]TP ]CP.
     c. *Kenny asks CP[ Ø TP[Brenda likes Connie]TP ]CP.

Null complementizers have been reported for many sign languages. Without an overt
manual marker, it is difficult to distinguish coordinate from subordinate structures
at the surface level. Where subordinate structures are identified, we assume that the
complementizer is not spelled out phonetically and the default force is declarative, as
shown in (27a⫺d) for ASL (Padden 1988, 85), NGT (van Gijn 2004, 36), and HKSL
(see Herrmann (2006) for Irish Sign Language and Johnston/Schembri (2006) for
Auslan).

(27) a. 1index hope iindex come visit will [ASL]
     'I hope he will come to visit.'
b. pointsigner know pointaddressee addresseecomesigner [NGT]
‘I know that you are coming to (see) me.’
c. ix1 hope willy next month fly-back hk [HKSL]
‘I hope Willy will fly back to Hong Kong next month.’
yn
d. rightasksigner rightattract-attentionsigner
ixaddressee want coffee [NGT]
‘He/she asks me: “Do you want any coffee?”’

Van Gijn (2004) observes that there is a serial verb in NGT, roepen (‘to attract atten-
tion’), which may potentially be developing into a complementizer. roepen (glossed
here as attract-attention) occasionally follows utterance verbs like ask to introduce
a ‘direct speech complement’, as in (27d) (van Gijn 2004, 37).
As mentioned above, various diagnostics have been suggested as tests of subordina-
tion in ASL. Some of these diagnostics involve general constraints of natural languages.
In the following section, we will summarize research that examines these issues.

3.1.1. Subject pronoun copy

In ASL, a subject pronoun copy may occur in the clause-final position without a pause
preceding it. The copy is either coreferential with the subject of a simple clause (28a)
or the subject of a matrix clause (28b) but not with the subject of an embedded clause.
Padden (1988) suggests that a subject pronoun copy is an indicator of syntactic depend-
ency between a matrix and a subordinate clause. It also distinguishes subordinate from
coordinate structures because a clause-final pronoun copy can only be coreferential
with the subject of the second conjunct, not the first, when the subject is not shared
between the conjuncts. Therefore, (28c) is ungrammatical because the pronoun copy
is coreferential with the (covert) subject of the first conjunct (Padden 1988, 86⫺88).

(28) a. 1index go-away 1index [ASL]
     'I'm going, for sure (I am).'
b. 1index decide iindex should idrivej see children 1index
‘I decided he ought to drive over to see his children, I did.’
c. *1hiti, iindex tattle mother 1index
‘I hit him and he told his mother, I did.’

It turns out, however, that this test of subordination cannot be applied to NGT and
HKSL. An example similar to (28b) is ungrammatical in NGT, as shown in (29a): the
subject marijke in the matrix clause cannot license the sentence-final copy pointright.
As illustrated in (29b), the copy, if it occurs, appears at the end of the matrix clause
(i.e., after know in this case), not the embedded clause (van Gijn 2004, 94). HKSL also
displays different coreference properties with clause-final pronoun copies. If a final
index sign does occur, the direction of pointing determines which grammatical subject
it is coreferential with. An upward pointing sign (i.e., ixai), as in (29c), assigns the
pronoun to the matrix subject only. Note that since the referent gladys, the matrix subject, is not present in the signing discourse, the upward pointing obviates locus assignment. Under these circumstances, (29d) is ungrammatical when the upward
pointing pronoun ixaj is coreferential with the embedded subject pippen. On the other
hand, the pronoun ixbj in (29e) that points towards a locus in space refers to the
embedded subject pippen.

(29) a. *marijke pointright know inge pointleft leftcomesigner pointright [NGT]
     'Marijke knows that Inge comes to me.'
b. inge pointright know pointright pointsigner italy signergo.toneu.space [NGT]
‘Inge knows that I am going to Italy.’
c. gladysi suspect pippen steal car ixai [HKSL]
‘Gladys suspected Pippen stole the car, she did.’
d. *gladysi suspect pippenj steal car ixaj [HKSL]
Lit. ‘Gladys suspected Pippen stole the car, he did.’
e. gladysi suspect pippenj steal car ixbj [HKSL]
‘Gladys suspected Pippen stole the car, he did.’

It is still unclear why the nature of pointing, that is, the difference between pointing
to an intended locus like ‘bj’ in (29e) for the embedded subject versus an unintended
locus like ‘ai’ in (29c) for the matrix subject, leads to a difference in coreference in
HKSL. The former could be a modality effect, because the fact that the referent is physically present in the discourse constrains the direction of pointing of the index sign. This finding lends support to the claim that clause-final index signs without
an intended locus refer to the matrix subject in HKSL. In sum, it appears that subject
pronoun copy cannot be adopted as a general test of subordination in sign languages. Rather, this test seems to be language-specific, because it works in ASL but not in
NGT and HKSL.

3.1.2. Wh-extraction

The second test for subordination has to do with constraints on wh-extraction. In sec-
tion 2.2.1, we pointed out that extraction out of a conjunct of a coordinate structure is
generally not permitted unless the rule is applied in ATB fashion. In fact, Ross (1967)
also posits constraints on extraction out of wh-islands (30a⫺c). This constraint has
been attested in many spoken languages, offering evidence that long-distance wh-
movement is successively cyclic, targeting SpecCP at each clause boundary.

(30) a. Whoi do you think Mary will invite ti?
     b. *Whoi do you think what Mary did to ti?
c. *Whoi do you wonder why Tom hates ti?

(30b) and (30c) have been argued to be ungrammatical because the intermediate wh-
clause is a syntactic island in English and further movement of a wh-constituent out
of it is barred.
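The escape-hatch logic behind (30a⫺c) can be made concrete in a short sketch. The Python fragment below is our own illustrative simplification, not a claim about any particular syntactic framework: a wh-phrase moves clause by clause through SpecCP, so extraction fails as soon as an intermediate SpecCP is already occupied.

```python
# Toy sketch of the wh-island constraint in (30): long-distance wh-movement
# is successive-cyclic through SpecCP, so a filled intermediate SpecCP
# blocks further extraction. The clause encoding is hypothetical.

def can_extract(clauses):
    """`clauses` lists the clauses a wh-phrase must escape from, innermost
    first; each entry records whether that clause's SpecCP is occupied."""
    return not any(clause['spec_cp_filled'] for clause in clauses)

# (30a): 'Who do you think Mary will invite t?' -> SpecCP free on the way up.
print(can_extract([{'spec_cp_filled': False}]))  # True
# (30c): 'Who do you wonder why Tom hates t?' -> 'why' fills SpecCP: island.
print(can_extract([{'spec_cp_filled': True}]))   # False
```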
Typological studies on wh-questions in sign languages found three syntactic posi-
tions for wh-expressions: in-situ, clause-initial, or clause-final (Zeshan 2004). In ASL,
although the wh-expressions in simple wh-questions may occupy any of the three syn-
tactic positions (see chapter 14 on the corresponding debate on this issue), they are
consistently clause-initial in the intermediate SpecCP position for both argument and
adjunct questions (Petronio/Lillo-Martin 1997). In other words, this constitutes evi-
dence for embedded wh-questions in ASL. In HKSL, the wh-expression of direct
argument questions is either in-situ or clause-final, and that of adjunct questions is
consistently clause-final. However, in embedded questions, the wh-expressions are con-
sistently clause-final, as in (31a) and (31b), and this applies to both argument and
adjunct questions.

(31) a. father wonder help kenny who [HKSL]
     'Father wondered who helped Kenny.'
b. kenny wonder gladys cook crab how
‘Kenny wondered how Gladys cooked the crabs.’

Constraints on extraction out of embedded clauses have been examined. In NGT, extraction is possible only with some complement-taking predicates, such as 'to want'
(32a) and ‘to see’, but impossible with ‘to believe’ (32b) and ‘to ask’ (van Gijn 2004,
144 f.).

wh
(32) a. who boy pointright want rightvisitleft twho [NGT]
‘Who does the boy want to visit?’
wh
b. *who inge believe twho signervisitleft
‘Who does Inge believe visits him?’

Lillo-Martin (1986, 1992) claims that embedded wh-questions are islands in ASL;
hence, extraction is highly constrained. Therefore, the topic in (33) is base-generated
and a resumptive pronoun (i.e., apronoun) is required.

t
(33) amother, 1pronoun don’t-know “what” *(apronoun) like [ASL]
‘Mother, I don’t know what she likes.’

HKSL behaves similarly. (34a) illustrates that topicalizing the object from an embed-
ded wh-question also leads to ungrammaticality. In fact, this operation cannot even be
saved by a resumptive pronoun (34b); neither can it be saved by signing buy at the
locus of the nominal sofa in space (34c). It seems that embedded adjunct questions
are strong islands in HKSL and extraction is highly constrained. Our informants only
accepted in-situ wh-morphemes, as shown in (34d).

(34) a. *ixi sofa, ix1 wonder dad buy ti where [HKSL]
     b. *ixi sofa, ix1 wonder dad buy ixi where
c. *ixi sofa, ix1 wonder dad buyi where
‘As for that sofa, I wonder where dad bought it.’
d. ix1 wonder dad buy ixi sofa where
‘I wonder where dad bought the sofa.’

The results from wh-extraction are more consistent among the sign languages studied,
suggesting that the island constraints are modality-independent. HKSL seems to be
more constrained than ASL because HKSL does not allow wh-extraction at all out of
embedded wh-adjunct questions while in ASL, resumptive pronouns or locative agree-
ment can circumvent the violation. It may be that agreeing verbs involving space for
person features satisfy the condition of identification of null elements in the ASL
grammar. In the next section, we will examine non-manuals as diagnostics for subordi-
nation.

3.1.3. Spread of non-manuals in sentential complementation

In contrast to coordinate structures, non-manuals may spread from the matrix to the
embedded clause, demonstrating that the clausal structure of coordination differs from
that of subordination. This is shown by the ASL examples in (35) for non-manual
negation (Padden 1988, 89) and yes/no-question non-manuals (Liddell 1980, 124). This
could be due to the fact that pauses are not necessary between the matrix and embed-
ded clauses, unlike coordination, where a pause is normally observed between the
conjuncts (Liddell 1980; n = non-manuals for negation).
n
(35) a. 1index want jindex go-away [ASL]
‘I didn’t want him to leave.’
yn
b. remember dog chase cat
‘Do you remember that the dog chased the cat?’

However, the spread of non-manual negation as observed in ASL turns out not to be
a reliable diagnostic for subordination in NGT and HKSL. The examples in (36) illus-
trate that in NGT, the non-manuals may (36a) or may not (36b) spread onto the em-
bedded clause (van Gijn 2004, 113, 119).

neg
(36) a. pointsigner want pointaddressee neu spacecome-alongsigner [NGT]
‘I do not want you to come along.’
neg
b. inge believe pointright pointsigner signervisitleft marijke
‘Inge does not believe that I visit Marijke.’

HKSL does not systematically use non-manual negation like headshake as a grammati-
cal marker. However, in HKSL, the scope of negation may offer evidence for subordi-
nation. In some cases, it interacts with body leans. In (37a), the sign not occurring at
the end of the embedded clause generally scopes over the embedded clause but not
the matrix clause. Therefore, the second reading is not acceptable to the signers. To
negate the matrix clause, signers prefer to extrapose the embedded clause by means
of topicalization, as in (37b). Body leans are another way to mark the hierarchical
structure of matrix negation. In (37c), the clause-final negator not scopes over the
matrix but not the subordinate clause. (37c) differs from (37a) in the adoption of topi-
calization of the entire sentence with forward body lean, followed by a backward body
lean and a manual sign not, signaling matrix negation.

(37) a. gladys think willy come-back not [HKSL]
     i. 'Gladys thinks Willy will not come back.'
ii. *‘Gladys does not think Willy will come back.’
top
b. willy come-backi, gladys say ti not-have
‘As for Willy’s coming back, Gladys did not say so.’
bl forward bl back
top
c. gladys want willy come-back hk not
‘As for Gladys wanting Willy to come back to Hong Kong, it is not the case.’

Branchini et al. (2007) also observe that where the basic word order is SOV in Italian
Sign Language (LIS), subordinate clauses are always extraposed either to the left pe-
riphery (38a) or to the right periphery (38b). They argue that subordinate clauses do not occur in their base position preceding the verb (38c) but are extraposed to the periphery to avoid the processing load of centre embedding (te = tensed eyes).
te
(38) a. [maria house buy] paolo want [LIS]
te
b. paolo want [maria house buy]
c. *paolo [maria house buy] want
‘Paolo wants Maria to buy a house.’

It could be that different sign languages rely on different grammatical processes as tests of subordination. In HKSL, another plausible diagnostic is the spread of a non-
manual associated with the verb in the matrix clause. For verbs like believe, guess,
and want, which take object complement clauses, we observe pursed lips as a lexical
non-manual. In (39a) and (39b), a pause is not observed at the clause boundary; the
lips are pursed and the head tilts sideward for the verb in the matrix clause, and these
non-manuals spread till the end of the complement clause, followed by a head nod,
suggesting that the verb together with its complement clause forms a constituent of
some kind.

(39) a. male house look-outi, sky cl:thick-cloud-hover-above [HKSL]
     pursed lips+ hn
male guess tomorrow rain
‘The man looks out (of the window) and sees thick clouds hovering in the
sky above. The man guesses it will rain tomorrow.’
pursed lips+ hn
b. ix1 look-at dress pretty; want buy give brenda
‘I saw a pretty dress; I want to buy it and give it to Brenda.’

The same phenomenon is observed in indirect yes/no-questions subcategorized for by the verb wonder. In this context, we observe the spread of pursed lips and brow-
raise from the verb onto the indirect yes/no-question and brow-raise peaks at the sign
expensive in (40). Thus these non-manuals suggest that it is an embedded yes/no-ques-
tion.

yn
(40) ix1 wonder ixdet car expensive [HKSL]
‘I wonder if this car is expensive.’

One may wonder whether these lexical non-manuals stemming from the verbs have
any grammatical import. In the literature, certain non-manuals like headshake and eye
gaze have been suggested to be the overt realization of formal grammatical features
residing in functional heads. Assuming that there is a division of labor between non-
manuals at different linguistic levels (Pfau/Quer 2010), what we observe here is that
lexical non-manuals associated with the verb spread over a CP domain that the verb
subcategorizes for. It could be that these non-manuals bear certain semantic functions.
In this case, verbs like guess, want, and wonder denote mental states; semantically,
the proposition encoded in the embedded clause is scoped over by these verbs, and
thus the lexical non-manuals scope over these propositions.
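If this description is on the right track, the expected marking can be computed mechanically: a lexical non-manual anchored on the matrix verb extends to the right edge of the complement CP the verb subcategorizes for. The Python sketch below encodes just that assumption (a flat gloss list, the verb's index, and the CP span); the encoding is our own and purely illustrative.

```python
# Minimal sketch of the spreading pattern described above: a lexical
# non-manual (e.g., pursed lips) starts on the matrix verb and extends to
# the right edge of its complement clause.

def spread_nonmanual(glosses, verb_index, cp_span, nonmanual='pursed-lips'):
    """Return (gloss, markers) pairs with the non-manual spread over verb + CP."""
    cp_start, cp_end = cp_span
    marked = []
    for i, gloss in enumerate(glosses):
        markers = set()
        if i == verb_index or cp_start <= i <= cp_end:
            markers.add(nonmanual)
        marked.append((gloss, markers))
    return marked

# Loosely modeled on HKSL (40): IX-1 WONDER [IX-det CAR EXPENSIVE]
glosses = ['IX-1', 'WONDER', 'IX-det', 'CAR', 'EXPENSIVE']
for gloss, markers in spread_nonmanual(glosses, verb_index=1, cp_span=(2, 4)):
    print(gloss, sorted(markers))
# WONDER through EXPENSIVE carry the non-manual; sentence-initial IX-1 does not.
```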
In this section, we have examined to what extent the spread of non-manuals over
embedded clauses provides evidence of subordination. Matrix yes/no-questions appear
to invoke a consistent spread of non-manuals over the embedded clauses across sign
languages. However, patterns are less consistent with respect to non-manual negation:
in complex sentences, sign languages like ASL, NGT, and HKSL show different spread-
ing behaviors for the negative headshake. HKSL instead makes use of scope of nega-
tion, which offers indirect evidence for embedded clauses in HKSL. We also observe
that non-manuals associated with lexical verbs spread into embedded clauses, offering
evidence for sentential complementation. It seems that if non-manuals do spread, they
start from the matrix verb and spread to the end of the embedded clause. Therefore, in order to use the spread of non-manuals as a diagnostic, a prerequisite is to confirm whether the sign language in question uses them. As we have seen, NGT and HKSL do not
use spread of headshake while ASL does.

3.2. Relative clauses

Relative clauses (RCs) have been widely studied in spoken languages, and typological
analyses centre around structural properties such as whether the RCs (i) are head-external or head-internal, (ii) postnominal or prenominal, (iii) restrictive or non-restrictive, (iv) employ relative markers such as relative pronouns, personal pronouns, or resumptive pronouns, and (v) what position they occupy within a sentence (Keenan 1985; Lehmann 1986).
In sign languages, an additional analysis concerns the use of non-manuals in marking
RCs.
Typologically, Dryer (1992) found a much higher tendency of occurrence for post-
nominal than prenominal RCs: in his sample, 98 % of VO languages and 58 % of OV
languages have postnominal RCs. Externally and internally headed relative clauses
(EHRCs vs. IHRCs) are analyzed as complex NPs, while correlatives are subordinating sentences (Basilico 1996; de Vries 2002). Clear cases of IHRCs are ob-
served in SOV languages and they may co-occur with prenominal EHRCs (Keenan
1985, 163). To date, investigations into relativization strategies in sign languages have
been conducted primarily on ASL, LIS, and DGS. In this section, we will add some
preliminary observations from HKSL. We will first focus on the type and position of
the RCs and the use of non-manuals (section 3.2.1), before turning to the use of rela-
tive markers (section 3.2.2). The discussion, which only addresses restrictive RCs, will
demonstrate that the strategies for relativization in sign languages vary cross-linguisti-
cally, similarly to spoken languages.

3.2.1. Types of relative clauses

To date, various types of RCs, with the exception of prenominal RCs, have been reported for a number of sign languages. Liddell (1978, 1980) argues that ASL displays both IHRCs (41a) and postnominal EHRCs (41b) (Liddell 1980, 162). According to Liddell, there
are two ways to distinguish EHRCs and IHRCs in ASL. First, in (41a), the non-manual
marker for relativization extends over the head noun dog, indicating that the head
noun is part of the RC, while in (41b), dog is outside the domain of the non-manual
marker. Second, in (41a), the temporal adverbial preceding the head noun scopes over
the verb of the RC, and if the adverbial is part of the RC, then the head noun following
it cannot be outside the RC (rel = non-manuals for relatives).

rel
(41) a. recently dog chase cat come home [ASL]
‘The dog which recently chased the cat came home.’
rel
b. 1ask3 give1 dog [[ursula kick]S thatc ]]NP
‘I asked him to give me the dog that Ursula kicked.’

As for non-manual marking, brow raise has been found to commonly mark relativiza-
tion. Other (language-specific) non-manuals reported in the literature include back-
ward head tilt and raised upper lips for ASL, a slight body lean towards the location
of the relative pronoun for DGS, and tensed eyes and pursed lips for LIS.
According to Pfau and Steinbach (2005), DGS employs postnominal EHRCs, which
are introduced by a relative pronoun (rpro; see 3.2.2 for further discussion). In (42),
the non-manual marker accompanies only the pronoun. The adverbial preceding the
head noun is outside the non-manual marker and scopes over the matrix clause verb
arrive (Pfau/Steinbach 2005, 513). Optionally, the RC can be extraposed to the right,
such that it appears sentence-finally.

re
(42) yesterday [man ix3 [rpro-h3 cat stroke]CP ]DP arrive [DGS]
‘The man who is stroking the cat arrived yesterday.’

The status of RCs in LIS is less clear, as there are two competing analyses. Branchini
and Donati (2009) suggest that LIS has IHRCs marked by a clause-final determiner,
which, based on accompanying mouthing, they gloss as pe (43a). In contrast, Cecchetto,
Geraci, and Zucchi (2006) argue that LIS RCs are actually correlatives marked by a
demonstrative morpheme glossed as prorel (43b). Note that in (43a), just as in (41a),
the non-manual marker extends over the head noun (man) and the adverbial preceding
the head noun, which scopes over the RC verb bring.

rel
(43) a. today mani pie bring pei yesterday (ixi) dance [LIS]
‘The man that brought the pie today danced yesterday.’
rel
b. boy icall proreli leave done
‘A boy that called left.’

Wilbur and Patschke (1999) propose that brow raise marks constituents that underwent
A’-movement to SpecCP. Following Neidle et al. (2000), Pfau and Steinbach (2005)
argue that brow raise realizes a formal grammatical feature residing in a functional
head. Brow raise identifies the domain for the checking of the formal features of the
operator. A relative pronoun has two functions: it is an A’-operator bearing wh-fea-
tures or it is a referring/demonstrative element bearing d-features (Bennis 2001). In
ASL, where there is no overt operator, brow raise spreads over the entire IHRC (41a).
In DGS, it usually co-occurs with only the relative pronoun (42), but optionally, it may
spread onto the entire RC, similar to (41b). For LIS, different observations have been
reported. Branchini and Donati (2009) argue that brow raise spreads over the entire
RC, as in (43a), but occasionally, it accompanies the pe-sign only. In contrast, Cec-
chetto, Geraci, and Zucchi (2006) report that brow raise is usually restricted to the
clause-final sign prorel, but may spread onto the verb that precedes it (43b).
HKSL displays IHRCs. In (44), brow raise scopes over the head noun male and
the RC. Clearly, the RC occupies an argument position in this sentence. The head noun
is the object of the matrix verb like but the subject of the verb eat within the RC.

rel
(44) hey! ix3 like [ixi male eat chips ixi] [HKSL]
‘Hey! She likes the man who is eating chips.’

Liddell (1980) claims that there is a tendency for IHRCs to occur clause-initially in ASL. RCs in LIS show a similar distribution (Branchini/Donati 2009; Cecchetto/Geraci/Zucchi 2006). (45a) shows that in HKSL, where the basic word
order is SVO (Sze 2003), the RC (ixa boy run) is topicalized to a left peripheral posi-
tion; a boundary blink is observed at the right edge of the RC, followed by the head
tilting backward when the main clause is signed. Since brow raise also marks topicalized constituents in HKSL, it is difficult to determine whether brow raise here signals relativization or topicalization. This
is even more so in (45b), where the topicalized RC is under the scope of the yes/
no-question.

rel/top
(45) a. ixa boy run ix1 know [HKSL]
‘The boy that is running, I know (him).’
rel/top y/n
b. female ixa cycle clothes orange ixa help1 introduce1 good?
‘As for the lady that is cycling and in orange clothes, will you help introduce
(her) to me?’

As mentioned, the second diagnostic for RCs is the scope of temporal adverbials. In
ASL and LIS, the temporal adverbial preceding the head noun scopes over the RC
containing the head noun but not the main clause (41a and 43a). In DGS, which dis-
plays postnominal RCs, however, the temporal adverbial scopes over the main clause
but not the RC (42). In HKSL, just as in ASL/LIS, a temporal adverbial preceding the
head noun scopes over the RC that contains the head noun (46a). Consequently, (46b)
is unacceptable if tomorrow, which falls under the RC non-manuals, is interpreted as
scoping over the main clause. According to our informants, without the non-manuals,
(46b) would at best yield a coordinate structure which contains two conjoined VPs that
are both scoped over by the temporal adverbial tomorrow. In order to scope over the
main clause, the temporal adverbial has to follow the RC and precede the main clause,
as in (46c) (cf. the position of yesterday in (43a)).

rel
(46) a. yesterday ixa female cycle ix1 letter senda [HKSL]
‘I sent a letter to that lady who cycled yesterday.’
rel
b. tomorrow ixa female buy car fly-to-beijing
*‘The lady who is buying the car will fly to Beijing tomorrow.’
?? ‘Tomorrow that lady will buy a car and fly to Beijing.’
rel
c. ixa female cycle (ixa) tomorrow fly-to-beijing
‘The lady who is cycling will fly to Beijing tomorrow.’

3.2.2. Markers for relativization

According to Keenan (1985), EHRCs may involve a personal pronoun (e.g., Hebrew),
a relative pronoun (e.g., English), both (e.g., Modern Greek), or none (e.g., English,
Hebrew). Relative pronouns are pronominal elements that are morphologically similar
to demonstrative or interrogative pronouns. They occur either at the end of the RC,
or before or after the head noun. As for IHRCs, they are not generally marked mor-
phologically, hence leading to ambiguity if the RC contains more than one NP. How-
ever, the entire clause may be nominalized and be marked by a determiner (e.g., Ti-
betan) or some definiteness marker (e.g., Diegueño). Correlatives, on the other hand,
are consistently morphologically marked for their status as subordinate clauses and the
marker is coreferential with an NP in the main clause.
There have been discussions about the morphological markers attested in relativiza-
tion in sign languages. In ASL, there are a few forms of that, to which Liddell (1980)
has ascribed different grammatical status. First, ASL has the sign thata, which Liddell
termed ‘relative conjunction’ (47a). This sign normally marks the head noun in an
IHRC (Liddell 1980, 149 f.). There is another sign thatb which occurs at the end of an
RC and which is usually articulated with intensification (47b). Based on the scope of
the non-manuals, thatc in (47b) does not belong to the RC domain. Liddell argues
that thatc is a complementizer and that it is accompanied by a head nod (Liddell
1980, 150).

rel
(47) a. recently dog thata chase cat come home. [ASL]
‘The dog which recently chased the cat came home.’
rel
b. ix1 feed dog bite cat thatb thatc
‘I fed the dog that bit the cat.’

In (42), we have already seen that DGS makes use of a relative pronoun. This pronoun
agrees with the head noun in the feature [±human] and comes in two forms: the one
used with human referents (i.e., rpro-h) adopts the classifier handshape for humans;
the one referring to non-human entities is similar to the regular index signs (i.e., rpro-
nh). These forms are analyzed as outputs of grammaticalization of an indexical deter-
miner sign. The presence of relative pronouns in DGS is in line with the observation
of Keenan (1985) that relative pronouns are typical of postnominal EHRCs. In LIS,
different grammatical analyses have been proposed for the indexical sign that consistently
occurs at the end of the RC. Cecchetto, Geraci, and Zucchi (2006) analyze it as a
demonstrative morpheme glossed as prorel. However, according to Branchini and
Donati (2009), pe is not a wh- or relative pronoun; rather it is a determiner for the
nominalized RC. In the IHRCs of HKSL, the clause-final index sign may be omitted
if the entire clause is marked by appropriate non-manuals, as in (46c). If the index sign
occurs, it is coreferential with the head noun within the RC and spatially agrees with
it. The index sign is also identical in its manual form to the index sign that is adjacent
to the head noun, suggesting that it is more like a determiner than a relative pronoun.
However, this clause-final index sign is accompanied by a different set of non-manu-
als ⫺ mouth-open and eye contact with the addressee.
In sum, data from HKSL, ASL, and LIS show that head internal relatives require
brow raise to spread over the RC, including the head noun. As for IHRCs, HKSL
patterns with the LIS relatives studied by Branchini et al. (2007) in the occurrence of
a clause-final indexical sign which phonetically looks like a determiner, the presence
of which is probably motivated by the nominal status of the RC. Also, the presence of
a relative pronoun as observed in DGS offers crucial evidence for the existence of RCs
in that language. In other sign languages, which do not consistently employ such devi-
ces, non-manual markers and/or the behavior of temporal adverbials may serve as
evidence for RCs.

4. Conclusion
In this chapter, we have summarized attempts to identify coordinate and subordinate
structures in sign languages. We found that one cannot always rely on morphosyntactic
devices for the identification and differentiation of coordination and subordination
because these devices do not usually show up in the sign languages surveyed so far.
Instead, we adopted general diagnostics of grammatical dependency defined in terms
of constraints on grammatical operations on these structures. The discussion revealed
that the island constraint involved in wh-extraction is consistently observed in sign
languages, too, while other constraints (e.g., gapping in coordinate structures) appear
to be subject to modality effects. We have also examined the behavior of non-manuals
which we hypothesize will offer important clues to differentiate these structures.
Spreading patterns, for instance, allow us to analyze verb complementation, embedded
negation and yes/no-questions, and relativization strategies. As for the latter, we have
shown that sign languages show typological variation similar to that described for spo-
ken languages. For future research, we suggest more systematic categorization of non-
manuals, which we hope will allow us to delineate their functions at different syntac-
tic levels.

5. Literature

Basilico, David
1996 Head Position and Internally Headed Relative Clauses. In: Language 72, 498⫺531.
Bennis, Hans
2001 Alweer Wat Voor (een). In: Dongelmans, Berry/Lallerman, Josien/Praamstra, Olf
(eds.), Kerven in een Rots. Leiden: SNL, 29⫺37.
Branchini, Chiara/Donati, Caterina
2009 Relatively Different: Italian Sign Language Relative Clauses in a Typological Perspec-
tive. In Lipták, Anikó (ed.), Correlatives Cross-Linguistically. Amsterdam: Benjamins,
157⫺191.
Branchini, Chiara/Donati, Caterina/Pfau, Roland/Steinbach, Markus
2007 A Typological Perspective on Relative Clauses in Sign Languages. Paper Presented at
the 7 th Conference of the Association for Linguistic Typology (ALT 7), Paris, Septem-
ber 2007.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro
2006 Strategies of Relativization in Italian Sign Language. In: Natural Language and Linguistic Theory 24(4), 945⫺975.
Coulter, Geoffrey R.
1979 American Sign Language Typology. PhD Dissertation, University of California, San
Diego.
Dachkovsky, Svetlana
2008 Facial Expression as Intonation in Israeli Sign Language: The Case of Neutral and
Counterfactual Conditionals. In: Quer, Josep (ed.), Signs of the Time. Selected Papers
from TISLR 2004. Hamburg: Signum, 61⫺82.
Dryer, Matthew S.
1992 The Greenbergian Word Order Correlations. In: Language 68, 81⫺138.
Emonds, Joseph E.
1976 A Transformational Approach to English Syntax. New York: Academic Press.
Gijn, Ingeborg van
2004 The Quest for Syntactic Dependency. Sequential Complementation in Sign Language of the Netherlands. PhD Dissertation, University of Amsterdam.
Haspelmath, Martin
2004 Coordinating Constructions: An Overview. In: Haspelmath, Martin (ed.), Coordinating
Constructions. Amsterdam: Benjamins, 3⫺39.
Herrmann, Annika
2007 The Expression of Modal Meaning in German Sign Language and Irish Sign Language.
In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus. (eds.), Visible Variation. Compara-
tive Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 245⫺278.
Herrmann, Annika
2010 The Interaction of Eye Blinks and Other Prosodic Cues in German Sign Language. In:
Sign Language & Linguistics 13(1), 3⫺39.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge:
Cambridge University Press.
Keenan, Edward L.
1985 Relative Clauses. In: Shopen, Timothy (ed.), Language Typology and Syntactic Descrip-
tion. Vol. 2: Complex Constructions. Cambridge: Cambridge University Press, 141⫺170.
Kibrik, Andrej A.
2004 Coordination in Upper Kuskokwim Athabaskan. In: Haspelmath, Martin (ed.), Coordi-
nating Constructions. Amsterdam: Benjamins. 537⫺553.
Lehmann, Christian
1986 On the Typology of Relative Clauses. In: Linguistics 24, 663⫺680.
Lehmann, Christian
1988 Towards a Typology of Clause Linkage. In: Haiman, John/Thompson, Sandra A. (eds.),
Clause Combining in Grammar. Amsterdam: Benjamins, 181⫺226.
Liddell, Scott
1978 Nonmanual Signals and Relative Clauses in American Sign Language. In: Siple, Patricia
(ed.), Understanding Language through Sign Language Research. New York: Academic
Press, 59⫺90.
Liddell, Scott
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Lillo-Martin, Diane
1986 Two Kinds of Null Arguments in American Sign Language. In: Natural Language and
Linguistic Theory 4, 415⫺444.
Lillo-Martin, Diane
1991 Universal Grammar and American Sign Language: Setting the Null Argument Param-
eters. Dordrecht: Kluwer.
Lillo-Martin, Diane
1992 Sentences as Islands: On the Boundedness of A’-movement in American Sign Lan-
guage. In: Goodluck, Helen/Rochemont, Michael (eds.), Island Constraints. Dordrecht:
Kluwer, 259⫺274.
Mithun, Marianne
1988 The Grammaticalization of Coordination. In: Haiman, John/Thompson, Sandra A.
(eds.), Clause Combining in Grammar. Amsterdam: Benjamins, 331⫺359.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G.
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Nespor, Marina/Vogel, Irene
1986 Prosodic Phonology. Berlin: Mouton de Gruyter.
Noonan, Michael
2005 Complementation. In: Shopen, Timothy (ed.), Language Typology and Syntactic Description. Vol. 2: Complex Constructions. Cambridge: Cambridge University Press,
42⫺138.
Padden, Carol
1988 Interaction of Morphology and Syntax in American Sign Language. New York: Garland.
Petronio, Karen/Lillo-Martin, Diane
1997 Wh-movement and the Position of Spec-CP: Evidence from American Sign Language.
In: Language 73, 18⫺57.
Pfau, Roland/Quer, Josep
2010 Nonmanuals: Their Grammatical and Prosodic Roles. In: Brentari, Diane (ed.), Sign
Languages: A Cambridge Language Survey. Cambridge: Cambridge University Press,
381⫺402.
Pfau, Roland/Steinbach, Markus
2005 Relative Clauses in German Sign Language: Extraposition and Reconstruction. In: Bateman, Leah/Ussery, Cherlon (eds.), Proceedings of the North East Linguistic Society (NELS 35). Amherst, MA: GLSA, 507⫺521.
Ross, John R.
1967 Constraints on Variables in Syntax. PhD Dissertation, MIT [Published 1986 as Infinite
Syntax, Norwood, NJ: Ablex].
Ross, John R.
1970 Gapping and the Order of Constituents. In: Bierwisch, Manfred/Heidolph, Karl Erich (eds.), Progress in Linguistics. The Hague: Mouton, 249⫺259.
Sandler, Wendy
1999 The Medium and the Message: Prosodic Interpretation of Linguistic Content in Israeli
Sign Language. In: Sign Language & Linguistics 2, 187⫺216.
Selkirk, Elizabeth
2005 Comments on Intonational Phrasing in English. In: Frota, Sónia/Vigário, Marina/Freitas, Maria João (eds.), Prosodies: With Special Reference to Iberian Languages. Berlin:
Mouton de Gruyter, 11⫺58.
Sze, Felix
2003 Word Order of Hong Kong Sign Language. In: Baker, Anne/Bogaerde, Beppie van den/
Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign Language Research. Selected
Papers from TISLR 2000. Hamburg: Signum, 163⫺191.
Sze, Felix
2008 Blinks and Intonational Phrasing in Hong Kong Sign Language. In: Quer, Josep (ed.),
Signs of the Time. Selected Papers from TISLR 2004. Hamburg: Signum, 83⫺107.
Tang, Gladys
2006 Negation and Interrogation in Hong Kong Sign Language. In: Zeshan, Ulrike (ed.),
Interrogative and Negative Constructions in Signed Languages. Nijmegen: Ishara Press,
198⫺224.
Tang, Gladys/Brentari, Diane/González, Carolina/Sze, Felix
2010 Crosslinguistic Variation in the Use of Prosodic Cues: The Case of Blinks. In: Brentari,
Diane (ed.), Sign Languages: A Cambridge Language Survey. Cambridge: Cambridge
University Press, 519⫺542.
Tang, Gladys/Sze, Felix/Lam, Scholastica
2007 Acquisition of Simultaneous Constructions by Deaf Children of Hong Kong Sign Lan-
guage. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simulta-
neity in Signed Language: Form and Function. Amsterdam: Benjamins, 283⫺316.
Thompson, Henry
1977 The Lack of Subordination in American Sign Language. In: Friedman, Lynn (ed.), On
the Other Hand: New Perspectives on American Sign Language. New York: Academic
Press, 78⫺94.
Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.)
2007 Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins.
Vries, Mark de
2002 The Syntax of Relativization. PhD Dissertation, University of Amsterdam. Utrecht:
LOT.
Waters, Dafydd/Sutton-Spence, Rachel
2005 Connectives in British Sign Language. In: Deaf Worlds 21(3), 1⫺29.
Watson, Richard L.
1966 Clause and Sentence Gradations in Pacoh. In: Lingua 16, 166⫺189.
Wilbur, Ronnie B.
1994 Eyeblinks and ASL Phrase Structure. In: Sign Language Studies 84, 221⫺240.
Wilbur, Ronnie B./Patschke, Cynthia
1999 Syntactic Correlates of Brow Raise in ASL. In: Sign Language & Linguistics 2(1), 3⫺41.
Wilder, Chris
1994 Coordination, ATB, and Ellipsis. In: Groninger Arbeiten zur Generativen Linguistik 37,
291⫺329.
Wilder, Chris
1997 Some Properties of Ellipsis in Coordination. In: Alexiadou, Artemis/Hall, T. Alan
(eds.), Studies on Universal Grammar and Typological Variation. Amsterdam: Benja-
mins, 59⫺107.
Williams, Edwin
1978 Across-the-board Rule Application. In: Linguistic Inquiry 9, 31⫺43.
Zeshan, Ulrike
2004 Interrogative Constructions in Signed Languages: Cross-linguistic Perspectives. In: Lan-
guage 80(1), 7⫺39.

Gladys Tang and Prudence Lau, Hong Kong (China)

17. Utterance reports and constructed action


1. Reporting the words, thoughts, and actions of others
2. Early approaches to role shift
3. Role shift as constructed action
4. Formal approaches
5. Integration
6. Conclusion
7. Literature

Abstract
Signers and speakers have a variety of means to report the words, thoughts, and actions
of others. Direct quotation gives (the utterer’s version of) the quoted speaker’s point of
view ⫺ but it need not be verbatim, and can be used to report thoughts and actions as
well as words. In sign languages, role shift is used in very similar ways. The signer’s body
or head position, facial expressions, and gestures contribute to the marking of such re-
ports, which can be considered examples of constructed action. These reports also in-
clude specific grammatical changes such as the indexical (shifting) use of first-person
forms, which pose challenges for semantic theories. Various proposals to account for
these phenomena are summarized, and directions for future research are suggested.

1. Reporting the words, thoughts, and actions of others


Language users have a variety of means with which to report the words, thoughts, and
actions of others. Indirect quotation (or indirect report), as in example (1a), reports
from a neutral, or narrator’s point of view. Direct quotation (or direct report, some-
times simply reported speech), as in (1b), makes the report from the quoted person’s
point of view.

(1) Situation: Sam, in London, July 22, announces that she will go to a conference
in Bordeaux July 29. Speaker is in Bordeaux July 31.
a. Indirect discourse description:
Sam said that she was coming to a conference here this week.
b. Direct discourse description:
Sam said, “I’ll go to a conference there next week.”

There are several important structural differences between the indirect and direct
types. In the indirect description, an embedded clause is clearly used, whereas in the
direct discourse, the relationship of the quotation to the introducing phrase is arguably
not embedding. In addition, the interpretation of indexicals is different in the two
types. Indexicals are linguistic elements whose reference is dependent on aspects of
the context. For example, the reference of ‘I’ depends on who is speaking at the mo-
ment; the interpretation of ‘today’ depends on the time of utterance; etc. In direct
discourse, the reference of the indexicals is interpreted relative to the situation of the
quoted context.
It is often thought that there is another difference between indirect and direct dis-
course, viz., that direct discourse should be a verbatim replication of the original event,
whereas this requirement is not put on indirect discourse. However, this idea has been
challenged by a number of authors.
Clark and Gerrig (1990) discuss direct quotation and argue that although it “is
CONVENTIONALLY implied that the wording [of direct quotation] is verbatim in
newspapers, law courts, and literary essays, […] [it is] not elsewhere.” On their account,
quotations are demonstrations which depict rather than describe their referents. An
important part of this account is that the demonstrator selects some, but not all of the
aspects of the report to demonstrate. In addition, they point out that the narrator’s
viewpoint can be combined with the quotation through tone of voice, lexical choice,
and gestures.
Clark and Gerrig (1990, 800) contrast their account with the classical ‘Mention
theory’: “The classical account is that a quotation is the mention rather than the use
of an expression”. They critique this approach:

It has serious deficiencies (see, e.g., Davidson 1984). For us the most obvious is that it
makes the verbatim assumption […] [M]ention theory assumes, as Quine 1969 says, that a
quotation ‘designates its object not by describing it in terms of other objects, but by pictur-
ing it’. ‘When we quote a man’s utterance directly,’ Quine says, ‘we report it almost as we
might a bird call. However significant the utterance, direct quotation merely reports the
physical incident’ (219). But precisely what it pictures, and how it does so, are problematic
or unspecified (Davidson 1984). In particular, it makes no provision for depicting only
selected aspects of the ‘physical incident’, nor does it say what sort of thing the act of
picturing is.

Tannen (1989, 99⫺101) also criticizes the verbatim approach to direct quotation. She
says:

Even seemingly ‘direct’ quotation is really ‘constructed dialogue,’ that is, primarily the
creation of the speaker rather than the party quoted. […] In the deepest sense, the words
have ceased to be those of the speaker to whom they are attributed, having been appropri-
ated by the speaker who is repeating them.
Tannen also recognizes that what is commonly thought of as ‘direct quotation’ can be
used to express not only the (approximate) words of another, but also their thoughts.
She points out (Tannen 1989, 115): “Presenting the thoughts of a character other than
oneself is a clear example of dialogue that must be seen as constructed, not reported.”
Other researchers have investigated ways in which speakers both select aspects of
a dialogue to represent, and go beyond representing the actual speaker’s event to add
aspects of their own point of view. For example, Günthner (1999, 686) says that a
speaker ‘decontextualizes’ speech from its original context and

‘recontextualizes’ it in a new conversational surrounding. In recontextualizing utterances, speakers, however, not only dissolve certain sequences of talk from their original contexts
and incorporate them into a new context, they also adapt them to their own functional
intentions and communicative aims. Thus, the quoted utterance is characterized by trans-
formations, modifications, and functionalizations according to the speaker’s aims and the
new conversational context. Here, prosody and voice quality play important roles. The use
of different voices is an interactive resource to contextualize whether an utterance is an-
chored in the reporting world or in the storyworld, to differentiate between the quoted
characters, to signal the particular activity a character is engaged in, and to evaluate the
quoted utterance.

In spoken language, prosody and voice quality play important roles in conveying point
of view, and in ‘constructing’ the dialogue that is reported. Streeck (2002) discusses
how users of spoken language may also include mimetic enactment in their ‘quota-
tions’, particularly those introduced by be+like. He calls such usage “body quotation”:
“a mimetic enactment, that is, a performance in which the speaker acts ‘in character’
rather than as situated self” (Streeck 2002, 581). One of his examples (Streeck 2002,
584) is given in (2).

gesture “sticking card into”
(2) But then they’re like “Stick this card into this machine”

Streeck (2002, 591) goes on to describe enactment further:

During an enactment, the speaker pretends to inhabit another body ⫺ a human one or
that of an alien, perhaps even a machine, or her own body in a different situation ⫺ and
animates it with her own body, including the voice. Enactments have the character of
samples: They are made out to possess the features of, and to be of the same kind as, the
phenomena that they depict. In other words, in enactments, speakers’ expressive behaviors
exemplify actions of the story’s characters.

Speakers can thus report the speech, thoughts, and even actions of another, using the
syntax of direct quotation. In this way, the speaker’s interpretation of the original
actor’s point of view can also be expressed. These observations about reporting can be
useful in understanding a range of phenomena in sign languages, discussed next. These
phenomena cover the full continuum between reporting the speech (throughout the
term ‘speech’ is intended to include signed utterances), thoughts, and actions of an-
other. Previous research has varied between considering the phenomena as quite dis-
tinct from each other versus as quite related. It will be argued here that they are indeed
related, in ways very similar to the observations just made about spoken languages.
There have been a variety of proposals for how to analyze these phenomena. These
proposals will be reviewed, and the chapter will conclude with a suggestion regarding
how future analyses might fruitfully proceed.

2. Early approaches to role shift


In early research on the structure of American Sign Language (ASL) and other sign
languages, a phenomenon known as ‘role shift’ or ‘role play’ was discussed. The idea
was that the grammar of these sign languages included a mechanism whereby signers
could shift into the role of a character, conveying information from that character’s
perspective. This phenomenon is characteristic of particularly skilled signing, and used
especially during story-telling.
The descriptions of role shift made it seem like a special way in which sign language
could take advantage of the visual modality (Friedman 1975). For example, Mandel
(1977, 79⫺80) said:

It is common for a signer to take the role of a person being discussed […] When two or
more people are being talked about, the signer can shift from one role to another and
back; and he usually uses spatial relationships to indicate this ROLE-SWITCHING. In
talking about a conversation between two people, for instance, a signer may alternate roles
to speak each person’s lines in turn, taking one role by shifting his stance (or just his head)
slightly to the right and facing slightly leftward (thus representing that person as being on
the right in the conversation), and taking the other role by the reverse position. […] Similar
role-switching can occur in nonquotative narrative. […] A signer may describe not only
what was done by the person whose role he is playing, but also what happened to that
person.

Pfau and Quer (2010, 396) expand on the difference between quotational and non-
quotational uses of role shift:

Role shift (also known as role taking and referential shift) plays two, sometimes overlap-
ping roles in the grammar of sign languages. First, in its quotational use, it is used to
directly report the speech or the unspoken thoughts of a character (also known as con-
structed discourse). […] Second, in its nonquotational use, role shift expresses a character’s
action, including facial expressions and nonlinguistic gestures. That is, the signer embodies
the event from the character’s perspective. This embodiment is also referred to as con-
structed or reported action.

An illustration of role shift is given in Figure 17.1. In this example, the signer indicates
the locus of the wife by her eye gaze and lean toward the right during the sign say;
then in shifting the shoulders and turning the head facing left she ‘assumes’ the ‘role’
of the wife and the following signs are understood as conveying the wife’s words.
rs: wife
wife say <you fine> [ASL]
Fig. 17.1: Role shift example

Padden (1986, 48⫺49) made the following comments about role-shifting:

Role-shifting is marked by a perceptible shift in body position from neutral (straight facing) to one side and a change in the direction of eye gaze for the duration of ‘the role.’ […] in informal terms, the signer ‘assumes’ the ‘role’ […]
‘Role-shifting’ is perhaps an unfortunate term. It suggests structures which resemble play-acting; indeed, this is how these structures have been described. […] As it turns out, there
are interesting constraints on role-shifting which indicate that its place in the syntactic and
discourse system of ASL should be explored further.

Padden (1986, 49⫺50) provided helpful examples of role-shifting, such as those given
in (3) and (4).

rs: husband
(3) husband <really i not mean> [ASL]
‘The husband goes, “Really, I didn’t mean it.”’
rs: husband
(4) husband <work> [ASL]
‘The husband was like ⫺ “here I am, working.”’

In example (3), the husband’s words or perhaps thoughts are reported by the signer.
In example (4), Padden uses be+like for the English translation. As discussed above, quotations introduced with be+like in English frequently represent what Streeck (2002)
calls “body quotation”. Padden describes the example as not replicating discourse, and
offers as an alternative English translation, “The husband was working”. The example
may be quoting the husband’s thoughts, but it may be ‘quoting’ just his actions, from
his point of view.
Lillo-Martin (1995) also noted that what role shift conveys is very similar to what
is conveyed with the colloquial English use of like, as in, “He’s like, I can’t believe you
did that!” (This use of like is to be distinguished from its use as a hedge or focus
marker; Miller/Weinert 1995; Underhill 1988.) Like need not convey direct discourse,
but portrays the point of view of its subject. Researchers have examined the use of
like as an introducer of “internal dialogue, gesture, or speech” (Ferrara/Bell 1995, 285;
cf. also Romaine/Lange 1991). In (5) some natural examples collected by Ferrara and
Bell (1995, 266) are given. They could be representations of speech, but may also
reflect internal dialogue or attitude, and may well be accompanied by relevant gestures.

(5) a. I was like, “Who is it?”
b. You’re like, “Okay.”
c. She’s like, “Well I take it y’all are dating now.”
d. My Mom’s like, you know, “I trust your driving.”
e. So we’re like, “What?” [motorist in another car tries to signal to the narrator
that his car is on fire]

Padden’s translation of (4) makes explicit this comparison between role shift and the
use of English be+like.
The point that role shift does not necessarily quote a person’s words or even
thoughts is also made in the following examples from Meier (1990, 184). In example
(6a), the first-person pronoun (glossed indexs by Meier) is to be interpreted as repre-
senting what the girl said. All the rest of the example within the role shift (indicated
by 1[ ]1) represents the girl’s actions. In example (6b), no first-person pronoun is
used. However, the event is still narrated from the girl’s point of view, as indicated by
the notation 1[ ]1, and the eye gaze. The report here represents the girl’s actions as
well as her emotional state (scared).

(6) a. yesterday indexs seej girl [ASL]
walk jperson-walk-tok
gaze down
mm gaze i
1[walk. look-upi.
gaze i gaze i gaze i
man iperson-move-tos. indexs scared. hits]1
‘Yesterday I saw this girl. She walked by in front of me. She was strolling
along, then she looked up and saw this man come up to her. “I’m scared”
[she said]. He hit her.’
b. gaze down
mm gaze i
1[walk. look-upi.
gaze i gaze i
man iperson-move-tos. scared.]1
‘She was strolling along, then she looked up and saw this man come up to
her. She was scared.’

For the purposes of this chapter, all these types of reports are under consideration.
Some report the words or thoughts of another (although not necessarily verbatim).
Such cases will sometimes be referred to as quotational role shift. Other examples
report a character’s emotional state or actions, including, as Mandel pointed out, ac-
tions of which the character is recipient as well as agent. These cases will be referred
to as non-quotational. What unifies these different types of reports is that they portray
the event from the point of view of the character, as interpreted by the speaker.
Some analyses treat these different uses of role shift as different aspects of the same
phenomenon, while others look at the uses more or less separately. For example, many
researchers have focused on the quotational uses of role shift, and they may restrict
the term to these uses (including non-verbatim quotation of words or thoughts). Others
focus on the non-quotational uses. Kegl (1986) discussed what is considered here a
type of non-quotative use of role shift, which she called a role prominence marker ⫺
specifically, a role prominence clitic. She proposed that this marker is a subject clitic,
and that the NP agreeing with it is interpreted with role prominence ⫺ that is, it marks
the person from whose perspective the event is viewed.
Early researchers concluded that role shift is not the same as direct reported speech,
although it is sometimes used for that purpose. Banfield’s (1973, 9) characterization of
direct speech, which reflected a then widely-held assumption, was that it “must be
considered as a word for word reproduction” of the quoted speech, in contrast to
indirect speech. As discussed in section 1, some more recent researchers have rejected
this view of direct speech. However, earlier analyses of direct speech would not suffice
to account for role shift, since it was clear that role shift is not limited to word-for-
word reproduction of speech, but is a way of conveying a character’s thoughts, actions,
and perspective.
Likewise, role shift was early seen as clearly different from indirect speech. One of
the important characteristics of quotational role shift is a change in interpretation for
first-person pronouns and verb agreement. As in direct quotation, the referent of a
first-person pronoun or verb agreement under role shift is not the signer. It is the
person whose speech or thoughts are being conveyed. This is illustrated in example (3)
above. The signer’s use of the first-person pronoun is not meant to pick out the signer
of the actual utterance, but the speaker of the quoted utterance (in this case, the
husband). Therefore, an analysis of role shift as indirect speech also would not suffice.
Engberg-Pedersen (1993, 1995), working on Danish Sign Language (DSL), divided
role shifting into three separate phenomena, as given in (7) and described in the follow-
ing paragraph (Engberg-Pedersen 1993, 103). Note that Engberg-Pedersen uses the
notation ‘1.p’ to refer to the first person pronoun, and ‘locus c’ to refer to the
signer’s locus.

(7) 1. shifted reference, i.e., the use of pronouns from a quoted sender’s point of
view, especially the use of the first person pronoun 1.p to refer to somebody
other than the quoting sender;
2. shifted attribution of expressive elements, i.e., the use of the signer’s face and/
or body posture to express the emotions or attitude of somebody other than
the sender in the context of utterance;
3. shifted locus, i.e. the use of the sender locus for somebody other than the
signer or the use of another locus than the locus c for the signer.

In shifted reference, which Engberg-Pedersen says is confined to direct discourse, the first person pronoun is used to refer to someone other than the signer; that is, the
person quoted. In shifted attribution of expressive elements, the signer’s signs, face,
and body express the emotions or attitude of another. This may be within a direct
discourse, but does not necessarily have to be; it may be within ‘represented thought’.
Engberg-Pedersen compares shifted attribution of expressive elements to the use of
voice quality to distinguish speakers in reported dialogues in spoken languages. The
third category, shifted locus, is similar to shifted reference, in that the signer’s locus is
used for reference to another ⫺ but in this case, the signer’s locus is used in verb
agreement only, not in overt first-person pronouns. Unlike shifted reference, shifted
locus is not limited to direct discourse. Furthermore, according to Engberg-Pedersen,
shifted locus is not always marked overtly by a change in body position. (Padden made
the same observation about examples such as the one in (4).)
Fig. 17.2: Distinction between shifted attribution of expressive elements and shifted locus: (a) She looked at him arrogantly (woman’s point of view) [DSL]; (b) She looked at him arrogantly (man’s point of view) [DSL] (Reprinted from Engberg-Pedersen 1993 with permission)

Engberg-Pedersen shows interesting ways in which these different characteristics of ‘role play’ are separable. For example, the signer’s locus can be used to refer to one
character under shifted locus, while the facial expression conveys the attitude of a
different character under shifted attribution of expressive elements. An example from
Engberg-Pedersen is given in Figure 17.2.
Both panels of Figure 17.2 show the verb look-at, and in both, the signer’s face is
used to express the woman’s (i.e., the referent of the grammatical subject’s) point of
view. However, the verb agreement is different in the two panels. In Figure 17.2a, the
verb shows regular agreement with the object/goal (the man). However, in Figure
17.2b, the verb uses the first-person locus for the object/goal agreement. This means
that while the signer’s locus is used to represent the man for purposes of verb agree-
ment (under shifted locus), it is representing the woman for the shifted attribution of
expressive elements.
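To make the contrast concrete in gloss form ⫺ assuming locus a for the woman and locus b for the man (our illustrative notation, not Engberg-Pedersen’s transcription) ⫺ Figure 17.2a amounts to alook-atb, with regular object agreement at the man’s locus, whereas Figure 17.2b amounts to alook-at1, with first-person object agreement; in both, the woman’s arrogant facial expression is overlaid on the verb.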
Engberg-Pedersen’s characterization makes an explicit claim about the use of first-
person pronouns which needs further consideration. She says that the use of overt
first-person pronouns to refer to someone other than the signer is restricted to direct
discourse (quotation). However, the signer’s locus (i.e., first person) can be used in
verb agreement to pick out someone other than the signer in non-direct-discourse
contexts. This contrast will be discussed in section 5.
Descriptions of role shift in other sign languages similar to those presented thus far
can be found for British Sign Language (BSL, Morgan 1999; Sutton-Spence/Woll 1998),
Catalan Sign Language (LSC, Quer/Frigola 2006), German Sign Language (DGS,
Herrmann/Steinbach 2011), Nicaraguan Sign Language (ISN, Pyers/Senghas 2007),
Quebec Sign Language (LSQ, Poulin/Miller 1995), and Swedish Sign Language (SSL,
Ahlgren 1990; Nilsson 2004).

3. Role shift as constructed action


Although most discussions of role shift until the mid-1990s differentiated it from re-
ported speech/direct quotation because of the idea that such quotation should be ver-
batim, some sign language researchers were paying attention to developments in the
fields of discourse which recognized the problems with such a claim for direct quotation
more generally. They adopted the view of Tannen (1989) that direct quotation should
be seen as constructed.
Liddell and Metzger (1998), following on work by Winston (1991) and Metzger
(1995), describe instances of role shift or role play in ASL as constructed action. Metz-
ger (1995, 261) describes an example, given in (8), in which constructed dialogue is a
part of a larger sequence of constructed action. In the example, the signer is portraying
a man seated at a card table looking up at another man who is asking for someone
named Baker. The example shows the card player’s constructed dialogue, which in-
cludes his gesture, raising his hand, and his facial expression and eye gaze. It also
includes his constructed action prior to the admission, looking up at the stranger, co-
occurring with the sign look-up. The whole example starts with the narrator signing
man, to inform the audience of the identity of the character whose actions and utter-
ance will be (re-)constructed next.

to addressee gaze forward to up left lower lip extended/head tilt/gaze up left
(8) man cards-in-hand look-up, “that (raise hand) that pro.1” [ASL]
‘So one of the guys at the table says, “Yeah, I’m Baker, that’s me.”’

This flow between narrator, constructed action, and constructed dialogue is characteris-
tic of ASL stories. As we have seen, however, it is not something special to sign lan-
guages, or some way in which sign languages are different from spoken languages.
Speakers also combine words, gestures, facial expressions, and changes in voice quality
to convey the same range of narrative components.
Liddell and Metzger (1998) draw these parallels quite clearly. They aim to point
out that parts of a signed event are gestural while other parts are grammatical, just as
in the combination of speech, such as “Is this yours?” while pointing to an object such
as a pen. They state (Liddell/Metzger 1998, 659), “The gestural information is not
merely recapitulating the same information which is grammatically encoded. The ad-
dressees’ understanding of the event will depend on both the grammatically encoded
information and the gestural information.” This combination of grammatical and ges-
tural is crucially involved in constructed action.
Liddell and Metzger use the theory of Mental Spaces proposed by Fauconnier
(1985), and the notion of mental space blends discussed by Fauconnier and Turner
(1996), to account for the range of meanings expressed using constructed actions. In
their view, the signer’s productions reflect a blend of two mental spaces. One of these
mental spaces may be the signer’s mental representation of their immediate environ-
ment, called Real Space. Other spaces are conceptual structures representing particular
aspects of different time periods, or aspects of a story to be reported. In their paper,
Liddell and Metzger analyze examples elicited by a Garfield cartoon. In this case, the signer’s
mental conception of the cartoon, called Cartoon space, can blend with Real Space.
Using such a blend, the signer may select certain aspects of the situation to be con-
veyed in different ways. This can be illustrated with example (9) (Liddell/Metzger 1998,
664⫺665).
(9) cat look-up “oh-shit” cl-x(press remote control) [ASL]
‘The cat looked up at the owner. He thought, “Oh shit” and pressed the re-
mote control.’

As with Metzger’s (1995) example given in (8) above, this example includes the narra-
tor’s labeling of the character, the character’s constructed action (both in the signer’s
looking up and in his signed description look-up), and the character’s constructed
dialogue (his thoughts). Liddell and Metzger point out that the signer’s hands do not
represent the character’s hands during the sign look-up, but that they are constructing
the character’s signs during the expletive “oh-shit”. Of course, the cat Garfield does
not sign even in the cartoon, but the signer is ‘constructing’ his utterance ⫺ just as
speakers might ‘speak’ for a cat (Tannen 1989 gives such examples as part of her
argument for dissociating constructed dialogue from verbatim quotation).
To illustrate the range of meanings (generally speaking) expressed by different types
of constructed action, Liddell and Metzger (1998, 672) give the following table:

Tab. 17.1: Types of constructed actions and their significance
Types of constructed actions ⫺ What they indicate
Articulation of words or signs or emblems ⫺ What the |character| says or thinks
Direction of head and eye gaze ⫺ Direction |character| is looking
Facial expressions of affect, effort, etc. ⫺ How the |character| feels
Gestures of hands and arms ⫺ Gestures produced by the |character|

The analysis presented by Liddell and Metzger emphasizes the similarity between
constructed action in sign language and its parallels in spoken languages. As discussed
earlier, speakers use changes in voice quality, as well as gestures, to ‘take on a role’
and convey their construction of the actions, thoughts, or words of another. These
changes and gestures occur together with spoken language elements. It seems clear
that the main difference is that, for signers, all these components are expressed by
movements of the hands/body/facial expressions, so separating the gesture from the
grammatical is more challenging.
Other authors have made use of the cognitive linguistics framework account of
constructed action proposed by Liddell and Metzger and have extended it in various
ways. For example, Aarons and Morgan (2003) discuss the use of constructed action
along with classifier predicates and lexical signs to express multiple perspectives se-
quentially or simultaneously in South African Sign Language.
Dudis (2004) starts with the observation that the signer’s body is typically used in
constructed action to depict a body. But he argues that actually, not all parts of the
signer’s body will be used in the blend, and furthermore, different parts of the signer’s
body can be partitioned off so as to represent different parts of the input to the blend.
For example, Dudis discusses two ways of showing a motorcyclist going up a hill. In
one, the signer’s torso, head, arms, hands, and facial expression all convey the motorcy-
clist: the hands holding the handles, the head tilted back, looking up the hill, the face
showing the effort of the climb. In the second, the signer’s hands are ‘partitioned off’,
and used to produce a verb meaning vehicle-goes-up-hill. But the torso, head, and face
are still constructing aspects of the motorcyclist’s experience. As Dudis (2004, 228)
describes it:

A particular body part that can be partitioned off from its role in the motorcyclist blend,
in this instance the dominant hand. Once partitioned off, the body part is free to participate
in the creation of a new element. This development does not deactivate the motorcyclist
blend, but it does have an impact. The |motorcyclist’s| hands are no longer visible, but
conceptually, they nevertheless continue to be understood to be on the |handles|. This is
due to pattern completion, a blending operation that makes it possible to ‘fill in the blanks’.

Dudis shows that in such multiple Real Space blends, different perspectives requiring
different scales may be used. One perspective is the participant viewpoint, in which
“objects and events […] are described from the perspective of the [participant]. The
scalar properties of such a blend, as Liddell (1995) shows, are understood to be life-
sized elements, following the scale of similar objects in reality” (Dudis 2004, 230). The
other perspective is a global viewpoint. For example, when the signer produces the
verb for a motorcycle going uphill, the blend portrayed by the hands uses the global
viewpoint. As Dudis (2004, 230) says:

The smaller scale of the global perspective depiction involving the |vehicle| is akin to a
wide-angle shot in motion-picture production, while the real-space blend containing the
participant |signer as actor| is akin to a close-up shot. It is not possible for the |signer as
actor| and the |vehicle| to come into contact, and the difference in scale is one reason why.

Janzen (2004) adds some more important observations about the nature of constructed
action and its relationship to presenting aspects of a story from a character’s perspec-
tive. First, Janzen emphasizes a point made also by Liddell and Metzger (1998), that
there is not necessarily any physical change in the body position to accompany or
indicate a change in perspective. To summarize (Janzen 2004, 152⫺153):

Rather than using a physical shift in space to encode differing perspectives as described
above, signers frequently manipulate the spatially constructed scene in their discourse by
mentally rotating it so that other event participants’ perspectives align with the signer’s
stationary physical vantage point. No body shift toward various participant loci within the
space takes place. … [T]he signer has at least two mechanisms ⫺ a physical shift in space
or mental rotation of the space ⫺ with which to accomplish this discourse strategy.

Because of the possibility for this mental rotation, Janzen (2004, 153) suggests, “this
discourse strategy may represent a more ‘implicit’ coding of perspective (Graumann
2002), which requires a higher degree of inference on the part of the addressee.” This
comment may go some way toward explaining a frequent observation, which is that
narratives containing a large amount of constructed action are often more difficult for
second-language learners to follow (Metzger 1995). Despite the frequent use of gesture
in such structures, they can be difficult for the relatively naïve addressee who has the
task of inferring who is doing what to whom.
Janzen also argues that constructed action does not always portray events from a
particular perspective, but is sometimes used to indicate which character’s perspective
is excluded. To indicate perspective shifts towards and away from a character, an alternate character might be employed; the choice of the alternate character, however, may be less important than the simple shift away. In fact, Janzen claims that these perspective shifts
can also be used with unobserved events, indicating (e.g., by turning the head away)
that a character is unaware of the event, and not involved in it. In such cases, body
partitioning such as Dudis describes is needed: the head/eyes show the perspective of
the non-observer, while the hands may sign or otherwise convey the unseen event.

4. Formal approaches

The description of role shift as a type of constructed action recognizes that many
components of this phenomenon are analogous to the use of gestures and changes in
voice quality during narration in spoken languages. However, some researchers have
nevertheless been interested in pursuing a formal analysis of certain aspects of role
shift, particularly the change in reference for the first-person pronoun.
Lillo-Martin (1995) compared shifted reference of first-person pronouns with the
use of a logophoric pronoun in some spoken languages. In languages such as Abe,
Ewe, and Gokana, a so-called ‘logophoric pronoun’ is used in the embedded clause of
certain verbs, especially verbs that convey another’s point of view, to indicate co-refer-
ence with a matrix subject or object (Clements 1975; Hyman/Comrie 1981; Koopman/
Sportiche 1989). In the example in (10a) (Clements 1975, 142), e is the non-logophoric
pronoun, which must pick out someone other than the matrix subject, Kofi. In (10b), on
the other hand, yè is the logophoric pronoun, which must be co-referential with Kofi.

(10) a. Kofi be e-dzo [Ewe]
Kofi say pro-leave
‘Kofii said that hej left.’
b. Kofi be yè-dzo
Kofi say Log-leave
‘Kofii said that hei left.’

Lillo-Martin (1995) proposed that the ASL first-person pronominal form can serve as
a logophoric pronoun in addition to its normal use. Thus, in logophoric contexts (within
the scope of a referential shift), the logophoric pronoun refers to the matrix subject,
not the current signer.
Lillo-Martin further proposed that ASL referential shift involves a point of view
predicate, which she glossed as pov. pov takes a subject which it agrees with, and a
clausal complement (see Herrmann/Steinbach (2011) for an analysis of role shift as a
non-manual agreement operator). This means that the ‘quoted’ material is understood
as embedded whether or not there is an overt matrix verb such as say or think. Any
first-person pronouns in the complement to the pov predicate are logophoric; they are
interpreted as co-referential with the subject of pov. According to Lillo-Martin’s (1995,
162) proposal, the structure of a sentence with pov, such as (11), is as in (12).
< ashift>
(11) amom apov 1pronoun busy. [ASL]
‘Mom (from mom’s point of view), I’m busy.’
= ‘Mom’s like, I’m busy!’
(12) [tree diagram not reproduced]

According to the structure in (12), pov takes a complement clause. This CP is intro-
duced by an abstract syntactic operator, labeled Op. The operator is bound by the
subject of pov ⫺ the subject c-commands it and they are co-indexed. The operator also
binds all logophoric pronouns which it c-commands ⫺ hence, all 1pronouns in the
complement clause are interpreted as coreferential with the subject of pov.
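The gist of (12) can also be given as a labeled bracketing ⫺ our reconstruction from the description just given, with illustrative category labels rather than Lillo-Martin’s original tree diagram:
[IP amomi [VP apov [CP Opi [IP 1pronouni busy ]]]]
Here the subject amom binds the operator Op introducing the complement CP, and Op in turn binds every logophoric 1pronoun it c-commands, deriving the co-reference pattern in (11).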
Lee et al. (1997) argue against Lillo-Martin’s analysis of role shift. They focus on
instances of role shift introduced by an overt verb of saying, as in the example given
in Figure 17.1 above, or example (13) below (Lee et al. 1997, 25).

rsi
(13) johni say ix1pi want go [ASL]
‘John said: “I want to go.”’

Lee et al. argue that there is no reason to consider the material following the verb of
saying as part of an embedded clause. Instead, they propose that this type of role shift
is simply direct quotation. As with many spoken languages, the structure would then
involve two logically related but syntactically independent clauses. Lee et al. suggest
that the use of non-manual marking at the discourse level, specifically head tilt and
eye gaze, functions to identify speaker and addressee.
Since Lee et al. only consider cases with an overt verb of saying, they do not include
in their analysis non-quotational role shift. The possibility that both quotational and
non-quotational role shift might be analyzed as forms of direct discourse will be taken
up in more detail in section 5.
The analysis of role shift, particularly with respect to the issue of shifting reference,
was recently taken up by Zucchi (2004) and Quer (2005, 2011). Zucchi and Quer are
both interested in a theoretical claim made on the basis of spoken language research
by Kaplan (1989). Kaplan makes the following claim about indexicals, as summarized
by Schlenker (2003, 29): “the value of an indexical is fixed once and for all by the
context of utterance, and cannot be affected by the logical operators in whose scope it
may appear”. In other words, we understand indexicals based on the context, but their
reference does not change once the context is established. Consider the examples in
(14)⫺(15), modified from Schlenker (2003).

(14) a. John thinks that I am a hero.
b. John thinks that he is a hero.
(15) a. John says that I am a hero.
b. John says that he is a hero.

In English, the (a) examples cannot be interpreted as the (b) examples ⫺ that is, the
reference of ‘I’ must be taken to be the speaker; it does not change to represent the
speaker or thinker of the reported event (John). It is of course this shifting of reference
which takes place in direct discourse in English, as in (16). This case is specifically
excluded from Kaplan’s concern.

(16) John says, “I am a hero.”

Kaplan’s claim is that no language can interpret indexicals in non-direct discourse contexts as shifted, in the way that they are interpreted in direct discourse. He says
that if an operator existed which would allow such a shift, it would be a ‘monster’.
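In slightly more formal terms ⫺ our paraphrase of the standard Kaplanian picture, not a notation used in the works cited ⫺ the character of an indexical is a function from contexts to contents, so that, e.g., ⟦I⟧c = the speaker of the context c, where c is fixed once and for all by the utterance situation. A ‘monster’ would then be an operator M such that ⟦M φ⟧c depends on ⟦φ⟧c′ for some shifted context c′ ≠ c, i.e., an operator that manipulates the context parameter itself.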
Schlenker (2003) objects to Kaplan’s claim on the basis of evidence from a number
of languages that do, he claims, allow such ‘monsters’. One type of example comes
from logophoric pronouns, which were discussed earlier. Clearly logophoric pronouns
seem to do exactly what Kaplan’s monsters would do, providing counter-evidence for
his claim that they do not exist. On the other hand, it is important not to allow indexi-
cals to shift willy-nilly, for surely this would lead to results incompatible with any
natural language.
Schlenker's solution is to posit context variables, introduced by matrix verbs
such as 'say' or 'think', relative to which shifting indexicals are interpreted. In
different languages, different indexicals are specified for the domain within which
they must be interpreted.
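The dispute can be made concrete with a little notation. The following LaTeX fragment is offered only as an illustrative reconstruction in our own notation (the labels speaker, SAY, and O, and the evaluation superscripts, are ours, not Kaplan's or Schlenker's exact formalism):

\documentclass{article}
\usepackage{amsmath,stmaryrd}
\begin{document}
% Kaplan: the character of an indexical is a function from contexts
% to contents; 'I' denotes the speaker of the utterance context c,
% whatever operators it is embedded under:
\[ \llbracket \mathrm{I} \rrbracket^{c,w} = \mathrm{speaker}(c) \]
% A 'monster' would be an operator O that rebinds the context
% parameter, as in the schema below; Kaplan claims natural
% languages contain no such O:
\[ \llbracket O\,\varphi \rrbracket^{c,w} = \llbracket \varphi \rrbracket^{c',w'}
   \qquad \text{for some } c' \neq c \]
% Schlenker: attitude verbs quantify over contexts, and a shiftable
% indexical may be evaluated at the reported context c' rather than
% at c (w_{c'} is the world of c'):
\[ \llbracket \mathrm{John\ says}\ \varphi \rrbracket^{c,w} = 1
   \;\text{iff}\;
   \forall c' \in \mathrm{SAY}(\mathrm{John},c,w):\,
   \llbracket \varphi \rrbracket^{c',\,w_{c'}} = 1 \]
\end{document}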
Zucchi (2004) considers whether role shift in sign language is another example
showing that monsters do in fact exist. His data focus on Italian Sign Language
(LIS), but it appears that the basic phenomenon is the same as we have seen for
other sign languages as well. Zucchi assumes that the quotational and non-quota-
tional uses of role shift are distinct in terms of at least some of the structures they
use. As for the quotational use of role shift, this would not be problematic for
Kaplan’s claim should this use be equivalent to direct discourse, since direct dis-
course has already been excluded. However, Zucchi argues that non-quotational
role shift still shows that the interpretation of indexicals must be allowed to shift
in non-direct discourse contexts.
In this context, a claim made by Engberg-Pedersen (1993), cited in (7) above,
becomes very relevant. Recall that Engberg-Pedersen claimed that (DSL) first-
person pronouns are only used in the shifted way within direct discourse. If shifted
pronouns can only be used in direct discourse, is there any ‘monster’ to be con-
cerned about?
The answer is ‘yes’. Numerous examples of role shift, including those provided
by Engberg-Pedersen, show that the verb may be produced with first-person agree-
ment which is interpreted as shifted, just as first-person pronouns are shifted. This
is what Engberg-Pedersen calls ‘shifted locus’ (as opposed to ‘shifted reference’).
The issue of why direct discourse allows shifted pronouns, while other cases of role
shift only allow shifted locus, will be discussed in section 5. For now, the important
point is that verb agreement with first person is just as ‘indexical’ as a first-person
pronoun for the issue under discussion.
With this in mind, Zucchi pursues a common analysis of shifting indexicals in
quotational and non-quotational contexts. It has three parts. The first part is the
introduction of another variable, this one for the speaker/signer (σ). Ordinarily, this
variable will refer to the speaker/signer of the actual utterance. However, Zucchi
proposes that the grammar of LIS also includes a covert operator which assigns a
different value to the variable σ. Furthermore, he proposes that the non-manual
markings of a role shift “induce a presupposition on the occurrence of the signer’s
variable, namely the presupposition that this variable denotes the individual corre-
sponding to the position toward which the body (or the eye gaze, etc.) shifts”
(Zucchi 2004, 14). In order to satisfy this presupposition in shifted contexts, the
operator that assigns a different value to the speaker/signer variable must be in-
voked.
Why does Zucchi use presupposition failure to motivate the use of the opera-
tor? It is because he seeks a unified analysis of quotational and non-quotational
shifts. He argues that the non-manual marking is “not in itself a grammatical marker
of quotes or of non quotational signer shift (two functions that could hardly be
accomplished by a single grammatical element)” (Zucchi 2004, 15⫺16). The non-
manual marking simply indicates that the presupposition regarding the σ variable
is at stake.
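Schematically, and only as our reconstruction rather than Zucchi's own formalization (the labels RS, sigma, signer, and x_i are ours), the three parts of the proposal fit together as follows:

\documentclass{article}
\usepackage{amsmath,stmaryrd}
\begin{document}
% (i) sigma is a distinguished signer variable; by default the
% assignment g maps it to the signer of the actual context c:
\[ g(\sigma) = \mathrm{signer}(c) \]
% (ii) a covert operator RS_i re-assigns sigma to the individual
% x_i associated with locus i:
\[ \llbracket \mathrm{RS}_i\ \varphi \rrbracket^{c,g}
   = \llbracket \varphi \rrbracket^{c,\ g[\sigma \mapsto x_i]} \]
% (iii) the role-shift non-manuals presuppose that sigma denotes
% the individual toward whose locus the body (or gaze) shifts;
% invoking RS_i is the only way to satisfy that presupposition.
% First-person forms inside phi are interpreted via g(sigma),
% hence as x_i, while every other coordinate of c stays fixed.
\end{document}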
Does this analysis show that there are, indeed, monsters of the type Kaplan
decried? In fact, Zucchi argues that neither the operator he proposes for role shift
nor the examples used by Schlenker actually constitute monsters. On Zucchi’s
analysis of LIS role shift, it is important that only the signer be interpreted as
shifted. The role shift operator thus does not change all of the features of the
context and is therefore not a monster.
However, Quer (2005, 2011) suggests that Zucchi's analysis may be oversimpli-
fied. He proposes a different solution to the problem, although, like Zucchi, his goal
is to unify the analysis of shifting indexicals in quotational and non-quotational uses
of role shift, bringing in new data from Catalan Sign Language (LSC).
Quer’s proposal moves the discussion further by bringing in data on the shifting
(or not) of indexicals in addition to pronouns, such as temporal and locative
adverbials. Relatively little research on role shift has mentioned the shiftability of
these indexicals, so clearly more research is needed on their behavior. According
to Quer, such indexicals show variable behavior in LSC. Importantly, some may
shift within the context of a role shift, while others may not. Herrmann and
Steinbach (2012) report a similar variability in context shift for locative and tempo-
ral indexicals in German Sign Language (DGS). Consider the examples in (17)
(Quer 2005, 153⫺154):
          t                  RS-i
(17) a. ixa madrid joani think ix-1i study finish here madrid [LSC]
        'When he was in Madrid, Joan thought he would finish his studies there
        in Madrid.'
          t                         RS-i
     b. ixa madridm moment joani think ix-1i study finish hereb
        'When he was in Madrid, Joan thought he would finish his studies in
        Barcelona.'
According to Quer, when under the scope of role shift the locative adverbial here
can be interpreted vis-à-vis the context of the reported event (as in (17a)), or the
context of the utterance (as in (17b), if it is uttered while the signer is in Barcelona).
If adverbials as well as pronouns can shift, it is clear that none of the
previous formal analyses, which focused exclusively on the shift of pronouns, is
adequate. Amending such analyses by simply adding temporal adverbials to the list of
indexicals that may shift would lead to an unnecessarily complex account when an
alternative analysis can instead be developed which covers both pronominal and
adverbial indexicals. This is the approach pursued by Quer.
Quer’s analysis builds on the proposals of Lillo-Martin (1995), but implements
them in a very different way. He proposes that role shift involves a covert Point
of View Operator (PVOp), which is an operator over contexts à la Schlenker,
sitting in a high functional projection in the left periphery of the clause. While
Lillo-Martin’s analysis has a pov predicate taking a complement clause as well as
an operator binding indexical pronouns, Quer’s proposal simplifies the structure
involved while extending it to include non-pronominal indexicals. Although the
PVOp proposed by Quer is covert, he claims that it “materializes in RS nonmanual
morphology” (Quer 2005, 161). In this way, he claims, it is similar to other sign
language non-manual markers that are argued to be realizations of operators.
Quer's proposal is of special interest with regard to the possibility that some
indexicals shift while others do not, as illustrated in (17b) earlier. As he notes, such
examples violate the 'Shift Together Constraint' proposed by Anand and Nevins
(2004), which states that the various indexicals in a shifting context must all shift
together. Examples like this should be considered further, and might fruitfully be
compared with 'free indirect discourse' or 'mixed quotation', both of which mix
aspects of direct and indirect quotation (Banfield 1973 and recent work by
Cumming 2003, Sharvit 2008, among others).
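One way to picture what such an operator must be able to do, offered purely as an illustrative sketch in our notation rather than Quer's formalization (PVOp_i, speaker, loc, and x_i are our labels), is to treat a context as a tuple of coordinates that the operator may overwrite selectively:

\documentclass{article}
\usepackage{amsmath,stmaryrd}
\begin{document}
% A context as a tuple of coordinates:
\[ c = \langle \mathrm{speaker}(c),\ \mathrm{time}(c),\ \mathrm{loc}(c) \rangle \]
% A point-of-view operator anchored to the attitude holder x_i
% rebinds the context, overwriting the speaker coordinate alone
% or together with other coordinates:
\[ \llbracket \mathrm{PVOp}_i\ \varphi \rrbracket^{c} = \llbracket \varphi \rrbracket^{c'},
   \qquad c' = c[\mathrm{speaker} \mapsto x_i]
   \ \text{ or } \ c' = c[\mathrm{speaker} \mapsto x_i,\ \mathrm{loc} \mapsto l_i] \]
% If HERE is evaluated at loc(c') it shifts, as in (17a); if loc is
% left untouched, HERE keeps its utterance-context value while first
% person still shifts, as in (17b): a violation of Shift Together.
\end{document}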
5. Integration
This chapter has summarized two lines of analysis for role shift in sign languages.
One line compares it to constructed action, and subsumes all types of reports
(speech, thoughts, actions) under this label. The other line attempts to create formal
structures for role shifting phenomena, focusing in some cases on the syntactic
structures involved and in other cases on the semantics needed to account for
shifting indexicals.
What is to be made of these various approaches to role shift in sign languages?
Is this a case of irreconcilable differences in theoretical foundations? Perhaps the
questions one side asks simply make no sense to the other. However, both
approaches capture important aspects of the phenomenon, and a direction is
suggested here for gaining from both views, which may eventually result in a more
comprehensive analysis than either approach alone.
To begin with, the comparison between role shift and constructed action is quite
apt. As happens not infrequently when comparing aspects of sign and spoken
language, the sign phenomena can lead to a broadening of our consideration of what
languages do, not because sign languages are so different from spoken languages, but
because there is more going on in spoken languages than previously considered.
Let us take into consideration what speakers do with gestures, facial expressions,
and changes in voice quality alongside their words.
As Liddell (1998) points out, what speakers do is actually rather similar to what
signers do. Constructed dialogue portrays much more than a verbatim replication
of another's spoken words. Just as in role play, thoughts can be 'quoted', and the
narrator's point of view can shift to that of a represented character (shifted
attribution of expressive elements). Furthermore, co-speech gestures may participate
in constructed action more generally, giving more information about how a character
performed an action, or about other aspects of the character's viewpoint.
If role shift is constructed action, and constructed action is an expanded concep-
tion of direct discourse, what kinds of formal structures are involved? De Vries
(2008) shows that direct quotation in spoken languages can take a number of
syntactic forms. Importantly, he shows that quotational clauses have the structure
of main clauses, not embedded clauses. This is in line with the proposal of Lee et
al. that role shift involves a syntactically independent clause, not an embedded
clause. How can the shifting of indexicals be integrated into this proposal?
First, consider the quotative use of role shift. For many researchers, direct
quotation sets up a domain which is opaque to semantic analysis. For example, de
Vries (2008) follows Clark and Gerrig (1990) in considering quotation to be,
pragmatically, a demonstration. He argues that, syntactically, direct quotation can take a
variety of forms, but the quoted form is inserted as atomic. His proposal takes the
following form (de Vries 2008, 68):
I conclude that quotation can be viewed as a function ⫺ call it quote α ⫺ that turns
anything that can pragmatically serve as a (quasi-)linguistic demonstration into a syntac-
tic nominal category:
(62) quote α: f(α) → [N “α”]
The quotation marks in the output are a provisional notational convention indicating
that α is pragmatically a demonstration, and also that α is syntactically opaque. If α
itself is syntactically complex, it can be viewed as the result of a previous derivation.
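De Vries's function can be given a rough type-theoretic paraphrase; the following is our own illustrative extrapolation from (62), not his notation (quote, c_q, and speaker are our labels):

\documentclass{article}
\usepackage{amsmath,stmaryrd}
\begin{document}
% quote maps any object alpha that can serve as a demonstration
% (a sentence, a gesture, a noise) onto an opaque nominal; alpha's
% content is computed in its own, previous derivation, relative to
% the quoted context c_q:
\[ \mathrm{quote}:\ \alpha \ \longmapsto\ [_{\mathrm{N}}\ \text{``}\alpha\text{''}] \]
% Indexicals inside alpha are therefore resolved at c_q, not at the
% context of the framing sentence: within a quote, I = speaker(c_q).
\[ \llbracket \mathrm{I} \rrbracket^{c_q} = \mathrm{speaker}(c_q) \]
\end{document}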
On such an analysis, the quoted material is inserted into a sentence but its semantic
content is not analyzed as part of the larger sentence. Rather, the content would
presumably be calculated in the ‘previous derivation’ where the syntactically com-
plex quoted material is compiled. In this case, interpretation of shifters would take
place according to the context of the quotation (when quoting Joan, ‘I’ refers to
the quoted speaker, Joan). So, if quotation is simply a demonstration, there might
be no issue with the shifting of indexicals. Thus, quotative role shift might not pose
any particular challenges to the formal theorist. What about its non-quotative uses?
Now we must confront the issue of which indexicals shift in non-quotative role
shift. Recall Engberg-Pedersen's claim that first-person pronouns shift only in
direct discourse. As was pointed out in the previous section, the fact that first
person agreement is used on verbs in non-quotative role shift indicates that some
account of shifting is still needed. But why should the shifting of first-person
pronouns be excluded from non-quotative role shift?
The answer might be that it’s not that the pronoun used to pick out the character
whose point of view is being portrayed fails to shift, but rather that no pronouns ⫺
or noun phrases ⫺ are used to name this character within non-quotative role shift.
This type of constructed action focuses on the action, without naming the partici-
pants within the scope of the shift. This is true for all the examples of non-quotative
role shift presented thus far. Consider also Zucchi’s (2004, 6) example of non-
quotative role shift, given below in (18) (Zucchi uses the notation ‘/Gianni’ to
indicate role shift to Gianni).
                         /Gianni
(18) gianni arrive book I⫺donate⫺you [LIS]
     'When Gianni comes, he'll give you a book as a present.'
In this example, the agent (gianni) and the theme (book) are named, but before the
role shift occurs. The role shift co-occurs with the verb and its agreement markers.
This mystery is not solved, but made somewhat less mysterious, by considering
again the comparison between sign language and spoken language. In a spoken
English narration of the story of Goldilocks and the Three Bears, a speaker might
gesture along with the verb in examples such as (19). In these examples, the verb
and gesture constitute a type of constructed action.
(19) a. And she ate it all up.
                g(eating)
     b. And she was, like, eating it all up.
                           g(eating)
However, if the speaker adds a first-person pronoun, as in (20), the interpretation
changes to quotation. As usual with be + like, the report need not be an actual
verbatim quote of what the character said (in the story), but may be a report of
her thoughts. But the interpretation changes sharply in comparison to the example
with no pronoun.
(20) And she was, like, I'm eating it all up.
                            g(eating)
So it seems to be a more general property of non-quotational constructed action
that rules out the use of any pronoun (or noun phrase) referring to the character
whose point of view is being portrayed, rather than a specific restriction against
shifted first-person pronouns. What about other indexical elements, such as temporal
or locative adverbs? No examples of non-quotational role shift with shifted
indexicals other than first-person agreement have been reported. This is clearly a
matter for additional research.
With this in mind, a system is needed to accommodate the shifting nature of
first-person verb agreement (and possibly other indexicals) under non-quotational
role shift. The proposal by Quer (2005, 2011) has the necessary components: an
operator over contexts which can (if needed) separately account for the shifting of
different indexicals. This type of approach can then account for the full range of
phenomena under consideration here.
6. Conclusion
In recent years, there have been two approaches to role shift in sign languages.
One approach makes the comparison between role shift and constructed action
(including constructed dialogue). This approach highlights similarities between con-
structed action in sign languages and the use of voice quality and gestures for
similar purposes in spoken languages. The second approach brings formalisms from
syntax and semantics to understanding the nature of the shifted indexicals in role
shift. This approach also makes comparisons between sign languages and spoken
languages, finding some possible similarities between the shifting of indexicals in role
shift and in logophoricity and other spoken language phenomena. More research is
needed, particularly in determining the extent to which different indexicals may or
may not shift together in both quotative and non-quotative contexts across different
sign languages.
Do these comparisons imply that there is no difference between signers and
speakers in their use of constructed action and shifting indexicals? There is at least
one way in which they seem to be different. Quinto-Pozos (2007) asks to what
degree constructed action is obligatory for signers. He finds that at least some
signers find it very difficult to describe certain scenes without the use of different
markers of constructed action (body motions which replicate or indicate the motions
of depicted characters). He suggests that there may be differences in the relative
obligatoriness of constructed action in sign vs. speech. Exploring this possibility and
accounting for it are tasks for future research.
Acknowledgements: The research reported here was supported in part by Award
Number R01DC00183 from the National Institute on Deafness and Other Communi-
cation Disorders. The content is solely the responsibility of the author and does
not necessarily represent the official views of the National Institute on Deafness
and Other Communication Disorders or the National Institutes of Health.
Notation specific to this chapter

rs            role shift
/Gianni       role shift (to the character named)
|character|   in the notation of works by Liddell and colleagues, words in vertical
              line brackets label 'grounded blend elements'
7. Literature
Aarons, Debra/Morgan, Ruth
2003 Classifier Predicates and the Creation of Multiple Perspectives in South African
Sign Language. In: Sign Language Studies 3(2), 125⫺156.
Ahlgren, Inger
1990 Deictic Pronouns in Swedish and Swedish Sign Language. In: Fischer, Susan D./
Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Volume 1: Lin-
guistics. Chicago: The University of Chicago Press, 167⫺174.
Anand, Pranav/Nevins, Andrew
2004 Shifty Operators in Changing Contexts. In: Young, Robert (ed.), Proceedings of
SALT 14. Ithaca, NY: CLC Publications, 20⫺37.
Banfield, Ann
1973 Narrative Style and the Grammar of Direct and Indirect Speech. In: Foundations
of Language 10, 1⫺39.
Clark, Herbert/Gerrig, Richard
1990 Quotations as Demonstrations. In: Language 66, 764⫺805.
Clements, George N.
1975 The Logophoric Pronoun in Ewe: Its Role in Discourse. In: Journal of West African
Languages 2, 141⫺171.
Cumming, Samuel
2003 Two Accounts of Indexicals in Mixed Quotation. In: Belgian Journal of Linguistics
17, 77⫺88.
Davidson, Donald
1984 Quotation. In: Davidson, Donald (ed.), Inquiries into Truth and Interpretation. Ox-
ford: Clarendon Press, 79⫺92.
Dudis, Paul G.
2004 Body Partitioning and Real-Space Blends. In: Cognitive Linguistics 15(2), 223⫺238.
Emmorey, Karen/Reilly, Judy (eds.)
1995 Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum Associates.
Engberg-Pedersen, Elisabeth
1993 Space in Danish Sign Language. Hamburg: Signum.
Engberg-Pedersen, Elisabeth
1995 Point of View Expressed through Shifters. In: Emmorey, Karen/Reilly, Judy (eds.),
Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum Associates, 133⫺154.
Fauconnier, Gilles
1985 Mental Spaces: Aspects of Meaning in Natural Language. Cambridge: Cambridge
University Press.
Fauconnier, Gilles/Turner, Mark
1996 Blending as a Central Process of Grammar. In: Goldberg, Adele (ed.), Conceptual
Structure, Discourse and Language. Stanford, CA: CSLI Publications, 113⫺130.
Ferrara, Kathleen/Bell, Barbara
1995 Sociolinguistic Variation and Discourse Function of Constructed Dialogue Introdu-
cers: The Case of be + like. In: American Speech 70(3), 265⫺290.
Fischer, Susan D./Siple, Patricia (eds.)
1990 Theoretical Issues in Sign Language Research, Volume 1: Linguistics. Chicago: The
University of Chicago Press.
Friedman, Lynn
1975 Space, Time, and Person Reference in American Sign Language. In: Language 51,
940⫺961.
Graumann, Carl F.
2002 Explicit and Implicit Perspectivity. In: Graumann, Carl F./Kallmeyer, Werner (eds.),
Perspective and Perspectivation in Discourse. Amsterdam: Benjamins, 25⫺39.
Günthner, Susanne
1999 Polyphony and the ‘Layering of Voices’ in Reported Dialogues: An Analysis of the
Use of Prosodic Devices in Everyday Reported Speech. In: Journal of Pragmatics
31, 685⫺708.
Herrmann, Annika/Steinbach, Markus
2012 Quotation in Sign Languages ⫺ A Visible Context Shift. In: Alphen, Ingrid van/
Buchstaller, Isabelle (eds.), Quotatives: Cross-linguistic and Cross-disciplinary Per-
spectives. Amsterdam: Benjamins, 203⫺228.
Hyman, Larry/Comrie, Bernard
1981 Logophoric Reference in Gokana. In: Journal of African Languages and Linguistics
3, 19⫺37.
Janzen, Terry
2004 Space Rotation, Perspective Shift, and Verb Morphology in ASL. In: Cognitive
Linguistics 15(2), 149⫺174.
Kaplan, David
1989 Demonstratives. In: Almog, Joseph/Perry, John/Wettstein, Howard (eds.), Themes
from Kaplan. Oxford: Oxford University Press, 481⫺563.
Kegl, Judy
1986 Clitics in American Sign Language. In: Borer, Hagit (ed.), Syntax and Semantics,
Volume 19: The Syntax of Pronominal Clitics. New York: Academic Press, 285⫺365.
Koopman, Hilda/Sportiche, Dominique
1989 Pronouns, Logical Variables, and Logophoricity in Abe. In: Linguistic Inquiry 20,
555⫺588.
Lee, Robert G./Neidle, Carol/MacLaughlin, Dawn/Bahan, Benjamin/Kegl, Judy
1997 Role Shift in ASL: A Syntactic Look at Direct Speech. In: Neidle, Carol/MacLaugh-
lin, Dawn/Lee, Robert G. (eds.), Syntactic Structure and Discourse Function: An
Examination of Two Constructions in American Sign Language. Manuscript, Ameri-
can Sign Language Linguistic Research Project. Boston, MA: Boston University,
24⫺45.
Liddell, Scott K.
1995 Real, Surrogate, and Token Space: Grammatical Consequences in ASL. In: Emmo-
rey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence
Erlbaum Associates, 19⫺41.
Liddell, Scott K.
1998 Grounded Blends, Gestures, and Conceptual Shifts. In: Cognitive Linguistics 9,
283⫺314.
Liddell, Scott K./Metzger, Melanie
1998 Gesture in Sign Language Discourse. In: Journal of Pragmatics 30, 657⫺697.
Lillo-Martin, Diane
1995 The Point of View Predicate in American Sign Language. In: Emmorey, Karen/
Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum
Associates, 155⫺170.
386 III. Syntax

Mandel, Mark
1977 Iconic Devices in American Sign Language. In: Friedman, Lynn A. (ed.), On the
Other Hand: New Perspectives on American Sign Language. New York: Academic
Press, 57⫺107.
Meier, Richard P.
1990 Person Deixis in American Sign Language. In: Fischer, Susan D./Siple, Patricia (eds.),
Theoretical Issues in Sign Language Research, Volume 1: Linguistics. Chicago: The
University of Chicago Press, 175⫺190.
Metzger, Melanie
1995 Constructed Dialogue and Constructed Action in American Sign Language. In:
Lucas, Ceil (ed.), Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet
University Press, 255⫺271.
Miller, Jim/Weinert, Regina
1995 The Function of LIKE in Dialogue. In: Journal of Pragmatics 23, 365⫺393.
Morgan, Gary
1999 Event Packaging in British Sign Language Discourse. In: Winston, Elizabeth (ed.),
Story Telling & Conversation: Discourse in Deaf Communities. Washington, DC:
Gallaudet University Press, 27⫺58.
Nilsson, Anna-Lena
2004 Form and Discourse Function of the Pointing toward the Chest in Swedish Sign
Language. In: Sign Language & Linguistics 7(1), 3⫺30.
Padden, Carol
1986 Verbs and Role-Shifting in American Sign Language. In: Padden, Carol (ed.), Pro-
ceedings of the Fourth National Symposium on Sign Language Research and Teaching.
Silver Spring, MD: National Association of the Deaf, 44⫺57.
Pfau, Roland/Quer, Josep
2010 Nonmanuals: Their Prosodic and Grammatical Roles. In: Brentari, Diane (ed.), Sign
Languages. (Cambridge Language Surveys.) Cambridge: Cambridge University Press,
381⫺402.
Poulin, Christine/Miller, Christopher
1995 On Narrative Discourse and Point of View in Quebec Sign Language. In: Emmorey,
Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence
Erlbaum Associates, 117⫺131.
Pyers, Jennie/Senghas, Ann
2007 Reported Action in Nicaraguan and American Sign Languages: Emerging Versus
Established Systems. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visi-
ble Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de
Gruyter, 279⫺302.
Quer, Josep
2005 Context Shift and Indexical Variables in Sign Languages. In: Georgala, Effi/Howell,
Jonathan (eds.), Proceedings from Semantics and Linguistic Theory 15. Ithaca, NY:
CLC Publications, 152⫺168.
Quer, Josep
2011 Reporting and Quoting in Signed Discourse. In: Brendel, Elke/Meibauer, Jörg/Stein-
bach, Markus (eds.), Understanding Quotation. Berlin: Mouton de Gruyter, 277⫺302.
Quer, Josep/Frigola, Santiago
2006 The Workings of Indexicals in Role Shift Structures in Catalan Sign Language (LSC).
Actes del 7è Congrés de Lingüística General, Universitat de Barcelona. CD-ROM.
Quine, Willard V. O.
1960 Word and Object. Cambridge, MA: MIT Press.
Quinto-Pozos, David
2007 Can Constructed Action be Considered Obligatory? In: Lingua 117(7), 1285⫺1314.
Romaine, Suzanne/Lange, Deborah
1991 The Use of Like as a Marker of Reported Speech and Thought: A Case of Grammat-
icalization in Progress. In: American Speech 66, 227⫺279.
Schlenker, Philippe
2003 A Plea for Monsters. In: Linguistics & Philosophy 26, 29⫺120.
Sharvit, Yael
2008 The Puzzle of Free Indirect Discourse. In: Linguistics & Philosophy 31, 351⫺395.
Shepard-Kegl, Judy
1985 Locative Relations in ASL Word Formation, Syntax and Discourse. PhD Disserta-
tion, MIT.
Streeck, Jürgen
2002 Grammars, Words, and Embodied Meanings: On the Uses and Evolution of So and
Like. In: Journal of Communication 52(3), 581⫺596.
Sutton-Spence, Rachel/Woll, Bencie
1998 The Linguistics of British Sign Language. Cambridge: Cambridge University Press.
Tannen, Deborah
1989 Talking Voices: Repetition, Dialogue, and Imagery in Conversational Discourse. Cam-
bridge: Cambridge University Press.
Underhill, Robert
1988 Like is, Like, Focus. In: American Speech 63, 234⫺246.
Vries, Mark de
2008 The Representation of Language within Language: A Syntactico-Pragmatic Typology
of Direct Speech. In: Studia Linguistica 62, 39⫺77.
Winston, Elizabeth A.
1991 Spatial Referencing and Cohesion in an American Sign Language Text. In: Sign
Language Studies 73, 397⫺410.
Zucchi, Alessandro
2004 Monsters in the Visual Mode? Manuscript, Università degli Studi di Milano.
Diane Lillo-Martin, Storrs, Connecticut (USA)
IV. Semantics and pragmatics

18. Iconicity and metaphor
1. Introduction
2. Iconicity in linguistic theory
3. Examination of linguistic iconicity
4. Relevance of iconicity to sign language use
5. Conclusion
6. Literature
Abstract
Iconicity, or form-meaning resemblance, is a common motivating principle for linguistic
items in sign and spoken languages. The combination of iconicity with metaphor and
metonymy allows for iconic representation of abstract concepts. Sign languages have
more iconic items than spoken languages because the resources of sign languages lend
themselves to presenting visual, spatial, and motor images, whereas the resources of
spoken languages only lend themselves to presenting auditory images. While some iconic-
ity is lost as languages change over time, other types of iconic forms remain.
Despite its pervasiveness in sign languages, iconicity seems to play no role in acquisi-
tion, recall, or recognition of lexical signs in daily use. It is important, however, for
the use of key linguistic systems for description of spatial relationships (i.e., classifier
constructions and possibly pronoun systems). Moreover, language users are able to ex-
ploit perceived iconicity spontaneously in language play and poetic usage.
1. Introduction
It has long been noticed that in some cases, there is a resemblance between a concept
and the word or sign a community uses to describe it; this resemblance is known as
iconicity. For example, Australian Sign Language (Auslan), Sign Language of the Neth-
erlands (NGT), South African Sign Language (SASL), South Korean Sign Language
(SKSL), and other sign languages use a form similar to that shown in Figure 18.1 to
represent the concept ‘book’ (Rosenstock 2004). The two flat hands with the palms
facing upwards and touching each other bear a resemblance to a prototypical book.
Iconicity motivates but does not determine the form of iconic signs. For example,
Chinese Sign Language (CSL), Danish Sign Language (DSL), and American Sign Lan-
guage (ASL) all have iconic signs for the concept ‘tree’, but each one is different
(Klima/Bellugi 1979).

Fig. 18.1: book in several sign languages
Though iconic linguistic items and grammatical structures are common in both spo-
ken and sign languages, their role in linguistic theory and in the language user’s mind/
brain has long been debated. In section 2 below, we will briefly cover the history of
linguistic treatments of iconicity. Section 3 gives an overview of lexical, morphological,
and syntactic iconicity in sign languages, with a few spoken language examples for
comparison; and section 4 treats the relevance of iconicity to daily language use and
historical change. As we shall see, iconicity is pervasive in human languages. While it
appears to play little or no role in daily use of lexical signs and words, it is crucial to
the use of certain spatially based linguistic structures, and may be freely exploited for
spontaneous language play.
2. Iconicity in linguistic theory
The simple definition of iconicity is ‘signs that look like what they mean’. In this sec-
tion, we shall see that this definition is not adequate, and modify it to include cultural
and conceptual factors. We will also trace the history of linguists’ attitudes toward
iconicity, noting that an increasing sophistication in linguistic definitions of iconicity has
paralleled an increasing acceptance of iconicity in linguistic theory and sign language
research. (Note that the role of iconicity in phonological theory is not addressed in
this chapter; for discussion see van der Kooij (2002) and chapter 3, Phonology.)
2.1. ‘Transparency’ is not an adequate measure of iconicity
Given the simple definition of iconicity as ‘form-meaning resemblance’, we might ex-
pect that we could use ‘guessability’ (also called transparency) as a measure of a sign’s
iconicity ⫺ after all, if an iconic sign looks like what it means, a naïve observer ought
to be able to figure out the meaning. On the other hand, several researchers found
that non-signers had difficulty guessing the meaning of ASL iconic signs from their
forms (Hoemann 1975; Klima/Bellugi 1979), even though many were clearly iconic in
that, once the meaning was known, a connection could be seen between form and
meaning. This result indicated that fluent signers have to know the meaning of the sign
beforehand, and do not simply deduce the meaning from its form.
Pizzuto and Volterra (2000) studied the interaction between culture, conventionali-
zation, and iconicity by testing the ability of different types of naïve subjects to guess
the meanings of signs from Italian Sign Language (LIS). They found strong culture-
based variation: some signs’ meanings were easily guessed by non-Italian non-signers;
some were more transparent to non-Italian Deaf signers; and others were easier for
Italian non-signers to guess. That is, some transparency seemed to be universal, some
seemed linked to the experience of Deafness and signing, and some seemed to have a
basis in Italian culture.
In interpreting these results, we can see the need for a definition of iconicity that
takes culture and conceptualization into account. Iconicity is not an objective relation-
ship between image and referents. Rather, it is a relationship between our mental
models of image and referents (Taub 2001). These models are partially motivated by
experiences common to all humans, and partially by experiences particular to specific
cultures and societies.
2.2. Cultural/conceptual definition of iconicity
First, consider the notion of ‘resemblance’ between a linguistic item’s form and its
meaning. Resemblance is a human-defined, interactional property based on our ability
to create conceptual mappings (Gentner/Markman 1997). We feel that two things re-
semble each other when we can establish a set of correspondences (or mapping) be-
tween our image of one and our image of the other. To be more precise, then, in
linguistic iconicity there is a mapping between the phonetic form (sound sequence,
handshape or movement, temporal pattern) and some mental image associated with
the referent. As noted above, these associations are conceptual in nature and often
vary by culture.
To illustrate this point, consider Figure 18.2, which presents schematic images of
human legs and the forefinger and middle finger extended from a fist. We feel that the
two images resemble each other because we set up a mapping between the parts of
each image. Once we have established this mapping, we can ‘blend’ the two images
(Fauconnier 1997; cf. Liddell 2003) to create a composite structure: an iconic symbol
whose form resembles an aspect of its meaning. A number of sign languages have
used this particular V-handshape (W) to mean 'two-legged entity'. This form/meaning
package is thus an iconic item in those sign languages.

Fig. 18.2: Structure-preserving correspondences between a) human legs and b) extended index
and middle fingers.
Iconic items, though motivated by resemblance to a referent image, are not univer-
sal. In our example, the human body has been distilled down to a schematic image of
a figure with two downward-pointing appendages. Other sign languages, though they
seem to work from the same prototypical image of a human body, have chosen to
represent different details: sometimes the head and torso, sometimes the legs, and
sometimes both receive special attention in iconic representation. The index finger
extended upward from a fist, the thumb extended upward from a fist, and the thumb
extended upward with the little finger extended downward, are all phonetic forms used
in sign languages to represent the human body.
This chapter will distinguish between plain iconicity and extensions of iconicity via
metaphor or other conceptual associations. In iconic items, some aspect of the item’s
phonetic form (shape, sound, temporal structure, etc.) resembles a physical referent.
That is, a linguistic item which involves only iconicity can only represent a concrete
item that we can perceive. If a form has an abstract meaning, yet appears to give an
iconic depiction of some concrete image, that case involves iconicity linked with meta-
phor or metonymy.
Thus, the ASL sign drill (Figure 18.3), whose form resembles a drill penetrating a
wall, is purely iconic: its form directly resembles its meaning.
Fig. 18.3: The ASL sign drill

Fig. 18.4: The ASL sign think-penetrate
On the other hand, there is more than just iconicity in signs such as ASL think-
penetrate (Figure 18.4), whose form resembles an object emerging from the head (@ -
handshape) and piercing through a barrier (v-handshape). think-penetrate, which can
be translated as ‘to finally get the point’, has a non-concrete meaning.
The image of an object penetrating a barrier is used to evoke the meaning of effort-
ful but ultimately successful communication. This use of a concrete image to describe
an abstract concept is an instance of conceptual metaphor (Lakoff/Johnson 1980), and
think-penetrate is thus metaphorical as well as iconic (see section 3.5 for more detail).
2.3. History of attitudes toward iconicity
There has been a long history of minimizing and dismissing iconicity in language, start-
ing with de Saussure’s (1983 [1916]) doctrine of the ‘arbitrariness of the sign’, which
states that there is no natural connection between a concept and the word used to
represent it. De Saussure’s statement was aimed at countering a naïve view of iconicity,
one that would attempt to derive the bulk of all languages’ vocabularies from iconic
origins (i.e., even words like English ‘cat’, ‘dog’, and ‘girl’). But for years, it was used
to dismiss discussions of any iconic aspects of language.
The rise of functionalist and cognitivist schools of linguistics, with their interest in
conceptual motivation, allowed a renewal of attention to iconicity in spoken languages.
Studies of ‘sound symbolism’ (e.g., Hinton/Nichols/Ohala 1994), that is, cases in which
the sound of a word resembles the sound of its referent, showed that onomatopoetic
words are motivated but systematic and language-specific: many spoken languages
have a subsystem within which words may resemble their meanings yet conform to the
language’s phonological constraints (Rhodes/Lawler 1981; Rhodes 1994). On a syntac-
tic or morphological level (e.g., Haiman 1985), the order of words in a sentence or the
order of morphemes in a polysynthetic word was often found to be iconic for temporal
order of events or degree of perceived ‘conceptual closeness’ (a metaphorical use of
iconicity).
Sign linguists, unlike spoken language linguists, never had the option of ignoring
iconicity; iconicity is too pervasive in sign languages, and even a non-signing observer
can immediately notice the resemblance between some signs and their meanings. The
earliest attitude toward sign language iconicity (and one that many non-linguists still
hold) was that sign languages were simply a kind of pantomime, a picture language,
with only iconicity and no true linguistic structure (Lane 1992). Over the years, sign
linguists have had to work hard to fight the entrenched myth of sign languages as pan-
tomime.
The first modern wave of sign language linguistics took two basic approaches to
iconicity: strongly arguing against its presence or importance, with the goal of proving
sign languages to be true languages (e.g., Hoemann 1975; Frishberg 1979; Supalla 1978,
1986, 1990); and diving into descriptions of its various manifestations, intrigued by the
differences between sign and spoken languages (e.g., Mandel 1977; DeMatteo 1977).
Gradually, research (e.g., Boyes-Braem 1981; Fischer 1974; McDonald 1982; Supalla
1978; Wilbur 1979) began to establish that a linguistic system constrained sign language
iconicity, even the most iconic and seemingly variable signs that came to be known as
classifiers (see chapter 8). For example, in ASL, one kind of circular handshape (the
M -handshape) is consistently used to trace the outlines of thin cylinders; other shapes
are not grammatical. Without understanding the system, one cannot know the gram-
matically correct way of describing a scene with classifiers; one can only recognize that
correct ways are iconic (a subset of the myriad possible iconic ways). These researchers
argued against focusing on signs’ iconicity; although many signs and linguistic subsys-
tems are clearly motivated by iconicity, linguists would do better to spend their energy
on figuring out the rules for grammatically-acceptable forms.
Klima and Bellugi (1979) set forth a measured compromise between the iconicity
enthusiasts and detractors. They affirmed the presence of iconicity in ASL on many
levels, but noted that it is highly constrained in a number of ways. The iconicity is
conventionally established by the language, and not usually invented on the spot; and
iconic signs use only the permitted forms of the sign language. Moreover, iconicity
appears not to influence on-line processing of signing; it is ‘translucent’, not ‘transpar-
ent’, in that one cannot reliably guess the meaning of an iconic sign unless one knows
the sign language already. To use their phrase, iconicity in sign languages is sub-
merged ⫺ but always available to be brought to the surface and manipulated.
Though Klima and Bellugi’s view has held up remarkably well over the years, recent
research has identified a few areas in which signers seem to draw on iconicity in every-
day language. We will discuss this research in section 4 below.
3. Examination of linguistic iconicity
We will now look in more detail at the types of iconic structures found in languages.
Our focus will be sign language iconicity; spoken language iconicity will be touched on
for comparison (also see Perniss/Thompson/Vigliocco (2010) for a recent discussion of
the role of iconicity in sign and spoken languages).
3.1. Comparing iconic gestures and iconic signs
People use iconic representations in many communicative situations, from pictorial
symbols to spontaneous gestures to fully conventionalized linguistic signs and words.
In this section, we will compare iconic spontaneous gestures to iconic conventional
linguistic items.
Scientific research on gestures has been expanding greatly in recent years (cf.
Kendon 1988; McNeill 1992; see also chapter 27). It is well established, for example,
that gestures accompanying speech differ in specific ways from gestures that occur
alone and carry the entire communicative message. Some gestures (called ‘emblems’
by Kendon 1988) are fully conventionalized, such as the ‘thumbs-up’ gesture indicating
approval; others are created spontaneously during a communicative event.
Figure 18.5 shows an example of a spontaneous iconic gesture. The woman is telling
a story about a character who peeled a banana; as she says those words, her left hand
configures as if she were holding the banana, and she moves her right hand downward
along the left three times as if she were peeling the banana herself.
Fig. 18.5: Iconic gesture accompanying 'peels the banana'
This iconic gesture is embedded in a particular discourse event; it could not be
interpreted if removed from its context. The woman’s gesture represents a specific
action done by a specific referent ⫺ the character’s peeling of a banana.
By comparison, Figure 18.6 shows an iconic sign, the ASL sign banana. The domi-
nant closed-X-handshape moves down the upright non-dominant @ -handshape twice,
with a shift of orientation between the movements.
Fig. 18.6: The ASL sign banana
Though the sign is strikingly similar to the gesture, it is fully conventional and
comprehensible in the absence of context. It represents a concept (banana, a type of
fruit), not a specific action or image (a particular person peeling a banana).
The gesture and the sign are similar in that they both iconically present an image
of a banana being peeled. They are both based on a mapping between two conceptual
structures: an imagined action and a mental model of the communicator’s body and
surrounding space. These two structures are superimposed to create a composite or
‘blended’ structure (cf. Liddell 2003): the iconic sign or gesture. The differences be-
tween the gesture and the sign can be described in terms of differences between the
two input structures and the resulting composite. It can also be described in terms of
the producer’s intention ⫺ using the terms of Cuxac and Sallandre (2007), the ges-
turer’s intent is illustrative (i.e., to show an image), and the signer’s intent is non-
illustrative (i.e., to refer to a concept).
For spontaneous iconic gestures, the first structure is a specific event that the ges-
turer is imagining, and the second structure is a mental model of the space around the
gesturer, including hands, face, and body. People who look at the gesture knowing that
it is a composite of these two structures can directly interpret the gesturer’s actions as
the actions taking place in the imagined event (Liddell 2003).
Recent research (McNeill 1992; Morford et al. 1995; Aronoff et al. 2003) suggests
that as iconic gestures are repeated, they may shift to become more like conventional
linguistic items in the following ways: the gesturer’s action becomes a regular phonetic
form; the imagined event becomes a schematic image no longer grounded in a specific
imagined time or place; and the meaning of the composite becomes memorized and
automatic, no longer created on the spot via analogy between form and image. Though
the ‘peel banana’ gesture in Figure 18.5 is not the direct ancestor of the ASL sign
banana, we can surmise that it resembles that ancestor and can serve to illustrate
these changes.
As the gesturer’s action becomes a sign language phonetic form, it conventionalizes
and can no longer be freely modified. The action often reduces in size or length during
this process, and may shift in other ways to fit the sign language’s phonological and
morphemic system. Aronoff et al. (2003) refer to this as taking on ‘prosodic wordhood’.
In our example, we see that the ‘peel banana’ gesture involves three gestural strokes,
whereas the ASL sign has two strokes or syllables ⫺ a typical prosodic structure for
ASL nouns. As gestures become signs, a shift toward representing objects by reference
to their shapes rather than how they are manipulated has also been observed (cf.
Senghas (1995) for the creolization of Nicaraguan Sign Language; also see chapter 36,
Language Emergence and Creolization). In our gestural example, the non-dominant
hand is a fist handshape, demonstrating how the banana is held; in ASL banana, the
non-dominant handshape is an extended index finger, reflecting the shape of the ba-
nana.
Our example also illustrates the shift from an imagined scene to a stylized image,
in tandem with the shift from illustrative to non-illustrative intent. In ASL banana,
though an image of peeling a banana is presented, it is not intended to illustrate a
specific person’s action. Moreover, the sign does not denote ‘peeling a banana’; rather,
it denotes the concept ‘banana’ itself. As we shall see, the images presented by iconic
signs can have a wide range of types of associations with the concepts denoted by
the signs.
This discussion applies to iconicity in the oral-aural modality as well as the visual-
gestural modality. Vocal imitations are iconic in that the vocal sounds resemble the
sounds they represent; spontaneous vocal imitations may conventionalize into iconic
spoken-language words that ‘sound like’ what they mean (Rhodes 1994). This type of
iconicity is usually called onomatopoeia. Other forms of spoken-language iconicity ex-
ist; see Hinton, Nichols and Ohala (1994) for more information.
To summarize: iconic spontaneous gestures and iconic signs are similar in that both
involve structure-preserving mappings between form and referent. The crucial differen-
ces are that iconic gestures are not bound by linguistic constraints on form, tend to
represent a specific action at a specific time and place, and are interpreted as meaning-
ful via an on-line conceptual blending process. In contrast, iconic signs obey the phono-
tactic constraints of the respective sign language, denote a concept rather than a spe-
cific event, and have a directly accessible, memorized meaning.
3.2. Classifiers: illustrative intent with some fixed components
The previous section discussed how a spontaneous iconic gesture changes as it becomes
a conventionally established or ‘fixed’ iconic sign. We may add to this discussion the
fact that many iconic linguistic structures in sign languages are not fully fixed. In partic-
ular, the many types of spatially descriptive structures mostly known as classifiers (see
chapter 8) are highly variable and involve strong iconicity ⫺ spatial characteristics of
the structure (e.g., motion, location, handshape) are used to represent spatial character-
istics of the event being described. Just as in spontaneous iconic gesture, the intent of
the signer in these cases is illustrative (i.e., to ‘show’ a particular event or image; see
Cuxac/Sallandre (2007) and Liddell (2003) for different analyses). However, classifiers
differ from spontaneous gesture in that while certain components of these structures
may vary to suit the needs of illustration, other components are fixed (Emmorey/
Herzig 2003; Schembri/Jones/Burnham 2005; see also sections 2.3 above and 4.1 below).
These fixed components (usually the handshapes) are often iconic as well, but may not
be freely varied by the signer to represent aspects of the scene.
Thus, classifier constructions are like spontaneous iconic gestures in that they are
intended to ‘show’ a specific mental image; some of their components, however, are
conventionally established and not variable.
3.3. Types of form/image associations
In cataloguing types of iconicity, we will look at the two main associations in iconic
signs: the perceived similarity between the phonetic form and the mental image, and
the association between the mental image and the denoted concept (see also Pietran-
drea (2002) for a slightly different analysis). We will first examine types of associations
between form and image. Note that both illustrative and non-illustrative structures
draw on these associations (see also Liddell (2003) and Cuxac/Sallandre (2007) for
slightly different taxonomies of these associations).
There are many typical ways in which a signer’s hands and body can be seen as
similar to a visual or motor image, giving rise to iconic representations. Hands and
fingers have overall shapes and can be seen as independent moving objects. They can
also trace out paths in space that can be understood as the contour of an object. Human
bodies can be seen as representing other human bodies or even animal bodies in shape,
movement, and function: we can easily recognize body movements that go with particu-
lar activities. Sign languages tend to use most of these types of resemblances in con-
structing iconic linguistic items. This section will demonstrate a few of these form/image
resemblances, using examples from lexical signs, classifiers, and grammatical processes.
The first type of form/image association I will call a full-size mapping. In this case,
the communicator’s hands, face, and upper body are fully blended with an image of
another human (or sometimes an animal). In spontaneous full-size mappings, the com-
municator can be thought of as ‘playing a character’ in an imagined scene. He or she
can act out the character’s actions, speak or sign the character’s communications, show
the character’s emotions, and indicate what the character is looking at.
When full-size mappings give rise to lexical items, they tend to denote concepts that
can be associated with particular actions. Often, signs denoting activities will be of this
type; for example, the sign for write in ASL, SKSL, NGT, Auslan, and many other
sign languages (Figure 18.7) is based on an image of a person holding a pen and moving
it across paper, and ASL karate is based on stylized karate movements. In addition,
categories of animals or people that engage in characteristic actions can be of this type;
e.g., ASL monkey is based on an image of a monkey scratching its sides.

Fig. 18.7: The Auslan sign write
Full-size mappings also play a part in the widespread sign-language phenomenon
known as ‘role shift’ or ‘referential shift’. In role shift, the communicator takes on the
roles of several different characters. Sign languages develop discourse tools to show
where the signer takes up and drops each role, including gaze direction, body posture,
and facial expressions (see also chapter 17, Utterance Reports and Constructed Ac-
tion).
Another major mode of iconic representation might be called hand-size mappings.
In these, the hands or fingers represent independent entities, generally at reduced size.
The hands and fingers can represent a character or animate being; part of a being ⫺
head, legs, feet, ears, etc.; or an inanimate object.
Because hands move freely, but are small, allowing a ‘far-off’ perspective, hand-size
mappings are ideal for indicating both the entity’s shape and its overall path through
space. In section 2.2, we have already touched on a few examples of classifier hand-
shapes representing humans; for additional examples of classifiers involving hand-size
mappings, see chapter 8. Lexicalized hand-size mappings can take on a wide range of
meanings associated with entities and their actions (see next section).
A slight variation of this mode might be called contour mappings, in which the
hands represent the outline or surface contour of some entity. It is common to have
classifier forms of this sort; for example, ASL has a set of handshapes for representing
cylinders of varying depth (one, two, or four fingers extended) and diameter (M, or
closed circle, for narrow cylinders; :, or open circle, for wide ones; for the widest
cylinders, both hands are used with :-handshapes). These forms easily lexicalize into
signs representing associated concepts; ASL plate and picture-frame are of this type,
and so is SKSL house (Figure 18.8a), which is based on an image of a typical house
with a pointed roof.

Fig. 18.8: a) house in SKSL, with 'contour' mapping vs. b) house in NGT, with 'tracing' mapping
In a third major mode of iconic representation, here called tracing mappings, the
signer’s hands trace the outline of some entity. Unlike the first two modes, in which
the signer’s movement represents an entity’s movement in the imagined event or im-
age, here movement is interpreted as the signer’s ‘sketching’ motion. This mode draws
on the basic human perceptual skill of tracking a moving object and imagining its path
as a whole. Most sign languages have sets of classifier handshapes used for tracing the
outlines of objects ⫺ in ASL, examples include the extended index finger for tracing
lines, the flat [ -handshape for tracing surfaces, and the curved M - and :-handshapes
for tracing cylinders. Lexicalized examples include ASL diploma, based on the image
of a cylindrical roll of paper, and NGT house (Figure 18.8b).
Many more types of iconic form/image relationships are possible, including: number
of fingers for number of entities; manner of movement for manner of action; duration
of gesture for duration of event; and repetition of gesture for repetition of event. A
detailed description is given in Taub (2001, 5).
For comparison, the spoken modality is suited for iconic representations of sound
images, via what we might call ‘sound-for-sound’ iconicity: spoken languages have con-
ventional ways of choosing speech sounds to fit the pieces of an auditory image. The
resulting words can be treated as normal nouns and verbs, as they are in English, or
they can be separated off into a special adverb-like class (sometimes called ideo-
phones), as in many African and Asian languages (e.g., Alpher 1994).
3.4. Types of concept/image associations
We turn now to types of relationships between an iconic linguistic item’s image and
the associated concept. Note that this section applies only to conventional or ‘frozen’
structures, where the signer is ‘saying without showing’ (i.e., non-illustrative intent in
Cuxac/Sallandre’s terms) ⫺ if the intent were illustrative, the signer would be ‘showing’
an image rather than referencing a concept related to that image.
It is a common misimpression that only concrete, simple concepts can be repre-
sented by iconic linguistic items. On the contrary, iconic items represent a wide range
of concepts ⫺ the only constraint is that there must be some relationship between the
iconic image and the concept signified. Since we are embodied, highly visual creatures,
most concepts have some relation to a visual, gestural, or motor image. Thus we see a
wide variety of concept/image associations in sign languages, with their ability to give
iconic representation to these types of images.
One common pattern in sign languages is for parts to stand for wholes. If the con-
cept is a category of things that all have roughly the same shape, sometimes the se-
lected image is a memorable part of that shape. In many sign languages, this is a
common way to name types of animals. For example, the sign cat in ASL and British
Sign Language (BSL) consists of the M-shaped hand (index finger and thumb touch-
ing, other fingers extended) brushing against the signer’s cheek; the thumb and index
finger touch the cheek, and the palm is directed forward. The image presented here is
of the cat’s whiskers, a well-known feature of a cat’s face.
If the concept is a category of physical objects that come in many sizes and shapes,
sometimes the selected image is a prototypical member of the category. This is the case
for the SKSL and NGT signs for house (Figure 18.8), and the various signs for tree
cited in section 1: houses and trees come in many sizes and shapes, but the image in
both signs is of a prototypical member of the category. For house, the prototype has
a pointed roof and straight walls; for tree, the prototype grows straight out of the
ground, with a large system of branches above a relatively extended trunk.
Categories consisting of both physical and non-physical events can also be repre-
sented by an image of a prototypical case, if the prototype is physical. For example,
the ASL verb give uses the prototypical image of handing an object to a person, even
though give does not necessarily entail physically handling an object; give can involve
change of possession and abstract entities as well as movement and manipulation of
physical objects (Wilcox 1998).
In many cases, the image chosen for a concept will be of a typical body movement
or action associated with the concept. Signs denoting various sports are often of this
type, as noted in section 3.3 above. Body movements can also name an object that is
associated with the movement; for example, car in ASL and BSL uses an image of a
person turning a steering wheel (again encoded with fist-shaped instrument classifiers).
In some signs, an entire scenario involving the referent as well as other entities is
given representation. ASL examples include gasoline, showing gas pouring into a car’s
tank, and key, showing a key turning in a lock. Auslan write (Figure 18.7) is also of
this type, showing the signer moving a pen across paper.
Finally, if some physical object is strongly associated with the concept, then the
image of that object may be used to represent the concept. For example, in many sign
languages, the sign for olympics represents the linked-circles Olympics logo, as illus-
trated by signs from three different sign languages in Figure 18.9.
The final type of concept/image association in sign languages is complex enough to
merit its own subsection (see section 3.5 below): metaphorical iconic signs, or those
which name an abstract concept using a structured set of correspondences between the
abstract concept and some physical concept.
Fig. 18.9: The sign for olympics in a) NGT, b) Auslan, and c) SKSL

Though iconic images in spoken languages are limited to sound images, temporal
images, and quoted speech, the types of concepts given iconic representation are not
so limited. This is because any concept that is somehow associated with these kinds of
sensory images can enter into the analogue-building process.
Thus, a concept such as ‘the destructive impact of one thing into another’ can be
named by the iconic English word crash, an example of onomatopoeia. This concept
is not primarily an auditory one, but such impacts nearly always have a characteristic
sound image associated with them. It is that sound image that receives iconic represen-
tation as crash. Then the iconic word is used to talk about the concept as a whole.
Even abstract concepts that can in some way be associated with a sound image can
thus be represented iconically in spoken languages (cf. Oswalt 1994) ⫺ for example, a
stock market crash can be metaphorically associated with the sort of rapid descent and
impact that could make a sound of this sort.
It turns out, of course, that the vast majority of concepts are not closely enough
associated with a sound image. For this and other reasons, iconicity is less common in
spoken than in sign languages. Fewer concepts are appropriate for iconic representa-
tion in the spoken modality; and, as we saw in the previous section, there are far fewer
parameters that the spoken modality can exploit. The smaller amount of iconicity in
spoken languages, which has been attributed to the inferiority of iconic representations,
could just as well have been attributed to the inferiority of the spoken modality in
establishing iconic representations.

3.5. Iconicity linked with metaphor

Conceptual metaphor is the use of one domain of experience to describe or reason
about another domain of experience (Lakoff/Johnson 1980; Lakoff 1992). In spoken
languages, this often manifests as the systematic use of words from the first domain
(source) to describe entities in the second domain (target). For example, a phrase such
as ‘We need to dig deeper’ can mean ‘We need to think more intensely’ about some
topic.
In sign languages, however, the situation is somewhat different, due to the linkage
between metaphor and iconicity (Wilbur 1987; Wilcox 2000; Taub 2001). Here we see
metaphor at work within sign languages’ lexicons: vocabulary for abstract (target) do-
mains often consists of iconic representations of concrete (source-domain) entities.
Thus, for example, in the ASL verb analyze, movements of the bent-V handshapes
iconically show the process of digging deeper into some medium. In addition to
the lexicon, the iconic classifier systems used for describing movements, locations, and
shapes can be applied to the metaphorical description of abstract (non-physical) situa-
tions (see examples in Wilcox 2000); thus, this type of iconicity can be both illustrative
and non-illustrative. This linkage between metaphor and iconicity is possible but rare
in spoken languages; the pervasive iconicity of sign languages makes this phenomenon
much more common there. Conversely, metaphor without iconicity is rare in ASL (cf.
Wilbur 1990) and other sign languages (for the metaphorical use of ‘time-lines’ in sign
languages, see chapter 9, Tense, Aspect, and Modality).
As an example, let us consider the domain of communication (also see Wilcox 2000).
Many languages have a metaphor ‘communication is sending’ (e.g., Reddy 1979; Lak-
off/Johnson 1980) where successful communication is described as successfully sending
an object to another person. In ASL, a large set of lexical signs draw on this metaphor,
including signs glossed as inform, communicate, miss, communication-breakdown, it-
went-by-me, over-my-head, and others. Brennan (1990) has documented a large set of
signs in BSL that draw on the same metaphor as well. We shall see that these signs
involve two conceptual mappings: one between target and source conceptual domains,
and one between source-domain image and phonetic form (Taub 2001).
In the ASL sign think-penetrate (Figure 18.4 above), the dominant 1-handshape
(extended index finger) begins at the temple and travels toward the locus of the verb’s
object. On the way, it encounters the non-dominant hand in a flat B-handshape, palm
inward, but the index finger penetrates between the fingers of the flat hand. If this
sequence were to be interpreted as a classifier description, it would denote a long
thin object (the index finger or ‘1-CL’) emerging from the head, moving toward a
person, encountering a barrier, and penetrating it. Table 18.1 spells out this iconic
mapping between articulators and concrete domain.

Tab. 18.1: Iconic mapping for think-penetrate

ARTICULATORS                               SOURCE
1-CL (extended index finger)               an object
forehead                                   head
1-CL touches forehead                      object located in head
1-CL moves toward locus of addressee       sending an object to someone
non-dominant B-CL (flat hand)              barrier to object
1-CL inserted between fingers of B-CL      penetration of barrier
signer’s locus                             sender
addressee’s locus                          receiver

It is useful to contrast think-penetrate and ASL drill (Figure 18.3 above), a sign
derived from lexicalized classifiers. In drill, the dominant hand assumes an L-hand-
shape, with index finger and thumb extended; the non-dominant hand again forms a
flat B-handshape. The index finger of the L-hand penetrates between the fingers of
the B-hand. The image chosen to stand for the piece of equipment known in English
as a ‘drill’ is that of a long thin object (with a handle) penetrating a surface; the
L-hand, of course, iconically represents the long thin object (or drill), and the flat
hand represents the surface pierced by the drill. This is a case of pure iconicity. The
iconic mapping is given in Table 18.2.

Tab. 18.2: Iconic mapping for drill

ARTICULATORS                               SOURCE
dominant L-handshape                       long thin object with handle (in particular, a drill)
non-dominant B-CL (flat hand)              flat surface
L inserted between fingers of B-CL         penetration of surface

Unlike drill, think-penetrate does not describe a physical scene. Its actual mean-
ing can be translated as ‘to get one’s point across’ or ‘for someone to understand one’s
point’. When we consider as well signs such as i-inform-you, think-bounce, over-my-
head, and it-went-by-me, all of which resemble classifier descriptions of objects moving
to or from heads and pertain to communication of ideas, we have strong evidence for
a metaphorical mapping between the domains of sending objects and communicating
ideas. Thus, think-penetrate involves two mappings: an iconic mapping between artic-
ulators and source domain, and a metaphorical mapping between source and target do-
mains.

Tab. 18.3: Double mapping for think-penetrate

                                           Iconic Mapping                  Metaphorical Mapping
ARTICULATORS                               SOURCE                          TARGET
1-CL                                       an object                       an idea
forehead                                   head                            mind; locus of thought
1-CL touches forehead                      object located in head          idea understood by originator
1-CL moves toward locus of addressee       sending an object to someone    communicating idea to someone
non-dominant B-CL                          barrier to object               difficulty in communication
1-CL inserted between fingers of B-CL      penetration of barrier          success in communication despite difficulty
signer’s locus                             sender                          originator of idea
addressee’s locus                          receiver                        person intended to learn idea

In Table 18.3, we can see how each articulatory element of think-penetrate corre-
sponds to an element of the domain of communication, via the double mapping. The
signer’s location corresponds to the communicator’s location; the index finger corre-
sponds to the information to be communicated; the movement of the index finger from
signer toward the syntactic object’s location in space corresponds to the communica-
tion of that information to an intended recipient; the flat hand represents a difficulty
in communication; and finally, penetration of the flat hand represents success in com-
munication despite the difficulty.
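
To make the double mapping concrete, the two mappings can be thought of as composable
functions: an iconic mapping from articulatory elements to source-domain elements, and
a metaphorical mapping from source-domain to target-domain elements. The following
sketch is a hypothetical formalization offered purely for illustration (the dictionary
entries restate the rows of Tables 18.1 and 18.3; the notation is not part of the
chapter’s own apparatus):

    # Illustrative sketch: the double mapping of THINK-PENETRATE as two
    # composable dictionaries (rows restated from Tables 18.1 and 18.3).

    # Iconic mapping: articulators -> source-domain elements
    iconic = {
        "1-CL (index finger)": "an object",
        "forehead": "head",
        "1-CL touches forehead": "object located in head",
        "1-CL moves toward addressee's locus": "sending an object to someone",
        "non-dominant B-CL (flat hand)": "barrier to object",
        "1-CL penetrates fingers of B-CL": "penetration of barrier",
    }

    # Metaphorical mapping: source-domain -> target-domain elements
    metaphorical = {
        "an object": "an idea",
        "head": "mind; locus of thought",
        "object located in head": "idea understood by originator",
        "sending an object to someone": "communicating idea to someone",
        "barrier to object": "difficulty in communication",
        "penetration of barrier": "success in communication despite difficulty",
    }

    # Composing the two yields the articulator -> target reading of the sign.
    double_mapping = {art: metaphorical[src] for art, src in iconic.items()}
    for art, target in double_mapping.items():
        print(f"{art:38s} -> {target}")

On this formalization, a purely iconic sign such as drill would use only the first
dictionary, while i-inform-you would share the second dictionary but pair it with a
different iconic one.
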
Signs that share a metaphorical source/target mapping need not share an iconic
source/articulators mapping. The classifier system of ASL provides several iconic ways
to describe the same physical situation, and all of these ways can be applied to the
description of a concrete source domain. For example, consider the sign i-inform-you,
where closed flat-O-handshapes begin at the signer’s forehead and move toward the
addressee’s location, simultaneously opening and spreading the fingers. This sign does
not have a physical articulator corresponding to the idea/object; instead, the flat-O
classifier handshapes iconically represent the handling of a flat object and the object
itself is inferred. Nevertheless, in both i-inform-you and think-penetrate, the moved
object (regardless of its representation) corresponds to the notion of an idea.
This suggests that the double-mapping model is a useful way to describe metaphori-
cal/iconic phenomena in sign languages: a single-mapping model, which describes signs
in terms of a direct mapping between articulators and an abstract conceptual domain,
would miss what think-penetrate and i-inform-you have in common (i.e., the source/
target mapping); it would also miss what think-penetrate and drill have in common
(i.e., the fact that the source/articulators mappings are much like the mappings used
by the sign language’s productive classifier forms).
We may note that metaphorical/iconic words and constructions also exist in spoken
languages, and can be handled with a double mapping and the analogue-building proc-
ess in the same way as metaphorical/iconic signs. Some examples of metaphorical icon-
icity in English include lengthening to represent emphasis (e.g., ‘a baaaad idea’; cf.
Okrent 2002, 187 f.), and temporal ordering to represent order of importance (e.g., topic/
comment structures such as ‘Pizza, I like’; cf. Haiman 1985).

3.6. Partially iconic structures: temporal iconicity

Drawing on the definition of iconicity as a structure-preserving mapping between form
and image associated with meaning, we find many lexical items, syntactic structures,
and other linguistic structures that are partially iconic. In these cases, only some aspects
of each sign are iconically motivated; thus, unlike the iconic items discussed above,
they do not present a single consistent iconic image.
We only have space to look at one type of partial iconicity: the case of temporal
iconicity, where morphological and syntactic structures whose temporal structure is
related to their meaning are superimposed on non-iconic lexical material. Other par-
tially iconic phenomena include: lexical items for which different aspects of the sign
are motivated by different iconic/metaphorical principles (Taub 2001, 7); sign language
pronoun systems, which are partially iconic and partially deictic (see chapter 11, Pro-
nouns); and metaphorical/iconic use of locations in signing space to convey notions of
relative power and affiliation (see chapter 19, Use of Sign Space). For the most part,
these phenomena are not consistent with illustrative intent.
Temporal iconicity is fairly common in both sign and spoken language temporal
aspect systems. One common example is the use of reduplication (i.e., the repetition
of phonetic material) in morphological structures denoting repetition over time (see,
e.g., Wilbur 2005). Many sign languages have a much more extensive use of iconicity
in their temporal aspect systems, in that the temporal structure of most aspectual inflec-
tions reflects the temporal structure of the event types they describe.
Consider, for example, the ASL protracted-inceptive (PI) inflection (Brentari 1996).
This inflection can occur on any telic verb; it denotes a delay between the onset of
the verb’s action and the accomplishment of that action ⫺ in effect, a ‘protracted
beginning’ of the action. PI’s phonetic form involves an extended hold at the verb’s
initial position, while either the fingers wiggle (if the handshape is open) or the
tongue waggles (if the handshape is more closed); after this hold, the verb’s motion
continues as normal.

Fig. 18.10: Structure-preserving correspondences between the temporal structure of a) a situation
where a person is delayed but eventually leaves and b) the sign leave inflected for PI.

Figure 18.10 (taken from Taub 2001) demonstrates this inflection with a specific
verb. Figure 18.10a shows a situation where PI is appropriate: a person who intends to
leave the house is temporarily delayed (perhaps by another person needing to talk);
eventually the person does leave. Figure 18.10b shows two phases of the ASL sign
leave inflected for PI: first the long initial hold, and then the verb’s normal movement.
It is easy to see the correspondences between the two temporal structures: a delay
in leaving (referent) is represented by a delay in the verb’s normal motion (form);
similarly, the eventual accomplishment of leaving (referent) is represented by the even-
tual performance of the verb’s normal motion (form).
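
The correspondence can be summarized as an alignment of two phase sequences, one for
the described situation and one for the inflected form. The following sketch is a toy
formalization of this structure-preserving mapping; the phase labels restate Figure
18.10, and everything else is invented for illustration:

    # Illustrative sketch: temporal iconicity as phase-by-phase alignment.
    # The n-th phase of the event corresponds to the n-th phase of the form,
    # so temporal order in the form mirrors temporal order in the event.

    event_phases = ["delay before leaving", "eventual act of leaving"]
    form_phases = ["extended initial hold (with wiggling fingers)",
                   "verb's normal movement"]

    for event, form in zip(event_phases, form_phases):
        print(f"{event} -> represented by: {form}")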

3.7. Are sign languages more iconic than spoken languages?

The well-known fact that sign languages have more iconicity than spoken languages
(see, e.g., Klima/Bellugi 1979) is easily explained by the conceptual mapping model we
have been examining. The potential for iconicity is far greater in the signed modality
for two reasons. First, we have more visual and motor images than sound images associ-
ated with concepts ⫺ for example, there is no characteristic sound for the category
table, yet there is a characteristic shape. Second, the signed modality, with its use of
body movements, facial expressions, hand and arm configurations, and space near the
signer, has a large number of possible ways to build linguistic analogues for mental
images. The spoken modality has little more than the ordering of sounds and the pitch
of the speaker’s voice. Thus, in creating iconic blends, sign languages have a greater
range of possibilities to draw on.
Given the abundance of iconic items in sign languages and their substantial presence
in spoken languages, it is plausible to claim that languages in fact draw on iconicity as
much as possible in the formation of new morphemes (cf. Armstrong 1988; Liddell
1992; Taub 2001; Perniss/Pfau/Steinbach 2007). Only the relative poverty of auditory
imagery in our experience, and the lack of precision in our auditory and vocal systems
(e.g., in creating and detecting localized sounds), has kept spoken languages from being
richly iconic.

4. Relevance of iconicity to sign language use


As we have seen, iconic linguistic items are conventional and language-specific; they
are motivated by their meaning but not predictable from it. Since iconic items seem to
originate in imitative gestures, iconicity is clearly a significant factor in sign creation.
As noted by Currie, Meier, and Walters (2002) and McKee and Kennedy (2000), iconic-
ity must be taken into account when calculating historical relationships among sign
languages, as a certain percentage of signs will be similar based on iconicity rather than
historical derivation (see chapter 38 for further discussion). Yet how relevant is iconic-
ity to daily use of sign languages?

4.1. Acquisition and routine language use

Psycholinguistic studies of iconicity’s relevance to acquisition and memory suggest that
in everyday use, signers are usually not conscious of a sign’s iconicity. For example,
Orlansky and Bonvillian (1984) found that the first signs learned by children do not
tend to be iconic, and Meier (1982, 1987) showed that iconic morphological structures
and personal pronouns are first used by children without regard to their iconicity (also
see chapter 28, Acquisition, and chapter 25, Language and Modality). Poizner, Bellugi,
and Tweney (1981) demonstrated that ASL signers’ ability to recall signs was not af-
fected by their iconicity. Bosworth and Emmorey (1999) showed that sign iconicity
plays no role in semantic priming (the ability to recognize a sign more quickly when a
semantically related sign is presented first as a ‘prime’). In addition, all conventional
signs are treated alike by grammatical rules, regardless of whether they are iconic or
not (Emmorey 2002). Thus, awareness of iconicity seems to be ‘optional’ in daily lan-
guage use of lexical signs.
On the other hand, the iconic use of signing space is crucial to spatial descriptions
and classifier constructions (Emmorey 2002; also see chapter 19, Use of Sign Space).
Schembri, Jones, and Burnham (2005) compared classifier descriptions in Auslan and
Taiwan Sign Language with non-signers’ gestured descriptions of the same scenes, and
found that while the classifier handshapes were language-specific, the movements and
locations of both sign languages matched almost completely with each other and with
the non-signers’ gestures. This suggests that classifier handshapes, whether iconic or
not, are fully conventional and rarely modified, while movements and locations are
produced on-line to match a conceptual model. Emmorey and Herzig’s (2003) study
of production, comprehension, and acceptability judgments of ASL classifier construc-
tions also supports this conclusion.
Theoretically, this is not surprising. Linguistic forms that depend on an active
‘blending’ of two conceptual spaces (Liddell 2003) or have illustrative intent (Cuxac/
Sallandre 2007) are exactly the sorts of forms that require on-line iconic manipulation.
Forms that are simply memorized or used non-illustratively would not require attention
to their iconic component.

4.2. Iconicity in sign language poetry

Signers often play with the iconicity of lexical signs (e.g., Klima/Bellugi 1979), showing
that it can be brought to awareness if desirable. Sign language poetry in particular
makes highly effective use of iconicity and metaphor in creating structured, artistic
language (see chapter 41).
Poets make art from language by creating patterns of meaning (e.g., repeated im-
ages or metaphors) and patterns of form (e.g., repetition of phonetic material). In
spoken languages, these levels are largely separate, but in sign languages, the two can
combine and overlap. That is, the poet’s concrete and metaphorical mental imagery
can receive direct visual representation through the language’s iconic lexical items and
grammatical inflections.
For example, the ASL poem ‘Circle of Life’ (Lentz 1995; analyzed in Taub 2001)
was composed for a wedding. One theme of this poem, eternity, is represented meta-
phorically by repeated circular motion, and by circles in general. The poem is full of
circular signs. A few (e.g., yes, envision, relationship) do not share the notion of
eternity, and simply ‘rhyme’ by having circular handshapes. Other signs have circular
handshapes or motions because they iconically depict the motion of the earth and sun
(year, sun-rise-and-set, world, world-turn), or the motion of clock hands (hour,
era); these concepts are strongly associated with time and eternity. The sign engage
depicts a ring sliding onto a finger, and the wedding ring is of course a conventional
symbol of eternity. Finally, ASL’s grammatical inflection for ‘continuation over time’ is
itself a circular movement superimposed upon a verb root, and this metaphorical/iconic
inflection appears throughout. Thus, circles function as both a ‘rhyme scheme’ and a
conceptual motif in this poem.

4.3. Historical change: loss and preservation of iconicity

Once a form/meaning pairing has been conventionally adopted as part of a language’s
lexicon or grammar, users seem to stop accessing its iconic origins on-line, and it may
undergo changes that make it less transparently iconic (cf. Klima/Bellugi 1979; Brennan
1990). One example is the ‘opaque’ ASL sign home, where a flat-O-shaped hand
touches the cheek first near the mouth and second near the ear; this sign developed
as a compound of the iconic signs eat (flat-O at the mouth) and sleep (spread hand’s
palm at the cheek, suggesting a pillow). For another example, Frishberg (1979) noted
that ASL signs tend to move from their original locations toward the center of signing
space. This process may make the sign easier to perceive, by moving it closer to where
the eyes fixate; but it would reduce a sign’s iconicity by moving it from the iconically
appropriate location.
These changes are not surprising, as we see the same effects for any sort of deriva-
tional morphology. Derived items of all sorts can take on semantic nuances not predict-
able from their parts. At that point, users of the item are clearly not re-deriving it
on-line each time they use it, but instead have given it some kind of independent
representation. Over time, any such items can become so remote from their deriva-
tional origins that typical users would not know how the item arose.
It is useful to note, in addition, that iconic items often resist regular changes that
affect all other items of the language (e.g., Hock 1986): some onomatopoetic spoken-
language words persist in their original forms despite regular sound change in the rest
of the language. Similarly, sign language classifier systems may shift over time, but
they maintain some core iconic aspects. For example, Morford, Singleton, and Goldin-
Meadow (1995) suggest that as homesign systems (see chapter 26) develop, classifier-
like gestures start as strict representations of an object’s shape, but later represent an
entire semantic category regardless of each member’s shape (e.g., all vehicles would
eventually get the same classifier, as in today’s ASL). But since classifiers would still
be chosen based on the shape of the category prototype, this does not remove all
iconicity from the system.
Other changes in iconic items that have been described as a ‘loss of iconicity’ could
be better classed as a shift in type of iconicity. For example, Senghas (1995) notes that
in Nicaraguan Sign Language, some classifier constructions based on the movements
of handling objects are replaced by constructions that represent the shape and size of
the object; this may be a move away from ‘mimetic enactment’, as she claims, but it is
certainly not a loss of iconicity itself. Boyes-Braem (1981) noted a similar distinction
between motor-based and shape-based iconicity in ASL. She describes this as de-iconi-
cization, where the new form is less pantomimic and more ‘sign-like’ than the old, but
once again the two forms are equally iconic.
In sum, changes in lexical signs may remove some small portion of sign languages’
iconicity, but the core iconic grammatical structures that appear in language after lan-
guage are unlikely to fully vanish.

5. Conclusion

In this chapter, we have seen a unified treatment of iconicity in sign and spoken lan-
guages. Iconicity exists in all types of languages and is a normal mode of creating
linguistic items: conventional iconic structures emerge via repetition from spontaneous
gestural blends. While some iconicity is lost as languages change over time, other types
of iconic forms remain. Sign languages have more iconic items than spoken languages
because the resources of sign languages lend themselves to presenting visual, spatial,
and motor images, whereas the resources of spoken languages only lend themselves to
presenting auditory images. Moreover, the combination of iconicity with metaphor and
metonymy allows for iconic representation of abstract concepts.
Despite its pervasiveness in sign languages, iconicity seems to play no role in acqui-
sition, recall, or recognition of fixed lexical signs in daily use. It is important, however,
for the use of key linguistic systems for description of spatial relationships (i.e., classi-
fier constructions and possibly pronoun systems). Moreover, language users are able
to exploit perceived iconicity spontaneously in language play and poetic usage.
We may conclude that iconicity does not limit sign languages. It is irrelevant to daily
use of lexical items, crucial to spatial descriptions, and a major resource for poetic and
creative language play.

6. Literature

Alpher, Barry
1994 Yir-Yoront Ideophones. In: Hinton, Leanne/Nichols, Johanna/Ohala, John J. (eds.),
Sound Symbolism. Cambridge: Cambridge University Press, 161⫺177.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy
2003 Classifier Constructions and Morphology in Two Sign Languages. In: Emmorey, Karen
(ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Law-
rence Erlbaum, 53⫺84.
Armstrong, David F.
1988 The World Turned Inside Out. In: Sign Language Studies 61, 419⫺428.
Bosworth, Rain G./Emmorey, Karen
1999 Semantic Priming in American Sign Language. Manuscript, The Salk Institute for Bio-
logical Studies, San Diego.
Boyes-Braem, Penny
1981 Features of the Handshape in American Sign Language. PhD Dissertation, University
of California, Berkeley.
Brennan, Mary
1990 Word Formation in British Sign Language. Stockholm: Stockholm University Press.
Brentari, Diane
1996 Trilled Movement: Phonetic Realization and Formal Representation. In: Lingua 98,
43⫺71.
Currie, Anne-Marie P. Guerra/Meier, Richard P./Walters, Keith
2002 A Crosslinguistic Examination of the Lexicons of Four Sign Languages. In: Meier, Rich-
ard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed
and Spoken Languages. Cambridge: Cambridge University Press, 224⫺236.
Cuxac, Christian/Sallandre, Marie-Anne
2007 Iconicity and Arbitrariness in French Sign Language (LSF): Highly Iconic Structures,
Degenerated Iconicity and Diagrammatic Iconicity. In: Pizzuto, Elena/Pietrandrea, Pa-
ola/Simone, Raffaele (eds.), Verbal and Signed Languages. Comparing Structures, Con-
structs, and Methodologies. Berlin: Mouton de Gruyter, 13⫺33.
DeMatteo, Asa
1977 Visual Imagery and Visual Analogues in American Sign Language. In: Friedman, Lynn
A. (ed.), On the Other Hand. London: Academic Press, 109⫺136.
Emmorey, Karen
2002 Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum.
Emmorey, Karen/Herzig, Melissa
2003 Categorical Versus Gradient Properties of Classifier Constructions in ASL. In: Emmo-
rey, Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah,
NJ: Lawrence Erlbaum, 221⫺246.
Fauconnier, Gilles
1997 Mappings in Thought and Language. New York: Cambridge University Press.
Fischer, Susan D.
1974 Sign Language and Linguistic Universals. In: Rohrer, Christian/Ruwet, Nicolas (eds.), Actes
du Colloque Franco-Allemand de Grammaire Transformationnelle II. Tübingen: Niemeyer,
187⫺204. [Reprinted in Sign Language & Linguistics 11(2), 2007, 245⫺262].
Frishberg, Nancy
1979 Historical Change: From Iconic to Arbitrary. In: Klima, Edward/Bellugi, Ursula (eds.),
The Signs of Language. Cambridge, MA: Harvard University Press, 67⫺84.
Gentner, Dedre/Markman, Arthur B.
1997 Structure Mapping in Analogy and Similarity. In: American Psychologist 52, 45⫺56.
Haiman, John (ed.)
1985 Iconicity in Syntax: Proceedings of a Symposium on Iconicity in Syntax, Stanford, June
1983. Amsterdam: Benjamins.
Hinton, Leanne/Nichols, Johanna/Ohala, John J. (eds.)
1994 Sound Symbolism. Cambridge: Cambridge University Press.
Hock, Hans Heinrich
1986 Principles of Historical Linguistics. Berlin: Mouton de Gruyter.
Hoemann, Harry W.
1975 The Transparency of Meaning of Sign Language Gestures. In: Sign Language Studies 7,
151⫺161.
Kendon, Adam
1988 How Gestures Can Become Like Words. In: Poyatos, Fernando (ed.), Cross-cultural
Perspectives in Nonverbal Communication. Toronto: Hogrefe, 131⫺141.
Klima, Edward/Bellugi, Ursula (eds.)
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kooij, Els van der
2002 Phonological Categories in Sign Language of the Netherlands: The Role of Phonetic
Implementation and Iconicity. PhD Dissertation, University of Leiden. Utrecht: LOT.
Lakoff, George
1992 The Contemporary Theory of Metaphor. In: Ortony, Andrew (ed.), Metaphor and
Thought (2nd ed.). Cambridge: Cambridge University Press, 202⫺251.
Lakoff, George/Johnson, Mark
1980 Metaphors We Live By. Chicago: University of Chicago Press.
Lane, Harlan
1992 The Mask of Benevolence. New York: Knopf.
Lentz, Ella Mae
1995 Circle of Life. In: The Treasure: Poems by Ella Mae Lentz. VHS Videotape. Berkeley,
CA: In Motion Press.
Liddell, Scott K.
1992 Paths to Lexical Imagery. Manuscript, Gallaudet University.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Mandel, Mark
1977 Iconic Devices in American Sign Language. In: Friedman, Lynn A. (ed.), On the Other
Hand. London: Academic Press, 57⫺107.
McDonald, Betsy H.
1982 Aspects of the American Sign Language Predicate System. PhD Dissertation, University
of Buffalo.
McKee, David/Kennedy, Graeme
2000 Lexical Comparison of Signs from American, Australian, British, and New Zealand
Sign Languages. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revis-
ited. Mahwah, NJ: Lawrence Erlbaum, 49⫺76.
McNeill, David
1992 Hand and Mind. Chicago: University of Chicago Press.
Meier, Richard P.
1982 Icons, Analogues, and Morphemes: The Acquisition of Verb Agreement in ASL. PhD
Dissertation, University of California, San Diego.
Meier, Richard P.
1987 Elicited Imitation of Verb Agreement in American Sign Language. In: Journal of Mem-
ory and Language 26, 362⫺376.
Morford, Jill P./Singleton, Jenny L./Goldin-Meadow, Susan
1995 The Genesis of Language: How Much Time Is Needed to Generate Arbitrary Symbols
in a Sign System? In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and
Space. Hillsdale, NJ: Lawrence Erlbaum, 313⫺323.
Okrent, Arika
2002 A Modality-free Notion of Gesture and How It Can Help Us with the Morpheme vs.
Gesture Question in Sign Language Linguistics (or at Least Give Us Some Criteria to
Work with). In: Meier, Richard P./Cormier, Kearsy A./Quinto-Pozos, David G. (eds.),
Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge Uni-
versity Press, 175⫺198.
Orlansky, Michael D./Bonvillian, John D.
1984 The Role of Iconicity in Early Sign Language Acquisition. In: Journal of Speech and
Hearing Disorders 49, 287⫺292.
Oswalt, Robert L.
1994 Inanimate Imitatives in English. In: Hinton, Leanne/Nichols, Johanna/Ohala, John J.
(eds.), Sound Symbolism. Cambridge: Cambridge University Press, 293⫺306.
Perniss, Pamela/Pfau, Roland/Steinbach, Markus
2007 Can’t You See the Difference? Sources of Variation in Sign Language Structure. In:
Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Cross-linguis-
tic Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 1⫺34.
Perniss, Pamela/Thompson, Robin L./Vigliocco, Gabriella
2010 Iconicity as a General Property of Language: Evidence from Spoken and Signed Lan-
guages. In: Frontiers in Psychology 1, 227.
Pietrandrea, Paola
2002 Iconicity and Arbitrariness in Italian Sign Language. In: Sign Language Studies 2,
296⫺321.
Pizzuto, Elena/Volterra, Virginia
2000 Iconicity and Transparency in Sign Languages: A Cross-cultural View. In: Emmorey,
Karen/Lane, Harlan (eds.), The Signs of Language Revisited. Mahwah, NJ: Lawrence
Erlbaum, 261⫺286.
Poizner, Howard/Bellugi, Ursula/Tweney, Ryan D.
1981 Processing of Formational, Semantic, and Iconic Information in American Sign Lan-
guage. In: Journal of Experimental Psychology: Human Perception and Performance 7,
430⫺440.
Reddy, Michael
1979 The Conduit Metaphor ⫺ A Case of Frame Conflict in Our Language about Language.
In: Ortony, Andrew (ed.), Metaphor and Thought. Cambridge: Cambridge University
Press, 164⫺201.
Rhodes, Richard
1994 Aural Images. In: Hinton, Leanne/Nichols, Johanna/Ohala, John J. (eds.), Sound Sym-
bolism. Cambridge: Cambridge University Press, 276⫺292.
Rhodes, Richard/Lawler, John
1981 Athematic Metaphors. In: Chicago Linguistic Society 17, 318⫺342.
Rosenstock, Rachel
2004 An Investigation of International Sign: Analyzing Structure and Comprehension. PhD
Dissertation, Gallaudet University, Washington, DC.
Saussure, Ferdinand de
1983 Course in General Linguistics [translated by Roy Harris; edited by Bally, Charles/Seche-
haye, Albert/Reidlinger, Albert]. London: Duckworth. [First published 1916, France].
Schembri, Adam/Jones, Caroline/Burnham, Denis
2005 Comparing Action Gestures and Classifier Verbs of Motion: Evidence From Australian
Sign Language, Taiwan Sign Language, and Nonsigners’ Gestures Without Speech. In:
Journal of Deaf Studies and Deaf Education 10(3), 272⫺290.
Senghas, Ann
1995 Children’s Contribution to the Birth of Nicaraguan Sign Language. PhD Dissertation,
MIT.
Supalla, Ted
1978 Morphology of Verbs of Motion and Location in American Sign Language. In: Caccam-
ise, Frank/Hicks, Doin (eds.), American Sign Language in a Bilingual, Bicultural Con-
text: Proceedings of the Second National Symposium on Sign Language Research and
Teaching. Publisher not given.
Supalla, Ted
1986 The Classifier System in American Sign Language. In: Craig, Colette (ed.), Noun
Classes and Categorization. Amsterdam: Benjamins, 181⫺214.
Supalla, Ted
1990 Serial Verbs of Motion in ASL. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical
Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: University of Chicago
Press, 127⫺152.
Taub, Sarah F.
2001 Language from the Body: Iconicity and Conceptual Metaphor in American Sign Lan-
guage. Cambridge: Cambridge University Press.
Wilbur, Ronnie B.
1979 American Sign Language and Sign Systems. Baltimore: University Park Press.
Wilbur, Ronnie B.
1987 American Sign Language: Linguistic and Applied Dimensions. Boston: Little, Brown
and Co.
Wilbur, Ronnie B.
1990 Metaphors in American Sign Language and English. In: Edmondson, William/Karlsson,
Fred (eds.) SLR ’87: International Symposium on Sign Language Research, Finland,
July 1987. Hamburg: Signum, 163⫺170.
Wilbur, Ronnie B.
2005 A Reanalysis of Reduplication in American Sign Language. In: Hurch, Bernhard (ed.),
Studies on Reduplication. Berlin: Mouton de Gruyter, 595⫺623.
Wilcox, Phyllis Perrin
1998 give: Acts of Giving in American Sign Language. In: Newman, John (ed.), The Linguis-
tics of Giving. Amsterdam: Benjamins, 175⫺207.
Wilcox, Phyllis Perrin
2000 Metaphors in American Sign Language. Washington, DC: Gallaudet University Press.

Sarah F. Taub, Washington, DC (USA)

19. Use of sign space


1. Introduction
2. Use of sign space for referent localization
3. Discourse-level structuring of sign space
4. Structuring sign space for event representation: Signing perspective and classifier predicates
5. Structuring sign space with multiple articulators: Simultaneous constructions
6. Typological perspective: Use of sign space across sign languages
7. Summary and outlook
8. Literature

Abstract
This chapter focuses on the semantic and pragmatic uses of space. The questions ad-
dressed concern how sign space (i.e. the area of space in front of the signer’s body) is
used for meaning construction, how locations in sign space are associated with discourse
referents, and how signers choose to structure sign space for their communicative intents.
The chapter gives an overview of linguistic analyses of the use of space, starting with the
distinction between syntactic and topographic uses of space and the different types of
signs that function to establish referent-location associations, and moving to analyses
based on mental spaces and conceptual blending theories. Semantic-pragmatic conven-
tions for organizing sign space are discussed, as well as spatial devices notable in the
visual-spatial modality (particularly, classifier predicates and signing perspective), which
influence and determine the way meaning is created in sign space. Finally, the special
role of simultaneity in sign languages is discussed, focusing on the semantic and dis-
course-pragmatic functions of simultaneous constructions.

1. Introduction
As many of the chapters in this volume demonstrate, signed and spoken languages
share fundamental properties on all levels of linguistic structure. However, they differ
radically in the modality of production ⫺ spoken languages use the vocal-auditory
modality, while sign languages use the visual-spatial modality. The most obvious modal-
ity-related difference lies in the size and visibility of the articulators used for language
production. Through their movements, the hands (as the primary articulators) produce
meaningful utterances in what is known as sign space, i.e. the space in front of the
signer’s body. By virtue of being produced in the visual-spatial modality, essentially all
of linguistic expression in sign languages depends on the use of space. On the phono-
logical level, space is used contrastively in the place of articulation parameter of signs.
On the morphosyntactic level, signs are modulated in space for grammatical purposes,
including aspectual marking, person and number marking, to distinguish between the
arguments of a predicate, and to identify referents at certain locations in space (see
Engberg-Pedersen 1993; Klima/Bellugi 1979; Meir 2002; Padden 1990; Sandler/Lillo-
Martin 2006; also see chapters 7, 8, and 11).
The focus of the present chapter is on the semantic and pragmatic uses of space. The
questions addressed concern how locations in sign space are associated with discourse
referents and how signers choose to structure sign space for their communicative in-
tents. This chapter will have little to say, therefore, about the functional/structural
analysis of morphosyntactic devices as such (e.g. pronouns, agreement or directional
verbs, and classifier predicates). They will be relevant, but only insofar as they bear
on the semantic and pragmatic structuring of sign space.
The chapter gives an overview of how sign space is used for the purpose of meaning
construction in signed utterances. Section 2 introduces and critically discusses the two
main types of use of sign space, i.e. syntactic and topographic, that have been tradition-
ally proposed. Section 3 presents semantic and pragmatic conventions for choosing
referent locations, and discusses the use of sign space on the higher level of discourse
structuring. Section 4 deals with signing perspective, as a way of structuring space for
event space projection, and the closely related use of classifier predicates. Section 5
focuses on the use of simultaneous constructions, as a special way of structuring sign
space given the availability of multiple, independent articulators in the visual-spatial
modality. Section 6 provides a look at sign language typology and the possible typologi-
cal variation in the use of sign space for meaning construction. Finally, section 7 gives
a summary and offers an outlook on future research.

2. Use of sign space for referent localization


The main principle guiding the use of sign space to express meaning in sign languages
is the association of referents with locations in space. Traditionally, the use of space to
achieve referent-location associations has been analyzed as taking two main forms:
syntactic and topographic (Klima/Bellugi 1979; Poizner/Klima/Bellugi 1987).

2.1. Syntactic use of sign space

In the syntactic (or referential) system, locations in sign space are chosen arbitrarily
to represent referents. The locations themselves are not considered to have semantic
import of any kind. Rather, they represent relations purely on an abstract, syntactic
level, e.g. to identify a verb’s arguments (e.g. Padden 1990; cf. chapter 7, Verb Agree-
ment) or for pronominal reference (e.g. Lillo-Martin/Klima 1990; cf. chapter 11, Pro-
nouns). For example, a signer may associate a location X1 in sign space to a referent
‘girl’ and a location X2 to a referent ‘boy’. By moving a directional (or agreement) verb
between these two locations, or sign space loci, the signer can express either the meaning
The girl asks the boy (by moving the verb sign from X1 to X2, as in Figure 19.1a) or the
meaning The boy asks the girl (by moving the verb sign from X2 to X1, as in Figure
19.1b).

Fig. 19.1: Example of the syntactic use of sign space. In the semi-circle representing sign space,
X1 and X2 are locations associated with the discourse referents ‘girl’ and ‘boy’, respec-
tively. In (a), the directional verb sign ask moves from X1 to X2 to express ‘The girl
asks the boy’. In (b), ask moves from X2 to X1 to express ‘The boy asks the girl’.
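
The logic of referential equality can be sketched as follows: loci are arbitrary labels
whose only job is to stand in for referents, and a directional verb is interpreted
purely by the locus it starts at and the locus it ends at. This is an illustrative toy
model, not a claim about the grammar; all names in it are hypothetical:

    # Illustrative sketch: syntactic use of sign space. The loci X1 and X2
    # are arbitrary; only their association with referents matters.

    loci = {"X1": "girl", "X2": "boy"}  # referent-locus associations

    def directional_verb(verb, start, end):
        # Reading of a directional verb moving from `start` to `end`.
        return f"The {loci[start]} {verb}s the {loci[end]}."

    print(directional_verb("ask", "X1", "X2"))  # The girl asks the boy.
    print(directional_verb("ask", "X2", "X1"))  # The boy asks the girl.

Nothing about the positions of X1 and X2 matters here; any other pair of distinct loci
would do the same work, which is what it means for the locations themselves to carry
no semantic import.
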
Liddell (1990) describes the syntactic use of sign space in terms of referential equal-
ity. In assigning entities to certain locations in sign space, those locations become stand-
ins for the entities themselves. Reference to the locations, e.g. by directing verb signs
or points to them, is equal to reference to the entities. Liddell (1990, 304) likens the
relationship of referential equality to the terminology of a legal contract. If Mr. Jones
is identified as “the borrower” in a contract, then all subsequent mentions of “the
borrower” within that contract refer to Mr. Jones, since the use of the phrase “the
borrower” is referentially equivalent to the man called Mr. Jones.

2.2. Topographic use of sign space

In Figure 19.1, the choice of locations in sign space gives no information about the
actual locations of the boy and girl in the event being described. Such locative informa-
tion is conveyed, however, when sign space is used topographically. In the topographic
use of space, the referent-location associations in sign space are in themselves meaning-
ful. They are chosen not arbitrarily, but rather to express spatial relationships between
referents. Thus, the locations X1 and X2 shown in Figure 19.1 would represent the
locations of the girl and the boy with respect to each other. The topographic use of
sign space exploits the iconic properties of the visual-spatial modality, as the spatial
relationships between locations in sign space match those between the referents in the
real or imagined event space being described (cf. chapter 18, Iconicity and Metaphor).
In contrast to referential equality, when sign space is used topographically, Liddell
(1990, 304) describes the relationship between a location in sign space and a referent
as location fixing. The referent is conceived of as being located at the particular sign
space location, which corresponds to a particular location in the real (or imagined)
world. Liddell uses the example of an actor who is told to stand at a particular location
on a stage. The actor’s location is thereby fixed within a spatial setting, and is topo-
graphically meaningful within that setting.
The topographic use of sign space is often associated with the use of classifier predi-
cates (cf. chapter 8). In these morphologically complex predicates, the handshape repre-
sents referents by classifying them according to certain semantic, often visual-spatial,
properties (e.g. a flat hand to represent the flat, rectangular shape of a book, or an
extended index finger to represent the long, thin shape of a pen). Furthermore, the
location and movement of the hands in sign space corresponds topographically to the
location and motion of referents. For example, to represent three books lying next to
each other on a table, a signer may place a flat hand successively in three different,
proximate locations in sign space, as shown in Figure 19.2.

Fig. 19.2: Example of the topographic use of sign space. In the semi-circle representing sign space,
the dashed squares represent the placement of the hand in three different sign space
locations associated with the locations of three books in the real (or imagined) world.
The meaning expressed is ‘There are three books lying next to each other’.
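
The difference from the syntactic use can be brought out by giving each locus
coordinates: in a topographic array, relations between loci (distance, adjacency,
ordering) are themselves interpretable. Below is a minimal sketch of the three-books
example, under the simplifying and purely hypothetical assumption that sign space can
be modelled as a two-dimensional plane:

    # Illustrative sketch: topographic use of sign space. Each placement of
    # the flat-hand classifier receives coordinates; the spatial relations
    # between loci mirror the relations between the books being described.

    placements = {
        "book-1": (-1.0, 0.5),  # (x, y) in an arbitrary sign-space frame
        "book-2": (0.0, 0.5),
        "book-3": (1.0, 0.5),
    }

    # Unlike syntactic loci, the geometry is meaningful: the equal gaps
    # encode that the books lie next to each other in a row.
    xs = sorted(x for x, _ in placements.values())
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    print("side by side, equally spaced:", len(set(gaps)) == 1)
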
Signers can use the topographic function of sign space to create very complex spatial
representations. Emmorey and Tversky (2002), for example, discuss signers’ use of
space to describe the topographic layout of a convention center or a town. To do so,
signers can use different styles of topographic mapping, depending on how the space
is conceptually viewed. As described by Emmorey and Tversky (2002), a signer can
either adopt a survey perspective, giving a bird’s eye view of the layout, or present the
spatial layout as if taking a tour through the space itself, adopting a route perspective
(cf. the discussion of signing perspective in section 4 below).

2.3. Processing evidence for the different function of syntactic
and topographic loci

Emmorey, Corina, and Bellugi (1995) provide evidence from language processing for
the differential function of topographic versus syntactic (or purely referential) uses of
space. In a memory task, signers were better at remembering spatial locations that
encoded locative information (i.e. exhibiting a topographic function) than those that
encoded only grammatical information (i.e. exhibiting a syntactic function). Similarly,
performance in a task that required deciding whether a probe sign had appeared in an
immediately preceding American Sign Language (ASL) sentence revealed a dissocia-
tion between the syntactic and topographic functions of space. The ASL sentences used
locations either syntactically or topographically and the probe signs were presented in
locations that were either congruent or incongruent with locations used in the senten-
ces. The results showed that signers were most impaired in speed and accuracy when
the probe sign appeared in an incongruent location within a topographic context. This
suggests that semantically relevant topographic locations are processed differently
from arbitrarily chosen syntactic locations. The authors argue that topographic loca-
tions may be more explicitly encoded, e.g. including other spatial information like
orientation and the relative positions of other referents. In addition, MacSweeney et al.
(2002) and Emmorey et al. (2002) provide evidence, in comprehension and production
respectively, for the involvement of brain areas specialized for spatial processing in
sign language constructions that make use of topographic functions of space.

2.4. Integrated function of syntactic and topographic loci

Emmorey et al. (1995), however, also emphasize that the two functions of sign space
are not mutually exclusive, noting that it is an issue of how a location functions within
sign space, and not of two distinct types of sign space (as is suggested by Poizner et al.
1987). Depending on how it is used, the same location can function both syntactically
(or referentially) and topographically. For example, a signer could use a classifier predi-
cate to establish a referent, e.g. a colleague, at a certain (topographic) location in sign
space, e.g. seated at her desk. Subsequently, the signer could direct a verb sign, e.g.
ask, to the same location, specifying the colleague as the grammatical object of the
predicate (see Liddell (1990, 318) for a similar example). In this example, the location
associated with the colleague is functioning syntactically (or referentially) and topo-
graphically at the same time. The colleague is still conceived of as seated at her desk
at the time she is asked a question. Although they recognize this double function of
loci in sign space, Emmorey et al. (1995) nevertheless maintain a clear distinction
between the two functions, arguing that loci do not necessarily convey topographic, or
spatially relevant, information. They note that “when space operates in the service of
grammatical functions, the spatial relation between the loci themselves is irrelevant”
(1995, 43).
Other researchers, in particular Liddell (1990, 1995, 1998, 2003) and van Hoek
(1992, 1996), propose a more strongly integrated view of the double function of spatial
loci, and have argued against maintaining a distinction between them. Van Hoek argues
that the use of space to create relationships between referents and loci is never truly
abstract (or arbitrary). Loci in sign space do not necessarily refer to the physical loca-
tion of referents (although van Hoek suggests this may be the prototype of spatial
reference), but reflect a more broadly defined conceptual location, in which referents
are conceived of within particular contexts, situations, or settings. Van Hoek’s analysis
draws on the theory of mental spaces (Fauconnier 1985, 1994), which are conceptual
structures containing all elements relevant to meaning construction in a particular con-
text, including background and world knowledge about referents. In this sense, a loca-
tion in sign space is associated not simply with a referent, but with a mental space, and
thus the location “may invoke not only the conception of a referent, but the conception
of the situation in which the referent is involved” (van Hoek 1992, 185). Furthermore,
it is not only the particular situation that is relevant, but also other factors like the
perceptual and conceptual saliency of the referent, the current location of the referent,
and the discourse focus (van Hoek 1992).
Similarly, Liddell argues that the use of space to indicate non-present referents
functions fundamentally in the same way as for present referents. The association of a
location in sign space with an entity is in fact an indication of that entity’s conceptual
location. Liddell maintains that all signs which refer to locations in sign space, i.e.
directional predicates, pronouns, and classifier predicates, use space in the same way,
and questions any notion of separability of the two functions of spatial loci. Liddell
(1995 and subsequent) develops mental spaces and conceptual blending theories (Fau-
connier 1985; Fauconnier/Turner 1996) as the basis for meaning construction in sign
space (see also Dudis 2004). Conceptual blending is a process that operates over men-
tal spaces, in which selected properties from two input mental spaces get projected
onto a new, blended mental space. In a blend analysis of sign, the input spaces are
(i) real space (i.e. the immediate environment, including sign space) and (ii) the con-
ceptual representation of the event or situation to be described. In the blends that are
created in sign space, elements from conceptual space are projected onto the real space
(as sign space), and get mapped onto the signer’s hands and body (visible) and onto
locations in sign space (non-visible). Loci in sign space that are associated with particu-
lar referents are thus blended elements, and as such are interpreted as existing within
a certain spatio-temporal context.
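
A conceptual blend of the kind Liddell describes can be sketched as selective
projection from two input spaces into a blended space. The toy example below is
entirely hypothetical in its details; it projects a position from real space and a
referent, together with its situational context, from the conceptual representation of
the event:

    # Illustrative sketch: conceptual blending over mental spaces.
    # Input 1: real space (the immediate environment, including sign space).
    # Input 2: the conceptual representation of the event to be described.

    real_space = {"locus_A": {"position": "left of signer"}}
    event_space = {"colleague": {"role": "addressee of ASK",
                                 "setting": "seated at her desk"}}

    # Projection: the blended locus inherits a position from real space and
    # a referent (with its spatio-temporal context) from the event space.
    blend = {"locus_A": {**real_space["locus_A"],
                         **event_space["colleague"]}}
    print(blend["locus_A"])

On this view, pointing to locus_A invokes not a bare referent but the blended element
situated in its context, which is what the mental-spaces analyses of van Hoek and
Liddell emphasize.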

3. Discourse-level structuring of sign space

This section focuses not on the functions of individual spatial loci in sign space, but
rather on the conventions by which signers decide how to structure sign space. In
creating arrays of referent-location associations in sign space, the expression of locative
relations between referents is only one of many relevant issues. Signers are guided in
the meaningful structuring of sign space by semantic and pragmatic considerations
(Engberg-Pedersen 1993) as well as by principles of discourse cohesion (Winston 1991,
1995). Thus, even beyond an interpretation of loci as representing actual physical or
more broadly conceived contextual locations of referents, the choice of spatial loci in
sign space is hardly ever arbitrary.
Engberg-Pedersen (1993) recognizes that a signer’s choice of loci is motivated by a
variety of factors, including semantic and spatial relationships between referents, as
well as a signer’s attitude toward referents. In addition to what she calls the iconic
convention, in which the locative relationships between referents are reflected in the
choice of spatial loci, she proposes several further semantic-pragmatic conventions for
structuring sign space. According to the convention of semantic affinity, referents that
are semantically related to each other, e.g. through a possessive relationship, are repre-
sented at the same locus in sign space. Semantic affinity overlaps with the convention
of canonical location. A referent can be associated with a location typically associated
with that referent, e.g. the city in which a person lives, even if he or she is not in that
city at the time of utterance.
Other conventions have less to do with the relationships between referents them-
selves and are instead more reflective of the signer’s attitude towards or assessment of
referents being talked about. Engberg-Pedersen observes that signers can express point
of view in their choice of loci for different referents by using locations proximal and
distal to the body on a diagonal axis. Signers tend to locate disliked referents at a
location further from the body, and place referents with which they empathize close
to the body. For example, in discussing two movies, one liked and one disliked, a signer
might underscore her adverse opinion by placing the disliked movie at a distal location,
while choosing a location close to the body for the favored movie. Were the signer
comparing two movies she was equally fond of, she would tend instead to use the left-
right lateral axis, giving equal, but contrastive, status to the two movies. The use of the
lateral axis for juxtaposing two referents (or two sets of related referents) falls under
the convention of comparison. Finally, Engberg-Pedersen notes that the choice of loci
is influenced by the authority convention. A signer may locate referents to whom she
attributes authority, e.g. a boss or government official, higher up in space than other
referents with less authority.
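
These conventions can be read as a set of defaults that jointly constrain locus choice.
The sketch below encodes a few of them as simple rules; the rule inventory, its
ordering, and all names are invented for illustration and should not be taken as a
model of the grammar:

    # Illustrative sketch: Engberg-Pedersen's semantic-pragmatic conventions
    # as rough, ordered defaults for locus choice (priorities hypothetical).

    def choose_locus(referent):
        if referent.get("authority"):        # authority convention
            return "higher up in sign space"
        if referent.get("disliked"):         # point of view: diagonal axis
            return "distal on the diagonal axis"
        if referent.get("empathized"):
            return "proximal, close to the body"
        if referent.get("compared"):         # convention of comparison
            return "one side of the left-right lateral axis"
        if referent.get("canonical_place"):  # canonical location convention
            return f"locus standing for {referent['canonical_place']}"
        return "any free locus"

    print(choose_locus({"disliked": True}))   # e.g. the disliked movie
    print(choose_locus({"authority": True}))  # e.g. a boss or official
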
Winston (1991, 1995) and Mather and Winston (1998) discuss the contrastive use of
sign space on a discourse level in terms of spatial mapping. Here, sign space structuring
achieves discourse cohesion by mapping different discourse themes onto different
areas of sign space. Morphosyntactically, this is accomplished with devices associated
with the creation of spatial loci: directional verbs, classifier predicates, pointing signs,
as well as the spatial displacement of citation form signs. The visual-spatial modality
allows signers to create a visual representation of discourse structure. This, in turn,
provides addressees with powerful cues to the structure of discourse, aiding meaning
comprehension through visual information chunking. For example, in their analysis of
spatial mapping in an ASL narrative, Mather and Winston (1998) observe that the
narrator creates two main discourse spaces in sign space, one for inside a house and
one for outside it. These main spaces are further subdivided to elaborate subtopics
related to either of the main spaces, e.g. to describe events that take place inside or
outside the house, respectively. It is important to note that spatial mapping refers not
only to the mapping of concrete entities, but also of abstract ideas and notions. In this
way, discourse cohesion is visually reinforced for events in which referents engage, but
also for reporting their inner monologues or thoughts.

4. Structuring sign space for event representation: Signing perspective and classifier predicates

In the sections above, referent location has been discussed in connection with the
topographic use of sign space. The depiction of referent location is often coupled with
the depiction of referent motion and action in signed discourse ⫺ particularly in event
narratives. In describing complex events, narrators convey information about referents
acting and interacting within a spatial setting, thereby constructing a representation of
the event space in which an event takes place. To achieve this, signed narratives rely
to a large extent on the use of signing perspective together with the use of classifier
predicates, which encode spatial and action information about referents by representing
the referent as a whole (with entity classifiers) or by representing the manipulation of a
referent (with handling classifiers). This section will focus on the relationship between
perspective and classifier predicates in structuring sign space for event representation.

4.1. Signing perspective

Signing perspective refers to the way in which an event space (real or imagined) is
mapped or projected onto sign space, and is thus significant in determining how sign
space is structured for spatial representation. There are two ways in which this projec-
tion can take place, depending on the signer’s conceptual location, or vantage point,
in relation to the event space. In one case, the signer is construed as external to the
event space. In this observer perspective, the whole event space is projected onto the
area of sign space in front of the body, providing a global view of the event space. In
the other case, the signer is internal to the event space, in the role of a character within
the event. This gives the signer a character perspective on the event space, which is
conceived of as life-sized, encompassing and surrounding the signer. Entities in the
event space are mapped onto sign space as seen by the character mapped onto the
signer’s body (Perniss 2007a; Perniss/Özyürek 2008; Özyürek/Perniss 2011). Figure 19.3
gives a schematic depiction of the event space as projected onto sign space from (a)
an observer’s perspective and (b) a character’s perspective (see Fridman-Mintz/Liddell
(1998) for the use of similar schematic depictions, where a wavy line area surrounding
the signer indicates surrogate space and a semi-circle area in front of the signer indi-
cates token space).

Fig. 19.3: Example of event space projection from (a) observer perspective, where the whole of
event space is mapped onto the area of space in front of the signer’s body, and from
(b) character perspective, where the signer is within the event space, in the role of a
character in the event.

The two types of signing perspective ⫺ observer and character ⫺ have been de-
scribed along similar lines, with different names, by numerous researchers: model and
real-world space (Schick 1990); diagrammatic and viewer spatial format (Emmorey/
Falgier 1999); fixed and shifted referential framework (Poizner et al. 1987; Morgan
1999); token and surrogate space (Liddell 1995, 1998); depictive and surrogate space
(Liddell 2003); narrator and protagonist perspective (Slobin et al. 2003); global and
participant viewpoint (Dudis 2004); diagrammatic and viewer space (Pyers/Senghas
2007). Moreover, a similar distinction has been made in gesture research for iconic
gestures made from either an observer or character viewpoint (McNeill 1992).

4.2. The relationship between signing perspective and classifier predicates

The relationship between signing perspective and classifier predicates, stated implicitly
or explicitly, can be framed in various ways. In terms of argument structure and verb
semantics, there is a systematic correspondence between entity classifiers and intransi-
tive verbs, on the one hand, and between handling classifiers and transitive verbs, on
the other hand (cf. Engberg-Pedersen 1993; McDonald 1982; Supalla 1986; Zwitserlood
2003). In each case, the handshape (or classifier) of the predicate encodes the theme
argument of the verb. With entity classifiers, the position/movement of the hands in
sign space directly encodes the intransitive location/motion of entities in the event
space, corresponding to the event-external vantage point of observer perspective. With
handling classifiers, the transitive motion of entities is represented on the hands
through a depiction of agentive manipulation, corresponding to the event-internal van-
tage point of character perspective (see chapter 8, Classifiers, for details).
The relationship between perspective and classifiers can also be characterized in
terms of the interplay of articulatory and semantic constraints, that is, constraints on
the type of information that certain forms are able to felicitously represent. For exam-
ple, the so-called 2-legged entity classifier (index and middle finger extended, fingers
pointing downward) has properties that correspond to features (or facets) of the hu-
man body: the extended fingers correspond to the legs, the tips of the fingers corre-
spond to the feet, and the back side of the fingers corresponds to the front of the body
(as shown in still 1 of Figure 19.4a). In addition to simple location and motion, these
properties allow signers to represent postural information (e.g. lying vs. standing), di-
rection of movement (e.g. forward vs. backward), as well as manner of locomotion
(e.g. walking vs. jumping). Similarly, the so-called upright entity classifier (index finger
extended, pointing upward) is used to represent human beings, as the long, upright
shape of the finger corresponds to the upright form of the human figure (as shown in
still 1 of Figure 19.4b). By convention, this handshape can also encode orientation, by
mapping the front of the body onto the front (inside surface) side of the finger.
However, neither of these two entity classifiers includes features that correspond to
the human figure’s arms or head, and they are thus not suited for the expression of
manual activity. To depict the manual manipulation, or manner of handling, of a refer-
ent, the signer’s hands have to function as hands, imitating the actual manual activity.
Expressions of this type of information appropriately involve the use of handling classi-
fiers and imply a character perspective representation (Perniss 2007c; Perniss/Özyürek
2008; Özyürek/Perniss 2011). Figure 19.4 shows the use of entity classifiers to depict
the location (in (a), still 1) and motion (in (b), still 1) of referents, and the subsequent
use of handling classifiers to depict the respective manual activity of the referent (in
still 2 of (a) and (b)) in its location or along its path. While the use of the entity
classifiers in the examples occurs in an event space projected from an external ob-
server’s perspective, the handling classifiers occur in a character perspective space in
which the signer embodies the referent. In (a), the signer is depicting an animate refer-
ent standing at a stove, holding a pan. In still 1, the signer uses a 2-legged entity
classifier to represent the referent’s location and orientation. In still 2, the signer uses
a grasping handshape to represent the referent holding the pan. In (b), the signer
represents an animate referent walking while dribbling a ball. In still 1, the signer
represents the path motion of the referent, and then represents the referent dribbling
the ball in still 2.

Fig. 19.4: Examples of prototypically aligned classifier-perspective constructions: Entity classifier
predicate in an observer perspective event space projection (still 1 of (a) and (b)); Han-
dling classifier predicate in a character perspective event space projection (still 2 of (a)
and (b)). The examples are from German Sign Language (DGS).

The correspondence between the use of entity classifiers and observer perspective,
on the one hand, and handling classifiers and character perspective, on the other hand,
can also be motivated by a principle of scale iconicity, whereby different parts of a
representation should have the same size, ensuring an internal consistency in scale
within the event space projection (Perniss 2007c). In observer perspective, the event
space is reduced in size, and the scale of referent representation within the event space
is correspondingly small. In contrast, the event space in character perspective is life-
sized, and referents within the event are equally represented on a life-sized scale. Based
on the notion of scale iconicity, specifically the match between the size of referent
projection and the size of event projection, the correspondences between perspective
and classifiers can be formulated in terms of prototypical alignments (Perniss 2007a,
2007b; Perniss/Özyürek 2008; Özyürek/Perniss 2011). The predicates in Figure 19.4 are
all examples of prototypically aligned classifier-perspective constructions.

4.3. Choice of signing perspective

As described above, the choice of perspective within a narrative depends to a consider-
able degree on the type of information to be expressed. The perspective from which
an event space is projected also bears on the localization of referents in sign space.
That is, the way in which sign space is structured depends on whether referents are
‘seen’ from the perspective of an external observer or from the perspective of an event
protagonist. Let us take the example of two animate referents standing opposite each
other in a particular spatial setting. In observer perspective, where the signer is not
part of the event, the canonical locations (assuming equal discourse status) for the two
referents are opposite each other on the lateral axis, on the left and right sides of
the signer’s body (cf. the discussion of Engberg-Pedersen’s (1993) semantic-pragmatic
conventions for structuring sign space in section 3 above). In character perspective,
however, the signer is a character within the event, taking on the role of one of the
animate referents. The location of one referent thus coincides with the location of the
signer’s body. The other referent, located opposite conceptually, must thus be mapped
onto sign space at a location opposite the signer’s body. Figure 19.5 gives a schematic
representation of canonical referent locations in observer and character perspective
event space projections. These correspondences are evident, for example, in the predi-
cates in Figure 19.4b. The movement of the entity classifier (in still 1) is along the
lateral axis (corresponding to the direction of motion observed). The handling classi-
fier, in contrast, is directed forward, along the sagittal axis.

Fig. 19.5: Schematic representation of canonical referent locations in an event space projected
from (A) character perspective and (B) observer perspective.

This means that depending on the use of perspective, the same referent can be
associated with different locations in sign space. This affects how sign space is struc-
tured and may have implications for discourse coherence. The combinations of per-
spective and classifier predicates found in extended discourse are much more varied
than the prototypical alignments described above. The co-occurrence of different clas-
sifier forms with different perspectives thus motivates the existence of aligned and non-
aligned classifier-perspective construction types. As described, there are two kinds of
the aligned classifier-perspective construction type: entity classifiers in observer per-
spective, on the one hand, and handling classifiers in character perspective, on the
other hand. There are also two kinds of the non-aligned classifier-perspective construc-
tion type, which are the converse combinations: entity classifiers in character perspec-
tive, on the one hand, and handling classifiers in observer perspective, on the other
hand. These are summarized in Table 19.1.

Tab. 19.1: Classifier predicate and signing perspective correspondences in aligned and non-aligned
classifier-perspective construction types.

Construction type    Classifier predicate    Signing perspective
Aligned              Entity classifier       Observer perspective
                     Handling classifier     Character perspective
Non-aligned          Entity classifier       Character perspective
                     Handling classifier     Observer perspective

Fig. 19.6: Examples of non-aligned classifier-perspective constructions: (a) Entity classifier predi-
cate in a character perspective event space projection (German Sign Language, DGS);
(b) Handling classifier predicates in an observer perspective event space projection
(Turkish Sign Language, TİD).

Examples of non-aligned construction types are shown in Figure 19.6. In (a), the
signer uses an entity classifier (on the right hand) to place an animate referent, in a
prone posture, in a location along the sagittal axis, opposite the body. The event space
is projected from ⫺ and thus the entity classifier referent is depicted within ⫺ the
character perspective of the referent mapped onto the signer’s body. This referent is
holding a ball (on the left hand; see section 5 on simultaneous constructions) and is
facing the referent lying down. In (b), the signer depicts two animate referents standing
across from each other, holding pans and flipping a pancake back and forth between
them. The referent locations correspond to an observer perspective representation of
the event space, but the signer uses handling classifiers, prototypically associated with
character perspective, to represent the referents’ activity. The hands are oriented in
order to depict the flipping along the lateral axis. In an aligned character perspective
representation, only one referent (character) would be depicted at a time, and the
hand would be directed forward, as in the actual activity. Classifier predicates and
signing perspective are used as spatial devices in almost all sign languages that have
been studied to date (an exception is Adamorobe Sign Language (AdaSL), a village
sign language used in Ghana, in which the use of entity classifiers is not attested (Nyst
2007a); also see chapter 24, Shared Sign Languages). Thus, different classifier-perspec-
tive construction types ⫺ i.e. aligned and non-aligned, as well as more complex combi-
nations and fusions of perspective (see section 5 on the simultaneous use of perspec-
tives) ⫺ should theoretically exist in all sign languages. However, the frequency of use,
or distribution of occurrence, of different construction types may differ significantly
between sign languages (see section 6 on typological differences between sign lan-
guages).

5. Structuring sign space with multiple articulators: Simultaneous constructions

The availability of multiple, independent articulators is a further factor that impacts
on the use of space, specifically, on how sign space is structured to refer to and repre-
sent referents. Simultaneous constructions have been defined as representations that
are produced with more than one articulator, whereby each articulator bears distinct
and independent meaning units, which stand in some semanto-syntactic relationship to
each other (Engberg-Pedersen 1994; Leeson/Saeed 2007; Miller 1994; Vermeerbergen
2001; Vermeerbergen/Leeson/Crasborn 2007).
Much of the research on simultaneous constructions focuses on the use of the two
manual articulators, and numerous different functions of bimanual simultaneous con-
structions have been identified. Their functions can be categorized into two main
groups, based on whether they reflect perceptual structure or discourse structure. Si-
multaneous constructions can also involve articulators other than the hands, for exam-
ple, the eyes, face, or torso (Aarons/Morgan 2003; Dudis 2004; Liddell 1998, 2000;
Perniss 2007b). The simultaneous use of manual and non-manual articulators often
occurs to express elements associated with different perspectives, and has likewise been
associated with different functions.
Simultaneous constructions that reflect discourse structure are used primarily to
guide the progression of discourse. In this sense, they lie outside the focus of this chap-
ter. For example, signers can use simultaneous constructions in listing contexts, where
one hand is used to enumerate while the other hand identifies the list items (cf. Lid-
dell’s (2003) pointer buoy). Signers can also exploit the affordance of simultaneity to
create topic-comment structures, visually maintaining a discourse topic in sign space
on one hand, while the other hand ‘comments’ on it.
In simultaneous constructions that reflect perceptual structure, the two hands ex-
press information about the spatial and/or temporal organization of an event. Sign
space is structured to represent the simultaneity of spatial and temporal relationships
between referents, both static and moving, and using both entity and handling classifi-
ers. For example, two entity classifiers used within a simultaneous construction can
depict the spatial relationship between a person and a table (e.g. a person standing at
a table) (as shown in Figure 19.7). Similarly, two handling classifiers can depict the
temporal simultaneity of two events, e.g. holding open a cupboard door while retrieving
a cup from the cupboard. These constructions contain what Engberg-Pedersen (1993,
1994) analyzes as a hold-morpheme. The hold-morpheme is typically associated with
the ground referent or the backgrounded event (and is typically signed with the non-
dominant hand). In addition, the hold-morpheme is neutral with respect to the seman-
tic distinction between location and motion, such that when the interaction between
two moving referents (e.g. two basketball players) is depicted in sign space, only the
foregrounded referent is associated with a movement morpheme (Engberg-Pedersen
1993, 284).

Fig. 19.7: Example of a simultaneous classifier construction. Two entity classifier predicates are
used simultaneously to depict the spatial relationship of a person standing at a table
(German Sign Language, DGS).

The simultaneous representation of aspects of events associated with different per-
spectives, and involving both manual and non-manual articulators, has not been widely
discussed in the literature to date, yet it is recognized as a frequent phenomenon in
narratives by those researchers who have studied it (e.g. Aarons/Morgan 2003; Dudis
2004; Engberg-Pedersen 1993; Hendriks 2008; Liddell 1998, 2000, 2003; Morgan 2002;
Perniss 2007a, 2007b, 2007c). Different functions have been attributed to such repre-
sentations, and they have been labeled in different ways. Aarons and Morgan (2003)
describe the use of ‘multiple perspective representations’, while Dudis and Liddell
characterize the creation of ‘simultaneous blends’. For these authors, the depiction of
different aspects of an event in different perspectives functions mainly to give a richer,
more detailed representation of the event. For example, both Dudis (2004) and Liddell
(2000) give an example from ASL in which a signer simultaneously represents a vehicle
on one hand and the driver of the vehicle on the body. A ‘zoomed out’, or observer
perspective, view of the scene is exhibited in the use of an entity classifier to depict,
in Liddell’s example, a car stopped at an intersection, and in Dudis’ example, a motor-
cycle going up a hill. By mapping the drivers of the vehicles onto the body, the signer
can simultaneously depict their facial expressions and behaviors (e.g. the driver of the
car looking both ways before crossing the intersection) through a ‘zoomed in’ or char-
acter perspective view of the scene. Aarons and Morgan (2003) describe a similar
construction from South African Sign Language (SASL) in which a signer maps a
parachutist floating through the air simultaneously onto his hand and onto his body.
Perniss uses the terms ‘simultaneous perspective constructions’ (2007a,b) and ‘double-
perspective constructions’ (2007c). She attributes to these constructions two separate
functions: (i) achieving a full, semantic specification of an event, and (ii) creating a
mapping between two event space projections (one from observer and one from char-
acter perspective). Hendriks (2008) discusses the use of ‘multiple perspectives’ in Jor-
danian Sign Language (LIU), mentioning their function of clarifying a positional rela-
tionship. She gives the example of a signer using two entity classifiers to depict one
animal jumping onto the neck of another animal. The signer does this by moving one
entity classifier (2-legged entity classifier) onto the other entity classifier, specifically
onto the back of the other hand. She then additionally represents the jumping move-
ment by moving the entity classifier onto her own neck and head. In doing so, she
clarifies the nature of the spatial relationship between the two referents.

6. Typological perspective: Use of sign space across sign languages

The use of spatial devices for structuring sign space ⫺ including the use of classifier
predicates, directional verbs, and pointing signs ⫺ has been assumed to be similar
across sign languages, due to the assumption that the iconic potential of the visual-
spatial modality creates a homogenizing effect on spatial structure (Aronoff/Meir/
Sandler 2005; Meier 2002; Talmy 2003). To date, however, very little cross-sign-linguis-
tic research on the use of sign space for referent representation has actually been
conducted. Moreover, there is comparatively little research on less well-known and
unrelated sign languages, as well as a lack of research on sign language usage in actual
discourse situations.
However, these gaps in research are beginning to be filled. More, as well as more
geographically diverse, sign languages are being studied, enriching the field of sign
linguistics through an accumulation of data and knowledge about sign language lexica
and grammars. Cross-linguistic and typological investigations are also growing in popu-
larity, with researchers embarking on long-term and large-scale projects aimed at un-
covering the range of structural variation possible within the visual-spatial language
modality (e.g. Perniss/Pfau/Steinbach 2007; Zeshan 2006; Zeshan/Perniss 2008; Wil-
bur 2006).
Investigations into the use of space, the domain of interest in the current chapter,
in different sign languages have also begun to surface recently (Aronoff et al. 2003,
2005; Liddell/Vogt-Svendsen/Bergman 2007; Nyst 2007a,b; Perniss/Özyürek 2008; Py-
ers/Senghas 2007; Özyürek/Perniss 2011). It is this domain, in particular, where modal-
ity effects are widely considered to create similarities across sign languages (see also
Cuxac 1999; Sallandre/Cuxac 2002). In fact, some research on the use of classifier
predicates has found striking similarities in the representation of spatial information
across different sign languages (e.g. Aronoff et al. 2003, 2005). However, other compar-
ative research has shown that significant differences exist between sign languages in
the use of space (e.g. Arik 2009; Perniss/Özyürek 2008; Pyers/Senghas 2007). Such
results suggest that language-specific constraints are involved in shaping the influence
of the modality’s iconic properties, and that this domain may exhibit more variation
than previously thought.
For example, Perniss and Özyürek (2008) and Özyürek and Perniss (2011) found
differences in the use of classifier-perspective constructions, specifically, in the distribu-
tion of use of non-aligned construction types, between German Sign Language (DGS)
and Turkish Sign Language (TİD), as well as differences in the way the two perspec-
tives were combined into single constructions. While TİD signers used handling classifi-
ers quite frequently in an observer perspective event space projection (as shown in Figure
19.6b), this non-aligned construction type was not used at all by DGS signers in the
data set compared. Conversely, entity classifiers in a character perspective event
space were used abundantly by DGS signers (as shown in Figure 19.6a), but
comparatively rarely by TİD signers. Nyst (2007a) found that AdaSL, in contrast to
other sign languages studied to date, does not make use of entity classifiers to
express location and motion events. Instead, AdaSL signers use character perspective
representations together with generic directionals and intransitive motion markers. For
example, to represent the upward climbing motion of an animate referent, AdaSL
signers combine a predicate guan (meaning ‘run’) with a generic directional up. In
combination with a directional, the predicate guan functions to mark an intransitive
motion event (Nyst 2007a). Finally, Pyers and Senghas (2007) found differences in the
devices used to mark referential shift between ASL and Nicaraguan Sign Language,
including differences in the use of the body to indicate role-taking and in the use of
pointing signs to indicate referents.

7. Summary and outlook

This chapter has described the semantic and pragmatic uses of sign space, explaining
the different ways in which locations in sign space are given meaning, and the use of
different spatial devices in structuring sign space. The chapter first provided an over-
view of the syntactic and topographic uses of space, showing that referent-location
associations in sign space can either reflect the real-world locations of referents (pro-
viding information about spatial configuration), or be chosen independently of actual
locations, simply in order to mark syntactic relations. While there is evidence that these
two uses of space are treated differently in processing, many researchers have shown
that the choice of locations in sign space is never really arbitrary, but rather motivated
by semantic-pragmatic conventions and principles of discourse cohesion.
The chapter then described the use of signing perspective and classifier predicates,
two primary spatial devices used for structuring sign space for event representation,
especially of referent location, motion, and action. It described the relationship be-
tween classifier predicates and signing perspective and motivated the existence of dif-
ferent classifier-perspective construction types, including the use of simultaneous con-
structions and, in particular, the simultaneous use of different perspectives. Finally, the
issue of sign language typology was discussed, focusing on the possibilities of variation
between sign languages in the use of sign space.
Future research will continue to uncover similarities and differences between sign
languages in the use of sign space, leading us to a better understanding of the influence
of modality on language structure, as well as of the potential for language-specific varia-
tion given similar morphosyntactic structures and resources.

8. Literature
Aarons, Debra/Morgan, Ruth
2003 Classifier Predicates and the Creation of Multiple Perspectives in South African Sign
Language. In: Sign Language Studies 3(2), 125⫺156.
Arik, Engin
2009 Spatial Language: Insights from Sign and Spoken Languages. PhD Dissertation, Purdue
University, West Lafayette, IN.
Aronoff, Mark/Meir, Irit/Padden, Carol A./Sandler, Wendy
2003 Classifier Constructions and Morphology in Two Sign Languages. In: Emmorey, Karen
(ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Law-
rence Erlbaum, 53⫺84.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81(2), 301⫺344.
Cuxac, Christian
1999 French Sign Language: Proposition of a Structural Explanation by Iconicity. In: Braf-
fort, Annelies/Gherbi, Rachid/Gibet, Sylvie/Richardson, James/Teil, Daniel (eds.), Lec-
ture Notes in Artificial Intelligence. Proceedings of the 3rd Gesture Workshop ’99 on
Gesture and Sign Language in Human-Computer Interaction. Berlin: Springer, 165⫺184.
Dudis, Paul
2004 Body Partitioning and Real-space Blends. In: Cognitive Linguistics 15(2), 223⫺238.
Emmorey, Karen/Corina, David/Bellugi, Ursula
1995 Differential Processing of Topographic and Referential Functions of Space. In: Emmo-
rey, Karen/Reilly, Judy S. (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence
Erlbaum, 43⫺62.
Emmorey, Karen/Falgier, Brenda
1999 Talking About Space with Space. In: Winston, Elizabeth A. (ed.), Story Telling and
Conversation: Discourse in Deaf Communities. Washington, DC: Gallaudet University
Press, 3⫺26.
Emmorey, Karen/Damasio, Hanna/McCullough, Stephen/Grabowski, Thomas/Ponto, Laura L.B./
Hichwa, Richard D./Bellugi, Ursula
2002 Neural Systems Underlying Spatial Language in American Sign Language. In: Neuro-
Image 17, 812⫺824.
Emmorey, Karen/Tversky, Barbara
2002 Spatial Perspective Choice in ASL. In: Sign Language & Linguistics 5(1), 3⫺25.
Engberg-Pedersen, Elisabeth
1993 Space in Danish Sign Language: The Semantics and Morphosyntax of the Use of Space
in a Visual Language. Hamburg: Signum.
Engberg-Pedersen, Elisabeth
1994 Some Simultaneous Constructions in Danish Sign Language. In: Brennan, Mary/Turner,
Graham H. (eds.), Word-order Issues in Sign Language. Durham: ISLA, 73⫺87.
Fauconnier, Gilles
1985 Mental Spaces. Cambridge, MA: MIT Press.
Fauconnier, Gilles
1994 Mental Spaces. Aspects of Meaning Construction in Natural Language. Cambridge:
Cambridge University Press.
Fauconnier, Gilles/Turner, Mark
1996 Blending as a Central Process of Grammar. In: Goldberg, Adele E. (ed.), Conceptual
Structure, Discourse, and Language. Stanford, CA: CSLI, 113⫺130.
Fridman-Mintz, Boris/Liddell, Scott K.
1998 Sequencing Mental Spaces in an ASL Narrative. In: Koenig, Jean-Pierre (ed.), Dis-
course and Cognition: Bridging the Gap. Cambridge: Cambridge University Press,
255⫺268.
Hendriks, Bernadet
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD
Dissertation, University of Amsterdam. Utrecht: LOT.
Hoek, Karen van
1992 Conceptual Spaces and Pronominal Reference in American Sign Language. In: Nordic
Journal of Linguistics 15, 183⫺199.
Hoek, Karen van
1996 Conceptual Locations for Reference in American Sign Language. In: Fauconnier,
Gilles/Sweetser, Eve (eds.), Spaces, Worlds, and Grammar. Chicago, IL: University of
Chicago Press, 334⫺350.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Leeson, Lorraine/Saeed, John
2007 Conceptual Blending and the Windowing of Attention in Simultaneous Constructions
in Irish Sign Language. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno
(eds.), Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins,
55⫺72.
Liddell, Scott K.
1990 Four Functions of a Locus: Reexamining the Structure of Space in ASL. In: Lucas, Ceil
(ed.), Sign Language Research: Theoretical Issues. Washington, DC: Gallaudet Univer-
sity Press, 176⫺198.
Liddell, Scott K.
1995 Real, Surrogate and Token Space: Grammatical Consequences in ASL. In: Emmorey,
Karen/Reilly, Judy S. (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence
Erlbaum, 19⫺41.
Liddell, Scott K.
1998 Grounded Blends, Gestures, and Conceptual Shifts. In: Cognitive Linguistics 9(3),
283⫺314.
Liddell, Scott K.
2000 Blended Spaces and Deixis in Sign Language Discourse. In: McNeill, David (ed.), Lan-
guage and Gesture: Window Into Thought and Action. Cambridge: Cambridge Univer-
sity Press, 331⫺357.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Liddell, Scott K./Vogt-Svendsen, Marit/Bergman, Brita
2007 A Cross-linguistic Comparison of Buoys. Evidence from American, Norwegian, and
Swedish Sign Language. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno
(eds.), Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins,
187⫺215.
Lillo-Martin, Diane/Klima, Edward S.
1990 Pointing out Differences: ASL Pronouns in Syntactic Theory. In: Fischer, Susan D./
Siple, Patricia (eds.), Theoretical Issues in Sign Language Research. Vol. 1: Linguistics.
Chicago: University of Chicago Press, 191⫺210.
MacSweeney, Mairead/Woll, Bencie/Campbell, Ruth/Calvert, Gemma A./McGuire, Philip K./Da-
vid, Anthony S./Simmons, Andrew/Brammer, Michael J.
2002 Neural Correlates of British Sign Language Processing: Specific Regions for Topo-
graphic Language? In: Journal of Cognitive Neuroscience 14(7), 1064⫺1075.
Mather, Susan/Winston, Elizabeth A.
1998 Spatial Mapping and Involvement in ASL Storytelling. In: Lucas, Ceil (ed.), Pinky
Extension and Eye Gaze: Language Use in Deaf Communities. Washington, DC: Gallau-
det University Press, 183⫺210.
McDonald, Betsy
1982 Aspects of the American Sign Language Predicate System. PhD Dissertation, State Uni-
versity of New York, Buffalo.
McNeill, David
1992 Hand and Mind: What Gestures Reveal About the Mind. Chicago: University of Chic-
ago Press.
Meier, Richard P.
2002 Why Different, Why the Same? Explaining Effects and Non-effects of Modality Upon
Linguistic Structure in Sign and Speech. In: Meier, Richard P./Cormier, Kearsy/Quinto-
Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cam-
bridge: Cambridge University Press, 1⫺25.
Meir, Irit
2002 A Cross-modality Perspective on Verb Agreement. In: Natural Language and Linguistic
Theory 20(2), 413⫺450.
Miller, Chris
1994 Simultaneous Constructions in Quebec Sign Language. In: Brennan, Mary/Turner, Gra-
ham H. (eds.), Word-order Issues in Sign Language. Durham: ISLA, 89⫺112.
Morgan, Gary
1999 Event Packaging in British Sign Language Discourse. In: Winston, Elizabeth (ed.),
Story Telling and Conversation: Discourse in Deaf Communities. Washington, DC: Gal-
laudet University Press, 27⫺58.
Morgan, Gary
2002 Children’s Encoding of Simultaneity in BSL Narratives. In: Sign Language & Linguis-
tics 5(2), 131⫺165.
Nyst, Victoria
2007a A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, Uni-
versity of Amsterdam. Utrecht: LOT.
Nyst, Victoria
2007b Simultaneous Constructions in Adamorobe Sign Language (Ghana). In: Vermeerber-
gen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed Lan-
guages: Form and Function. Amsterdam: Benjamins, 127⫺145.
Özyürek, Aslı/Perniss, Pamela
2011 Event Representations in Signed Languages. In: Bohnemeyer, Jürgen/Pederson, Eric
(eds.), Event Representation in Language and Cognition. Cambridge: Cambridge Uni-
versity Press, 84⫺107.
Padden, Carol
1990 The Relation Between Space and Grammar in ASL Verb Morphology. In: Lucas, Ceil
(ed.), Sign Language Research: Theoretical Issues. Washington, DC: Gallaudet Univer-
sity Press, 118⫺132.
Perniss, Pamela
2007a Achieving Spatial Coherence in German Sign Language Narratives: The Use of Classi-
fiers and Perspective. In: Lingua 117(7), 1315⫺1338.
Perniss, Pamela
2007b Locative Functions of Simultaneous Perspective Constructions in German Sign Lan-
guage Narratives. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.),
Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins, 27⫺54.
Perniss, Pamela
2007c Space and Iconicity in German Sign Language (DGS). PhD Dissertation, MPI Series
in Psycholinguistics 45, Radboud University Nijmegen.
Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.)
2007 Visible Variation: Cross-linguistic Studies on Sign Language Structure. Berlin: Mouton
de Gruyter.
Perniss, Pamela/Özyürek, Aslı
2008 Constructing Action and Locating Referents: A Comparison of German and Turkish
Sign Language Narratives. In: Quer, Josep (ed.), Signs of the Time. Selected Papers from
TISLR 8. Hamburg: Signum, 353⫺376.
Poizner, Howard/Klima, Edward S./Bellugi, Ursula
1987 What the Hands Reveal About the Brain. Cambridge, MA: MIT Press.
Pyers, Jennie/Senghas, Ann
2007 Referential Shift in Nicaraguan Sign Language: A Comparison with American Sign
Language. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation:
Cross-linguistic Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 279⫺
302.
19. Use of sign space 431

Sallandre, Marie-Anne/Cuxac, Christian


2002 Iconicity in Sign Language: A Theoretical and Methodological Point of View. In:
Wachsmuth, Ipke/Sowa, Timo (eds.), Gesture-based Communication in Human-com-
puter Interaction. Proceedings of the International Gesture Workshop (GW 2001). Berlin:
Springer, 171⫺180.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Schick, Brenda
1990 Classifier Predicates in American Sign Language. In: International Journal of Sign Lin-
guistics 1, 15⫺40.
Slobin, Dan I./Hoiting, Nini/Kuntze, Marlon/Lindert, Reyna/Weinberg, Amy/Pyers, Jennie/An-
thony, Michelle/Biederman, Yael/Thumann, Helen
2003 A Cognitive/Functional Perspective on the Acquisition of “Classifiers”. In: Emmorey,
Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ:
Lawrence Erlbaum, 271⫺298.
Supalla, Ted R.
1986 The Classifier System in American Sign Language. In: Craig, Colette (ed.), Noun
Classes and Categorization. Amsterdam: Benjamins, 181⫺214.
Talmy, Leonard
2003 The Representation of Spatial Structure in Spoken and Signed Language. In: Emmorey,
Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ:
Lawrence Erlbaum, 169⫺196.
Vermeerbergen, Myriam
2001 Simultane Constructies in de Vlaamse Gebarentaal. In: Beyers, R. (ed.), Handelingen
(Koninklijke Zuid-Nederlandse Maatschappij voor Taal- en Letterkunde en Geschie-
denis). Brussels: LIV, 69⫺81.
Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.)
2007 Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins.
Wilbur, Ronnie B. (ed.)
2006 Investigating Understudied Sign Languages ⫺ Croatian SL and Austrian SL, with com-
parison to American SL. Special Issue of Sign Language & Linguistics 9(1/2).
Winston, Elizabeth A.
1991 Spatial Referencing and Cohesion in an American Sign Language Text. In: Sign Lan-
guage Studies 73, 397⫺410.
Winston, Elizabeth A.
1995 Spatial Mapping in Comparative Discourse Frames. In: Emmorey, Karen/Reilly, Judy
S. (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 87⫺114.
Zeshan, Ulrike (ed.)
2006 Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press.
Zeshan, Ulrike/Perniss, Pamela (eds.)
2008 Possessive and Existential Constructions in Sign Languages. Nijmegen: Ishara Press.
Zwitserlood, Inge
2003 Classifying Hand Configurations in Nederlandse Gebarentaal (Sign Language of the
Netherlands). PhD Dissertation, University of Utrecht. Utrecht: LOT.

Pamela Perniss, London (United Kingdom)


20. Lexical semantics: Semantic fields and lexical aspect
1. Introduction
2. Words and their meanings
3. Semantic relations and semantic fields
4. Aspect and visibility in sign languages
5. Lexical aspectual structures at work
6. Conclusion
7. Literature

Abstract

This chapter discusses three components of lexical semantics in sign languages. First, a
brief discussion of color terms in American Sign Language (ASL) and Hong Kong Sign
Language (HKSL) illustrates different types of form-meaning relationships within a sin-
gle semantic field; color terms may be derived through analogy with objects that are
stereotypically of a certain color, or they may be entirely arbitrary. Second, a comparison
of terms for siblings in ASL, HKSL, English, and Cantonese demonstrates different
ways in which similar conceptual distinctions are packaged into lexical kinship items.
Third, using verbs from different categories from HKSL, this chapter discusses lexical
aspect, which, along with event structure, is one of the components of situation aspect.
The basic lexical distinction between aspectually homogeneous and heterogeneous verbs
is often apparent or visible in the forms of sign language verbs, and this offers insights
into the relationships between event structure and lexical semantics, including the distinc-
tion between grammatical scales or paths and manner. The situation aspect of a predicate
is compositional; this analysis can thus be applied to predicates made up of one or many
lexical elements, as well as to classifier predicate constructions.

1. Introduction

Lexical semantics is the field of linguistics that is concerned with the meanings of
lexical items or words, and how they mean what they mean (Pustejovsky 1995). This
includes the idiosyncratic conceptual meanings associated with individual words, called
lexical roots, and the grammatically relevant properties that determine and constrain
how a word behaves within a given language. It is sometimes useful to think of mental
lexicons as something like dictionaries, but lexical semantics is a very different sort of
enterprise from developing the descriptions of words, including their forms, meanings,
and some of their uses, that go into creating dictionaries. It is not in fact a major goal
of lexical semantics to create detailed descriptions of all the words in a particular
language. Like other fields of linguistic inquiry, in addition to providing valid descrip-
tions, lexical semantics is concerned with making generalizations that offer explana-
tions of how meanings are organized within words, the relationships in terms of mean-
ings among words and groups of words, how word meanings change and evolve over
time, and how word meanings are able to change and shift in different contexts.
This chapter presents three discussions of aspects of lexical semantics, preceded by
a discussion of words and their meanings. The first two involve the notion of semantic
fields, or sets of conceptually related words. By comparing a subset of basic color terms
from Hong Kong Sign Language (HKSL) and American Sign Language (ASL), we
will see examples of completely arbitrary form-meaning mappings, as well as more
iconic or metaphorical form-meaning mappings, working together in the same word
classes (section 3.1). This is followed in section 3.2 by a comparison of a small subset
of kinship terms in HKSL and ASL, as well as Cantonese and English, illustrating
examples of how similar meanings can be lexically packaged in different ways, and
how the ways in which meanings are packaged can reflect cultural differences. These
discussions of basic color and kinship terms show sign languages behaving just like
other languages in terms of their lexical semantics. In sections 4 and 5, I present evi-
dence of differences between sign languages and many spoken languages. Specifically,
in sign languages, lexical aspect, the lexically specified temporal contour of a verb, is
often visible or iconic in the surface form, as is the situation aspect, or temporal contour
of predicates. Thus, the forms of verbs and predicates in sign languages, and not just
their grammatical behaviors, are informative for analyses of aspectual meanings in
human languages more broadly.

2. Words and their meanings

The unit of language that lexical semantics is primarily concerned with is the word,
but defining the notion of word in a cross-linguistically valid way and determining what
a possible word can be, can be a challenge and any definition is necessarily ‘fuzzy’
around the edges. Words have phonological forms, but well-formed prosodic words
vary in terms of size and complexity within a language, as well as cross-linguistically.
Although all lexical items have some phonological form, it is not useful to define
possible lexical items in purely phonological terms (Hohenberger 2008). The notion of
a word is also distinct from the notion of a morpheme. Words are composed of at least
one morpheme, but a word may also consist of more than one morpheme. Idioms,
which function in many ways like words, do not ‘look like’ stereotypical words, and
may be phrasal in length. These questions arise when defining what a word can be in
spoken languages. In sign languages, with multiple articulators working in concert,
there are other challenges as well (Hohenberger 2008).
For the purposes of this chapter, and accepting some ‘fuzziness’ in terms of the
definition, I assume that to be a word, or lexeme, a candidate must have the following
characteristics: First, it must represent a possible phonologically well-formed (for a
particular language) stand-alone unit, with semantic and pragmatic content. This crite-
rion excludes bound morphemes, but includes single free morphemes, compounds, and
other complex morphemes. Second, it must have a conventionalized form and, allowing
for different senses in different contexts, a conventionalized meaning (Hohenberger
2008).
The second criterion is necessary for distinguishing lexemes from structures like
classifier predicate constructions (CLP) in sign languages. CLP fit the first criterion,
and have been treated like spatial verbs in some accounts, but they are different from
lexical words in important ways, and we need a definition of word that excludes them.
However the constituent parts of CLP are analyzed, they do not meet both criteria
(see chapter 8, Classifiers, for details), but an appropriate analysis of situation aspect
will allow lexical verbs and CLP to be treated together at the predicate level.
This definition of words does not address the notion of what a sign in a sign lan-
guage is, or how signs correspond to words. All of the forms that meet the criteria
above for words are also signs, but there are forms, such as CLP, that are called signs,
but that are not words. I will only use the term ‘sign’ here when referring to words,
but this does not imply that all signs are words in this technical sense.
Assuming a workable definition of words, it is possible to address some of the issues
and challenges that are important for lexical semantic theories, including developing
useful conceptualizations of the lexicon that are different from a traditional mental
dictionary approach. There are written records of people exploring word meanings and
attempting to group words into categories going back to the ancient Greeks, and in
linguistics, as in almost every other field of inquiry, there is a tendency to collect and
classify. Until fairly recently, there was an assumption that words had relatively stable
meanings, and could be classified into clear categories based on their conceptual mean-
ings and shared grammatical functions. Research has shown that conceptual meanings
rarely correspond to clear grammatical categories (Levin 1993; Levin/Rappaport
Hovav 2005), but this approach allows for some important generalizations. For exam-
ple, it is useful to say that eat and break in HKSL, which are discussed below, are
members of the same broad word class. They both denote actions and behave in some
similar ways. We call this category ‘verbs’ because both signs have similar grammatical
functions and meanings to classes identified as verbs in other languages. It is also useful
to say that eat and food have meanings that are conceptually related, being members
of the same semantic field, even though their grammatical behaviors are very different
(see chapter 5 on word classes).
However words are grouped together, and however broad or narrow the groupings
are, it is tempting to see lexical semantics as a process of determining how many cat-
egories of words there are within a language, and then placing each word in its proper
category. Different systems for categorizing words based on their grammatical behavior
and their conceptual meanings are required, but debates about labeling and particulars
about classification aside, the goals of such an enterprise would be very straightfor-
ward. Eventually, after enough careful work, all the words in a given language would
be listed and categorized, but this would not in fact result in a complete description of
the lexical semantics of, say, ASL or HKSL, or English or Cantonese.
Detailed linguistic descriptions like this are certainly useful, but there are several
reasons not to approach lexical semantics in this way. Word meaning is much more
flexible. Words can have different senses and different grammatical behaviors in differ-
ent contexts (polysemy), and the meanings of words can be metaphorically extended
far beyond the ‘original’ meaning of the word. These facts are hard to account for
even in the most detailed and complex of categorization schemes (Pustejovsky 1995).
Furthermore, languages package information into lexical items in different ways, so
that the nearest equivalent words in two languages may not actually mean the same
thing. These are difficult challenges if a goal of lexical semantics is to make generaliza-
tions about human language and not just about particular languages.
An alternative conceptualization to the dictionary metaphor for lexical semantics is
to think of meaning, in a broad sense including declarative, conceptual, encyclopedic,
and experiential as well as grammatical meaning, in a highly interconnected network
of meanings that are associated with each other to greater or lesser degrees. Instead
of being a self-contained entry in a dictionary, a word is an index that represents a set
of connected and associated meanings. From this perspective, the words eat and food
are indices that share some related conceptual meanings, but not grammatical mean-
ings. Likewise, the words eat and break share some grammatical meanings, but not
closely related conceptual meanings. Connectionist approaches to meaning are not
new, and they are faced with some difficult theoretical challenges; they are very fuzzy,
for example, making meanings very difficult to represent. However, as a way of think-
ing about lexical semantics, and notions such as synonymy, iconicity, and metaphor,
thinking about meaning in terms of connections and associations can be very useful.

3. Semantic relations and semantic fields

There are some important semantic relationships that are essential to any lexical se-
mantic analysis. Words with the same or similar meanings are referred to as synonyms,
and words with related but opposite meanings are termed antonyms. These relation-
ships are the basis for thesauruses, but meanings are not distributed evenly throughout
a lexicon. An individual word may have many, few, or no synonyms or antonyms, and
how similar the meanings of two words are depends on the particular senses of each
word. In ASL, the signs good and best have similar positive meanings, differing in
degree. hot and cold have related meanings, but they are antonyms, since they refer
to opposite sides of the same relative continuum, as do rich and poor and beautiful
and ugly. When meanings involve relative locations on a continuum, finding opposites
or words with similar meanings is easy.
It is much harder to find synonyms and antonyms for words like apple and cat,
because the semantic relationships between these words and words like orange and
dog, or fruit and animal are organized differently. Words vary in terms of how general
or how specific their meanings are. apple, being relatively specific, shares some mean-
ing with its hypernym, the more general super-ordinate term fruit, but they are not
synonyms. Within the larger category of all fruit, apple, orange, and banana are alter-
natives to each other or hyponyms, but not antonyms.
Even for words at the same level of specificity, there are no ‘true’ synonyms, or
words that mean exactly the same thing. Due to its history, English has a very large
number of synonymous words, some of which are native (e.g. ‘folk’ and ‘freedom’),
while others have come into the language from outside, particularly from French (e.g.
‘people’ and ‘liberty’). ASL has a very different history, but also has a range of syno-
nyms, with native signs sharing similar meanings with forms borrowed from English.
Examples of these native/non-native pairs include the sign cake and the fingerspelled
form #cake, and the many forms of the sign glossed pizza (Lucas/Bayley/Valli 2002).
As with any pair of synonyms, the two terms vary in terms of register, style, and region,
as well as other subtle cultural and social meanings, so it is usually only possible to say
that two synonyms have similar, but not identical meanings.
If we conceive of lexical meaning in terms of networks, rather than as an elaborate
mental dictionary, similarities in meanings between words are attributed to the degree
of overlap between the networks of meanings represented by the two words. The more
the two networks overlap the more synonymous the two words are. Antonyms overlap
as well, but differ from each other in specific ways, such as relative position along a
particular continuum. Specific words share general meanings with super-ordinate
terms, and thus with all the other specific words within the category. Their specific
unshared meanings are what distinguish them from other members of the category and
make them unique. These notions of shared but distinct, and related meanings are
important for dealing with color and kinship terms, which are discussed below.

3.1. Naming colors

Terms for basic colors are of interest for language typological research as well as lexical
semantics because they represent a semantically cohesive and restricted domain that
is grounded in the human visual system. Although the human eye is able to perceive
an enormous range of colors in different shades and hues, these colors can be classified
into a small inventory of basic categories. ‘White light’ can be separated using prisms
or water droplets, producing rainbows with distinct bands of red, orange, yellow, green,
blue, and violet colors which, because of their different wavelengths, always appear in
this order. Together with pink, brown, gray, and, of course, black and white, these colors
make up the inventory of eleven basic color categories identified by Berlin and Kay
(1969). Some languages may make slightly more basic distinctions, but there is wide
agreement across languages about how colors should be grouped together. For exam-
ple, whatever they are called, red and yellow are always considered closer to each other
than either is to blue (Dowman 2007). Terms for colors are of interest to us here
because they illustrate different strategies that are available for languages to associate
forms and meanings, offering clues about how lexical semantic systems work.
To compare different strategies for labeling colors, researchers use a notion of basic
color term, which refers to general color terms with a high frequency of use that are
not borrowed, and which are used only to refer to a color and not also to an object
that is typically of a certain color. Thus, ‘red’ is a basic color term in English, while
the more specific term ‘crimson’ is not, nor is ‘violet’ borrowed from French, the low
frequency term ‘turquoise’, or the term ‘orange’ which refers to both a color and a
fruit. Cross-linguistic research has revealed that basic color terms follow a robust pat-
tern. Basic terms for the colors black and white are apparently universal in all lan-
guages. If a language has another basic color term, it is the one for red. If a language
has additional basic color terms, they are for yellow and green, followed by blue and
brown, and finally purple, pink, orange, and grey. This hierarchy of basic color terms
is illustrated in Figure 20.1.

Fig. 20.1: Hierarchical ordering of basic color terms as proposed by Berlin/Kay (1969)

There are several ways for a language to refer to those basic colors for which it
lacks basic terms, but with this hierarchy, it is possible to compare different languages
in a very systematic way. Why there should be such a hierarchy, and what possible
insights sign languages may reveal about this hierarchy is beyond the scope of this
chapter (the reader is referred to Berlin/Kay 1969; Kay/McDaniel 1978; Dowman 2007;
and Robertson et al. 2005 for discussions of basic color terms, and to Hollman/Sutrop
2010 for a recent discussion of color terms in sign language). This discussion will focus
on different labeling strategies for basic colors in HKSL and ASL and what they illus-
trate about lexical semantics.
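Read as an implicational universal, the hierarchy says that a basic term at any stage implies basic terms at every earlier stage. The sketch below encodes this reading; the stage groupings follow the prose above (the original Berlin/Kay staging is somewhat finer), and the sample inventories are hypothetical.

    # Stages of the basic color term hierarchy, grouped as in the text.
    STAGES = [
        {"black", "white"},
        {"red"},
        {"yellow", "green"},
        {"blue", "brown"},
        {"purple", "pink", "orange", "grey"},
    ]

    def consistent(inventory):
        """True if every stage below the highest attested one is complete."""
        attested = [i for i, stage in enumerate(STAGES) if stage & inventory]
        if not attested:
            return True
        return all(stage <= inventory for stage in STAGES[:max(attested)])

    print(consistent({"black", "white", "red"}))   # True
    print(consistent({"black", "white", "blue"}))  # False: skips red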

3.2. Color terms in HKSL and ASL

Woodward (1989) provided a comparison of inventories of basic color terms in ten
sign languages, including ASL and HKSL. ASL has basic color terms for only black/
white (I) and red (II), with all other colors labeled using terms other than native basic
color terms. In contrast, HKSL has basic color terms for categories I to V in Figure
20.1. Compared to the ASL terms, both the native and non-native color terms in HKSL
are idiosyncratic; each has a different movement, place of articulation (POA), and
handshape. Most of these terms are mono-morphemic, except the non-native term for
pink, which is a compound of powder and red, borrowed from Chinese (Tang 2007).
The term purple has a distinct handshape (^), but shares movement and POA, a
horizontal movement under the lower lip, with red (F), and thus is not treated as a
basic color term in Woodward (1989). The sign red in HKSL is otherwise similar to
the sign blood, with the latter distinguished through mouthing (Tang 2007), but the
mouthing is an indication that the form blood is a metaphorical extension from red,
rather than the other way around, so red is safely treated as a basic color term.
HKSL has two signs referring to the color green, both of which involve a trilled
alternating movement of the first and second fingers. In the first form of green, the
hand is held with the palm oriented downwards. In the second alternative form, the
hand is held with the palm and fingers pointed upwards. The sign grass (grass having
a stereotypical green color) is a two-handed form otherwise similar to the second form
for green, with the addition of side-to-side path movements along the horizontal plane:

(1) grass ‘grass’ [HKSL]
Presumably, these signs for green evolved historically from the sign grass, based on
their shared phonological features and associated meanings, with the second sign green
preserving a closer metaphorical relationship. The differences between grass and the
two color signs imply that they are basic color terms. This is different from the term
orange in ASL, where a single sign is used to refer to both a color as well as a fruit.
Interestingly, HKSL has two signs glossed orange, one of which refers to the color
orange, and another that has clearly been developed from a classifier form for slicing
a round object into quarters referring to the fruit (Tang 2007).
Basic colors in ASL are labeled using three different methods. There are dedicated
native signs for black, white, and red (I and II in Fig. 20.1), which historically have
evolved from metaphorical extensions, although these relationships have become
opaque to some extent (Woodward 1989). The remaining colors in the inventory are
labeled using non-native forms, and initialized forms in particular. The sign for green
uses the g-handshape (H), yellow uses the y-handshape (d), blue and brown use
the b-handshape (k), and the signs for purple and pink use the p-handshape (c).
Furthermore, the signs blue, yellow, purple, and green are articulated in the same
general location in the signing space in front of the signer, indicating that these non-
native forms were adopted into the language as a group. As initialized forms, these
color terms have a distinctive, non-arbitrary form-meaning mapping. For
example, rather than referring to objects that are typically colored blue or green, the
ASL signs blue and green reflect the shapes of the written English words ‘blue’ and
‘green’. This sort of form-meaning relationship is different from the one we find with
the sign orange, which again refers to both the color and the fruit and is not an
initialized sign.
This comparison of some of the terms for basic colors in ASL and HKSL serves to
illustrate an important fact about lexical semantics: different methods for associating
forms and meanings work equally well, and a language can make use of different
methods even within a single semantic domain. There is no functional difference be-
tween using a native basic color term like blue in HKSL, and the non-native initialized
term in ASL; both forms refer to the same color blue. Native and non-native color
terms, arbitrary terms, and terms with metaphorical relationships with objects can all
function side by side with one another in the same inventory of color terms. This
dissociation between form and meaning allows languages to work as they do; any label-
ing method will work perfectly well, and gaps are easily filled in. These dissociations
are clearest when we look at small objective semantic domains like color terms. In
other domains like the kinship terms discussed in the next section, culture plays a
larger role.

3.3. Kinship terms: Relations and meanings

Systems of kinship are one of the primary ways in which societies organize themselves.
They are diverse and complex and include genealogical, cultural (including marital),
and historical relationships, as well as biological relationships. Different cultures find
it useful to elaborate these systems to different degrees, and to maintain and navigate
through these systems, the relevant relationships are labeled using kinship terms. There
is huge variation in the inventories of kinship terms found in spoken languages in
cultures around the world, and the same is true for Deaf cultures and communities.
Kinship terms have been studied in sign languages from a wide range of social and
cultural contexts. Some of these include relatively small and isolated communities of
Deaf and hearing members integrated into a single society, where a sign language is
widely used (see chapter 24, Shared Sign Languages, for discussion). Examples of such
sign languages include Providence Island Sign Language (PISL) from the Caribbean
(Woodward 1978) and the Adamorobe Sign Language (AdaSL) of Ghana (Nyst 2007;
Woodward 1978). Kinship terms have also been studied in sign languages from rural
and urban areas in both the developing and the developed world, where sign languages
are not widely used outside of the Deaf community and where Deaf people may be
socially and economically isolated to different degrees. Examples of these studies in-
clude Japanese Sign Language (NS) (Peng 1974), Argentine Sign Language (LSA)
(Massone/Johnson 1991), and HKSL and ASL (Woodward 1978; and others).
Woodward (1978) represents an extensive early study of native kinship terms for
consanguineous (blood) relationships in 20 sign languages, including historical sign
languages and living sign languages from around the world. All of the languages in
this study had a native kinship term for offspring, but in none of the languages were
offspring distinguished by sex. With a few exceptions, all of these languages had terms
for mother and father, but 12 of these languages lacked native terms for grandfather
or grandmother. Only a small minority of these languages had native terms for uncle
and aunt or cousin, and three lacked native terms for sibling relationships. Of those
that had sibling relationship terms, three distinguished older and younger siblings with
native terms, while two distinguished siblings in terms of birth order as well as gender
(Taiwan and Japanese Sign Languages). Nine sign languages had a general term for
sibling, and the remaining six languages distinguished brother and sister relationships.
As with color terms, the lack of a native term for a certain relationship only indi-
cates that the language uses some other method for labeling the kin relationship, not
that such relationships are unknown or unimportant to users of the language. The signs
uncle and aunt in ASL are non-native, for example. Still, the inventories of native
kinship terms offer the same clues regarding social structures in Deaf communities
using sign languages that they offer for communities using spoken languages. It has
been suggested, for example, that in very small close-knit communities, like those of
PISL and AdaSL, there is no motivation to develop an elaborated inventory of kinship
terms, since the members of the kinship system are more easily referred to individually
(Woodward 1978). For Deaf communities in urban areas, such as the one using LSA,
it has been suggested that relationships within Deaf communities are more important
than relationships with hearing relatives (Massone/Johnson 1991) and that therefore,
there is little motivation to develop elaborated inventories of consanguineous kinship
terms. These sociolinguistic issues are beyond our scope here, but in the next section, we
will see some evidence of culture influencing language in the domain of kinship terms.

3.4. Sibling terms in HKSL and ASL


In this section, we look at a very small subset of consanguineous kin relationships,
specifically terms for siblings, in ASL and HKSL and compare them to sibling terms
in English and Cantonese. Even with this small set of terms, the meanings that are
commonly used to distinguish different sorts of siblings, such as relative age or birth
order (older/younger) and sex (male/female), can be packaged together with the sibling
term in different ways. The most important distinction within a particular kinship
system is packaged in a mono-morphemic kinship term together with the sibling
meaning. The remaining meaning, either relative age or sex, is expressed as a modifier
of the kin term, or through a non-manual, namely mouthing. In this way,
these sibling terms function like grammatical phi-features of Person, Number, and
Gender in pronouns, which are also packaged together in different ways in different
languages.
Traditionally, Chinese cultures have relatively elaborated systems of kinship terms.
In the domain of siblings, these terms include birth order and male/female distinctions,
with distinct basic terms for each of the four possible combinations, see (2) (the num-
bers in the Cantonese glosses indicate lexical tones).

(2) Cantonese sibling labeling system:
a. go1-go1 ‘older male sibling’
b. je2-je2 ‘older female sibling’
c. dai6-dai6 ‘younger male sibling’
d. mui6-mui6 ‘younger female sibling’

The terms for siblings in HKSL encode similar meanings as single manual forms with
associated mouthings. This is not surprising, since users of Cantonese and HKSL share
the same kinship system. These forms are composed of obligatory mouthings encoding
gender distinctions together with the sign for elder, formed with the middle finger
({) making contact with the chin, or younger, formed with the little finger (N) making
contact with the chin (3). The mouthings are thus used to distinguish male from female
siblings in otherwise similar basic manual forms (Tang 2007). They are presumably a
relatively recent innovation as they do not appear in the data from older studies
(Woodward 1978). There is also a series of synonymous forms for these sibling relation-
ships made up of compounds, whose first elements are the same as those below, fol-
lowed by the sign for either boy or girl (Tang 2007).

(3) HKSL:
a. elder-brother ‘older brother’
b. elder-sister ‘older sister’
c. younger-brother ‘younger brother’
d. younger-sister ‘younger sister’
ASL:
e. older/younger brother ‘older/younger brother’
f. older/younger sister ‘older/younger sister’

In the less elaborated ASL system, the basic sibling terms distinguish between male
and female siblings, but not relative birth order. Sex distinctions are indicated through
POA, with the sign brother starting relatively higher on the head, like many other
male kin terms (father, uncle), and the sign sister starting lower on the face, like
many other female kin terms (mother, niece). Relative birth order is expressed with
modifiers like older and younger. The English sibling term inventory is similar to that
of ASL, as indicated in the English glosses of (3e–f).
The similar inventories of sibling terms reflect the fact that the users of HKSL and
Cantonese share a kinship system in Hong Kong, as do the users of ASL and English
in North America. These four languages are able to make the same distinctions in
terms of birth order and gender among siblings, but they do so in different ways. In
ASL and English, the basic sibling term includes only the gender distinction, with birth
order encoded by a modifier. In contrast, the basic terms in HKSL and Cantonese
make birth order distinctions as well as gender distinctions.
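The packaging difference can be summarized as which distinguishing features each language builds into its basic sibling terms, with everything else left to modifiers (or, in HKSL, carried by the obligatory mouthing within the basic term). The sketch below is a schematic restatement of (2) and (3), nothing more.

    # Features packaged into basic sibling terms, per (2) and (3).
    BASIC_TERM_FEATURES = {
        "Cantonese": {"sibling", "sex", "birth-order"},
        "HKSL":      {"sibling", "sex", "birth-order"},
        "English":   {"sibling", "sex"},
        "ASL":       {"sibling", "sex"},
    }

    def needs_modifier(language, feature):
        """A feature outside the basic term must be added by a modifier."""
        return feature not in BASIC_TERM_FEATURES[language]

    print(needs_modifier("ASL", "birth-order"))   # True: 'older brother'
    print(needs_modifier("HKSL", "birth-order"))  # False: elder-brother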
If how meanings are packaged in words is an indication of how closely associated
those meanings are within a kinship system, then this comparison of sibling terms
amongst these four languages indicates a relatively greater cultural importance placed
on birth order in the kinship system shared by users of Cantonese and HKSL, com-
pared with those of ASL and English. While American culture includes a notion of
respect for elders, Chinese culture, and in particular Confucian philosophy, emphasizes
and formalizes respect for elders to a greater degree. Birth order and relative age are
also important in other Asian sign languages. Woodward’s (1978)
sampling of kinship terms in 20 sign languages indicates that birth order is encoded in
the basic sibling terms in Indian, Malaysian, Japanese, and Taiwanese Sign Languages,
as well as HKSL, but in none of the sign languages from outside Asia in the sample.
The extent to which language influences culture, or culture influences language is a
difficult question, and it is partially a lexical semantic question. In the small set of
kinship terms discussed here, culture seems to have some influence on how closely
meanings are packaged together, at least in this restricted domain. In other ways, what
we have seen from these brief discussions of color and kinship terms in ASL and
HKSL is that these languages behave like spoken languages in terms of lexical seman-
tics. What we will see in the remainder of this chapter is that in some domains, the
relationships between the meanings and forms of words are quite different from the
arbitrary form-meaning relationships that tend to be found in spoken languages. This
makes evidence from sign languages especially interesting for lexical semantic analyses.

4. Aspect and visibility in sign languages


Iconicity refers to non-arbitrary relationships between a linguistic form and its mean-
ing. Traditionally, linguists have assumed that form-meaning relationships were over-
whelmingly arbitrary, with perhaps a few exceptions such as onomatopoeia like ‘knock’
and ‘bang’ and words for barnyard animal sounds like ‘quack’ and ‘oink’. From this
perspective, the apparently obvious iconicity in sign languages was interpreted as evi-
dence that they were somehow not ‘true’ languages, forcing pioneering sign linguists
to work hard to demonstrate the contrary (Klima/Bellugi 1979; and many others). With
more data and research, the conventional wisdom regarding iconicity in both signed
and spoken languages has changed, and instances of different types of iconicity have
been found in languages of all types (Perniss/Thompson/Vigliocco 2010; and many
others; see chapter 18, Iconicity and Metaphor, for details).
Iconicity raises some very interesting issues for lexical semantics. If meaning can be
represented in a form, it is important to determine which kinds of meanings are repre-
sented and how. There are numerous examples of lexical iconicity in sign languages,
showing the creative process of using metaphor to stretch word meanings to build and
expand lexicons. This lexical iconicity also shows the aggregate and adaptive nature of
natural language lexicons; rather than being formulated logically and consistently, lexi-
cons and their words evolve as needed. This creates iconic forms whose meanings seem
clear, once they are known, but which are not decomposable into morphemes with
predictable meanings. Lexically iconic forms whose conceptual meanings are reflected
in their forms are idiosyncratic. The ASL sign deer, which represents antlers, the sign
ape, which represents the stereotyped chest-beating gorilla behavior, and the sign
sheep, which represents the shearing of wool, are each iconic, but they are iconic in
different ways. Signs with similar meanings in different languages may have very differ-
ent forms and yet be equally iconic.
Despite this idiosyncrasy, some generalizations have been made within at least some
lexical domains. Meir et al. (2007), for example, have argued for an
iconic relationship between the places of articulation (POA) on the body in body-
anchored verbs like eat and part of their conceptual meaning. What they have found
is that if the meaning of a body-anchored verb can be associated with a body, or part
of the body, then that sign will tend to be articulated on the body. The sign eat, in all
the sign languages that they looked at, was articulated on the mouth, for example. We
will see additional examples of this sort of conceptual iconicity below.
Spoken languages are limited by their modality in the ways in which linguistic forms
can reflect their meanings, but in addition to onomatopoeia, spoken languages can
organize the elements within an utterance in such a way that the linear sequencing of
elements corresponds in some way to the temporal order, cause-and-effect relation-
ships, and even spatial relationships in the events and states-of-affairs that they denote.
This specific sort of iconicity is referred to as diagrammatic iconicity (Newmeyer 1992;
and others). The visibility of aspectual meanings discussed in the remainder of this
chapter represents an instance of this kind of iconicity.

4.1. The visibility of event structure and aspect

Sign languages reveal so much about lexical aspect because of the diversity in their
predicate types. Rather than having all their verbs behave in similar grammatical ways,
sign languages have distinct sorts of predicates built around three different types of
verbs (Padden 1998; and others), as well as non-lexical classifier predicates (CLP). Each type of predicate
behaves in different and informative ways. When these different predicates are decom-
posed into their constituent parts, and these parts are associated with components of
the predicate’s temporal contour, or situation aspect (Lee 2001; Levin/Rappaport
Hovav 2005; Rathmann 2005; Smith 1997), we see that both the event structure of the
predicate and lexical aspect are visible on the surface. The distinctions that are impor-
tant for situation aspect include contrasts between static and dynamic stages of predi-
cates and transitions between different stages. These distinctions produce a very small
inventory of situation aspects, as we will see shortly. This visibility provides important
information about the distinctions between event structure and lexical aspect, and how
the compositional situation aspect of the predicate is built up. Situation aspect can be
relatively visible in spoken languages as well, for example, in Russian and other Slavic
languages (Borer 2005; Ramchand 2008; and others), but in general, situation aspect
is much more opaque on the surface in the better-studied spoken languages. This
makes verbs and predicates in sign languages particularly useful for lexical semantic
analyses. The visibility of situation aspect is illustrated in the following examples ex-
tracted from short narratives produced by native signers of HKSL (constituents of the
utterance that are not illustrated in pictures are presented within brackets in the
glosses):

(4) a.–b. break … [HKSL]
‘[The wooden fence] broke …’

(5) a.–b. search, c. good [HKSL]
(bird) find[=search^good] (food) good
‘The bird goes and finds food …’

Both of these predicates denote events with specified endpoints, or telic events, but
they indicate their endpoints in different ways. In (4), the single verb break denotes a
change of state. In (5), the compound find, composed of search denoting a dynamic
stage (5a–b) and good (5c) denoting successful completion, also represents a telic
event. In this example, the completion of the event is emphasized with an additional
stand-alone form, namely the sign good following the Object food. The predicate in
(5) is decomposable into individual morphemes, each of which represents a stage in
the event, producing an event structure that is visible in the surface morphosyntax. The
meaning break is certainly iconically represented in the form of break, but although the
verb is composed of two ‘sub-lexical’ parts, illustrated in (4a) and (4b), these elements
are not morphemes. Hence, the situation aspects of these telic predicates are both
visible, but they are visible through different mechanisms. In (5), the visibility is repre-
sented through the morphosyntax, with each morpheme making a contribution. In (4),
the visible lexical aspect of break indicates the situation aspect of the verb/predicate.
This proposal, that the aspectual structures are visible in the forms of predicates in
sign languages, is adapted from the Event Visibility Hypothesis (EVH), proposed and
developed by Wilbur (2003, 2008; Grose/Wilbur/Schalber 2007), as well as analyses of
situation aspect proposed for HKSL by Lee (2001) and for ASL by Rathmann (2005).
To account for this specific sort of iconicity in sign languages, both a formal treatment
of the underlying aspectual structures across all predicate types, and a formal account
of the phonology of sign languages are necessary, in order to establish a systematic
relationship between meanings and their surface forms.
To address phonology, following previous EVH proposals, this analysis assumes the
Prosodic Model of sign language phonology (Brentari 1998; see chapter 2, Phonology,
for details). According to the EVH, in sign languages, aspectual meanings are reflected
in the movements of surface forms. Specifically, telicity is reflected in single changes
of orientation (4), single changes of handshape (5), or single movements to contact
with either the body or a phonological plane. Atelic events are associated with tracing
movements along a plane and repeated and trilled movements (Wilbur 2003). These
generalizations are intended to apply to all predicate types in sign languages, excluding
initialized and fingerspelled forms. They also seem to apply equally well to predicates
composed of single verbs, like (4), predicates with multiple lexical constituents like (5),
and to CLP, which lack stand-alone lexical verbs.
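These generalizations amount to a mapping from aspectual values to classes of movement. The summary below is a schematic restatement of the EVH correlates just listed; as discussed in section 5.1 below, the mapping runs from meaning to form only and cannot be inverted reliably for individual lexical items.

    # EVH correlates (after Wilbur 2003), schematically: which movement
    # types realize which aspectual values.  Meaning -> form only.
    EVH_FORMS = {
        "telic":  ["single change of orientation",
                   "single change of handshape",
                   "single movement to contact (body or plane)"],
        "atelic": ["tracing movement along a plane",
                   "repeated movement",
                   "trilled movement"],
    }

The examples in (4) and (5) above instantiate the first two telic correlates.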
This proposal assumes an inventory of five basic situation aspects: states (‘she likes/
wonders …’), activities (‘… ran for an hour’), and semelfactives (‘… flapped once’), all
three of which are atelic, and achievements (‘broke the fence’) and accomplishments
(‘read the book’), both of which are telic (Lee 2001; Levin/Rappaport Hovav 2005;
Rathmann 2005; Smith 1997). Telicity is a property of whole predicates, not individual
verbs (Tenny/Pustejovsky 2000; and many others), so it is important to make a distinc-
tion between the event structures of predicates, where distinctions are made between
static and eventive situations (‘she likes cheese’ vs. ‘she is eating cheese’) and between
atelic and telic events (‘she ate cheese’ vs. ‘she ate a slice of cheese’), and lexical aspect.
Lexical aspect represents the aspectual contributions of verbs and other constituents
to the compositional situation aspect of the predicate. It is at the lexical level that
distinctions between subtypes of atelic and telic situations are made, for example, be-
tween activities and semelfactives, or achievements and accomplishments.
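Combining these levels, the five situation aspects can be cross-classified by three binary distinctions drawn from the discussion above: static vs. dynamic, atelic vs. telic, and point vs. incremental scale (abbreviated here as punctuality). The matrix below is a compact restatement, not an addition to the analysis.

    # The five basic situation aspects, cross-classified by the
    # distinctions discussed in the text.
    SITUATION_ASPECTS = {
        "state":          dict(dynamic=False, telic=False, punctual=False),
        "activity":       dict(dynamic=True,  telic=False, punctual=False),
        "semelfactive":   dict(dynamic=True,  telic=False, punctual=True),
        "accomplishment": dict(dynamic=True,  telic=True,  punctual=False),
        "achievement":    dict(dynamic=True,  telic=True,  punctual=True),
    }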
Event structures in this account, adapting the analysis of Pustejovsky (1991, 1995),
are decomposed into static (s) and eventive (e) stages, termed subevents. Telicity is
represented as a transition between an e subevent and a final s, the static stage specified
by the predicate (e→s). It is these transitions within event structures that the EVH
associates with the phonological characteristics discussed above, such as a single change
of orientation in a verb like break in (4). For clarity, I use the terms static and eventive
as descriptions of predicates and subevents; the terms stative and dynamic are used to
describe verbs.
By decomposing event structures into subevents, the distinctions between static and
eventive situations and between atelic and telic events can be represented with an
inventory of only three basic templates. We can set aside the issues of the termination
of atelic events and causation, but these templates represent inceptive as well as telic
transitions. Simple static situations are composed of only a single subevent, represented
[s]. Events are a subset of situation types, and are represented with an inceptive transi-
tion from an initial static stage to an eventive stage, [s→(e)]. Inceptive transitions are
often un- or underspecified, and they may not be visible in the phonological forms of
sign language predicates. All atelic events share the structure [s→(e)]; the distinctions
between semelfactives and activities are made at the lexical level. Telic predicates spec-
ify their own endpoints, represented as a transition to a final static stage, [s→(e→s)].
These structures composed of subevents are treated as basic abstract templates or
constructions, which lack any lexical semantic content. They are provided with lexical
semantic content and conceptual meaning when overt constituents of a predicate iden-
tify their subevents (Ramchand 2008; Pustejovsky 1995). A predicate’s underlying
event structure can be relatively transparent or visible on the surface when each of its
subevents is identified individually. This is the case in (5) above, where the two compo-
nents of the compound find each identify one of the subevents in a telic transition.
Single verbs may also identify multiple subevents at once, in a one-to-many mapping,
resulting in an event structure that is relatively opaque in the morphosyntax of the
predicate. This is the case in (4), where the two subevents in a telic transition are
identified by the single verb break. The lexical semantics of a verb determines how it
will behave relative to an event structure, as we see in the next section.
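As a rough formal rendering of the templates and of subevent identification, consider the sketch below. Representing identification as a simple mapping is a simplification of my own; it is only meant to show the one-to-many mapping in (4) against the one-to-one mapping in (5).

    # The three basic event templates built from static (s) and
    # eventive (e) subevents.
    TEMPLATES = {
        "static": ["s"],             # simple states
        "atelic": ["s", "e"],        # inceptive transition s -> (e)
        "telic":  ["s", "e", "s"],   # s -> (e -> s); final s = endpoint
    }

    # (4): the single verb BREAK identifies both the e subevent and the
    # final s, so the event structure is opaque in the morphosyntax.
    break_telic = {"e": "break", "final_s": "break"}

    # (5): the compound FIND identifies each subevent with a separate
    # morpheme, so the event structure is visible on the surface.
    find_telic = {"e": "search", "final_s": "good"}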

4.2. Lexical aspectual distinctions

Like basic event structures, only a small set of basic lexical aspectual distinctions are
important at the level of situation aspect. A verb may be specified as homogeneous,
denoting a single uniform stage, or a verb may be heterogeneous, denoting two distinct
stages (Pustejovsky 1991, 1995). Verbs of both basic types are able to participate in telic
predicates, but because they have different aspectual structures, they make different
contributions. Homogeneous verbs may be either stative (‘like’, ‘know’) or dynamic
(‘run’, ‘search’). Adjectives are also lexically stative, and can make contributions to
predicates similar to stative verbs, allowing the two to be treated together in some ways
(Klima/Bellugi 1979). Here we will focus on dynamic verbs, whose aspectual meanings
can be broadly divided into those denoting manner (‘run’, ‘walk’, ‘jump’) and those
that denote a grammatical scale or event path (‘build’, ‘exit’, ‘ascend’) (Tenny 1994;
Erteschik-Shir/Rapoport 2005; and many others).
There are many different types of scales associated with different semantic fields.
Scales may represent spatial paths (‘run to’, ‘jump over’), or the scale of an event may
be a delimited object that is consumed or created as the event proceeds (‘build a
house’, ‘eat a cookie’). Scales play an essential role in telicity, since only predicates
with scales that have been identified and delimited, or bounded, by the predicate are
able to receive telic interpretations. The boundaries of scales are identified with speci-
fied or delimited internal arguments (Tenny/Pustejovsky 2000; and many others). Thus
‘eat a cookie’ has a bounded scale of one cookie, after which the event has reached its
endpoint, while ‘eat cookies’ has no bounded scale and can only be atelic.
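The compositional rule at work here can be stated in one line: a predicate is telic only if it has a scale and that scale is bounded by a delimited internal argument. A minimal sketch of the ‘eat a cookie’ vs. ‘eat cookies’ contrast:

    # Telicity is computed at the predicate level: a scale must be
    # present and bounded by a delimited internal argument.
    def predicate_is_telic(has_scale, argument_delimited):
        return has_scale and argument_delimited

    print(predicate_is_telic(True, True))   # 'eat a cookie'  -> telic
    print(predicate_is_telic(True, False))  # 'eat cookies'   -> atelic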
There is an additional distinction between scales which can be subdivided into incre-
ments of space, time, or quantity that get ‘used up’ as the event progresses, and scales
that are composed of a single interval that cannot be subdivided. In terms of lexical
aspect, the notion that Smith (1997) and others (Lee 2001; Rathmann 2005) term gram-
matically relevant duration is equivalent to this notion of grammatical scale. An event
of eating a cookie extends over multiple increments of time. In contrast, a single flap
of a bird’s wing or a single blink of an eye, are punctual and occur over a single
increment or point. This difference between types of scales produces the lexical distinc-
tions between activities and semelfactives among atelic events, and between achieve-
ments and accomplishments among telic events. Here I will use the terms incremental
and point scales rather than duration to describe this distinction; the term duration will
refer to the amount of time over which a situation extends.
These lexical aspectual distinctions are represented in Figure 20.2 below. The lexical
aspectual structures of particular verbs are represented as substructures of this more
general structure.

Fig. 20.2: A typological structure for lexical aspect

As we will see in examples for HKSL, these basic lexical aspectual distinctions are
reflected in the forms of verbs in sign languages. These lexical aspectual meanings can
be expressed by elements other than lexical verbs. This allows for situation aspect to
be built up in different ways in different languages. In English, for example, where the
form of the verb offers no clues to its aspectual meanings, scalar meanings are often
external to the verb, for example in prepositions following the verb, while the verb
itself encodes a manner meaning, as in ‘run to’ and ‘jump over’.
Beginning from the top of the structure, a verb may be specified as either lexically
homogeneous, denoting a single stage, or heterogeneous, denoting different stages. For
the purposes of this chapter, I assume that heterogeneous verbs are composed of a
dynamic stage and a stative stage, rather than two dynamic or two stative stages. I also
assume that the same distinctions that are relevant for dynamic homogeneous verbs
are also relevant for the dynamic stages of heterogeneous verbs, but for simplicity these
structures are not replicated in Figure 20.2 under the Heterogeneous node. Within
homogeneous verbs, there is a distinction between those denoting stative stages
(‘think’) and those denoting dynamic stages (traditional ‘activity’ verbs). Within the
group of dynamic verbs, there is a contrast between verbs denoting grammatical scales
(‘eat’, ‘build’) and verbs denoting manners (‘run’, ‘walk’). There are many different
types of manner, such as manner of motion, consumption, creation, communication,
and so on, but these distinctions do not appear to be relevant for situation aspect and
therefore are not represented here. Within the scalar meanings, there is the contrast
between scales specified as a single point or interval, representing the sort of scale
required for semelfactive situations such as ‘cough’, ‘blink’, and ‘flap’, and verbs with
scales composed of multiple increments. A subset of scales can be further specified for
a spatial as well as a temporal scale, represented with a spatial path node.
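Since Figure 20.2 cannot be reproduced here, the nested structure below is a plausible rendering of the typology it depicts, reconstructed purely from the prose in this section (the substructure omitted under the Heterogeneous node is noted in a comment).

    # A rendering of Figure 20.2: the typology of lexical aspect.
    LEXICAL_ASPECT_TYPOLOGY = {
        "homogeneous": {
            "stative": {},                    # 'think', 'like'
            "dynamic": {
                "manner": {},                 # 'run', 'walk'
                "scalar": {
                    "point": {},              # 'cough', 'blink', 'flap'
                    "incremental": {
                        "spatial path": {},   # subset with spatial scales
                    },
                },
            },
        },
        # Heterogeneous verbs combine a dynamic and a stative stage;
        # their dynamic stages admit the same distinctions as above,
        # which Figure 20.2 leaves unreplicated for simplicity.
        "heterogeneous": {},
    }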
The aspectual distinctions represented in Figure 20.2 may be contributed to the
predicate by the verb, but elements other than lexical verbs may also contribute these
meanings to the situation aspect of the predicate. Modifiers like ‘on foot’ and ‘by car’
in English, for instance, contribute manner meanings. We have already seen that el-
ements like prepositions can contribute scales, and in telic predicates denoting events
of creation and consumption, delimited internal arguments identify both the scales and
their endpoints, as in ‘build a house’ and ‘eat a cookie’. This allows different predicates
to have similar situation aspects built up in different ways, as shown in (4) and (5)
above. Whatever a verb’s lexical aspectual meaning, it makes the same contribution to
its predicate, regardless of the predicate’s situation type. The verb break means the
same thing whether it appears in a telic predicate as in (4), or in an atelic predicate.
This follows from the fact that telicity is a property of entire predicates, rather than
individual verbs, while lexical aspect applies at the word level, wherever those words
appear. These facts will become important when looking at verbs with point scales
below. These verbs have a stable lexical meaning, but are able to receive both activity
and semelfactive readings. In contrast, other dynamic homogeneous verbs are only
able to receive activity readings, no matter how short the duration of their events is.
As we will see, the point scale meanings associated with these verbs are reflected in
their surface forms.

4.3. Some verb categories in sign languages

Lexical aspectual distinctions, often under different names, have been widely discussed
in the literature on spoken languages, and although they are represented differently in
different analyses, and are made in different ways in different languages, these basic
distinctions appear to be relevant for all natural languages (Borer 2005; Grose 2008;
Lee 2001; Levin/Rappaport Hovav 2005; Ramchand 2008; Rathmann 2005; Smith 1997;
and many others). In many languages, there may be few or no overt clues in the form
of a verb as to its lexical aspect. Since knowing a word entails knowing what it means
and how it can behave, arbitrary form-meaning relationships are workable, and overt
clues regarding lexical semantics on the surface are unnecessary. This means that the
situation aspect of a predicate may also be opaque on the surface. Yet, even in lan-
guages like English, there are in fact clues regarding lexical and situation aspect, and
even more so in languages with rich morphological systems, like for example case
marking systems (Borer 2005; Grose 2008; Pustejovsky 1991, 1995; Ramchand 2008;
and many others). Sign languages, because of their diversity of predicate and verb
types, turn out to be particularly informative regarding lexical and situation aspect.
Verbs in sign languages have been categorized into three groups (Padden 1988; and
many others), based only loosely on their conceptual meanings, but more importantly
on their compatibility with systems of referential expressions, traditionally termed
agreement markers. These verb groupings are plain verbs, agreeing verbs, and spatial
verbs (see chapter 7, Verb Agreement, for details). These groupings do not distinguish
verbs from the predicates that they appear in, a distinction that is necessary to make
here, so in these traditional groupings, classifier predicate constructions are grouped
together with spatial verbs. Because they are not conventionalized lexical verbs, CLP
are excluded from the group of spatial verbs here, although they do form a relatively
cohesive class based on a shared spatial semantic field. Agreeing verbs are a much less
cohesive group, including literal transfers (give, send) as well as other verbs that are
not so clearly instances of transfers (look-at, help, tell). Plain verbs are diverse as
well, including verbs like break, eat, and think.
These groupings do not correspond closely to semantically coherent verb classes.
Instead they are grouped together by their behaviors and forms. All lexical verbs have
a basic citation form, with a specified place of articulation (POA), handshape, and
movement. Plain verbs preserve their basic conventionalized POA, handshapes, and
movements when they appear inside predicates. In contrast, when agreeing verbs ap-
pear in the appropriate contexts, their forms are altered to refer to one or more argu-
ments of the predicate (Janis 1992; Meir 2002). The specific arguments that an agreeing
verb is able to refer to are determined by its lexical semantics. For example, the POA
of verbs denoting transfers, like give and send, are modified to refer to source and
recipient arguments. Lexical spatial verbs, like put and take, are otherwise similar to
agreeing verbs, but the arguments that they refer to are locations, or objects at loca-
tions. CLP are not conventionalized lexical items, and their POA, handshapes, and
movements are all independently meaningful (Benedicto/Brentari 2004; Grose/Wilbur/
Schalber 2007; Shepard-Kegl 1985).
The treatment of agreement markers in sign languages is adapted from Meir (2002),
who argues that these referential systems in sign languages are a type of thematic
agreement, indicating the role that an argument plays within a predicate. These systems
are sensitive to verb meaning, and thus can be informative of lexical semantics, and
are distinct in important ways from agreement systems in spoken languages that specif-
ically mark Subject and Object grammatical relationships regardless of verb meaning.
It should be noted that there is controversy in the literature concerning the status
and the appropriate analysis of the elements that I refer to as (thematic) agreement
markers, as well as the status of the constituents of CLP (Liddell 2003; Sandler/Lillo-
Martin 2006; and many others; also see chapter 7). Some approaches treat these el-
ements morphologically, based on their identifiable meanings and grammatical func-
tions, while other approaches treat these elements as more gestural, based on their
non-categorical forms. Luckily for this discussion, the status of these elements is not
as important as what the presence or absence of these elements reveals about verbal
lexical semantics and situation aspect.

5. Lexical aspectual structures at work

In the remaining sections, I present examples of lexical verbs with different types of
lexical aspectual structures, including: homogeneous dynamic verbs encoding manner,
incremental and point scales, and heterogeneous verbs encoding changes of state. The
examples presented here come from HKSL, and are extracted from short narratives
produced by native signers, but they are similar in the relevant respects to equivalent
examples from ASL and also Austrian Sign Language (ÖGS) (Wilbur 2008; Grose
2008; Grose/Wilbur/Schalber 2007; Rathmann 2005; and others). This analysis is in-
tended to be applicable to verbs and predicates in sign languages broadly, with the
exception of initialized and fingerspelled verbs. Space restrictions limit the number of
examples to only some of the possible lexical aspectual structures or verb classes. By
convention, signs in both HKSL and ASL are glossed with the nearest English equiva-
lents, so to avoid confusion, I refer only to HKSL forms here, unless otherwise speci-
fied. I begin with a discussion of plain verbs, followed by a discussion of agreeing
verbs. For comparison, I also present a short discussion of whole entity (w/e) classifier
predicate constructions. The lexical aspectual structures of plain and agreeing verbs
are presented in Figure 20.3 below.

Fig. 20.3: Basic lexical aspectual distinctions (V = verb; A = adjective; S = stative; D = dynamic)

The structure in Figure 20.3a represents stative verbs. The structure in Figure 20.3b
represents homogeneous dynamic verbs denoting manner. The structure in Figure 20.3c
represents homogeneous dynamic verbs specified for either a point scale or scale with
multiple increments. The aspectual structure of heterogeneous verbs in Figure 20.3d
includes a dynamic stage and a stative stage. By default, heterogeneous verbs have
scalar dynamic stages, but for the sake of simplicity, punctual and incremental distinc-
tions in heterogeneous verbs will be set aside. The verb break in (4) above is an
example of a heterogeneous verb with the structure in Figure 20.3d. The verb com-
pound find in (5) is decomposable into the incremental scalar verb search, associated
with the structure in Figure 20.3c, and the adjective good, represented in Figure 20.3a.
On this view, the compound find has the same combined lexical aspectual structure as
the single heterogeneous verb break.

5.1. Lexical aspect and plain verbs

Plain verbs in sign languages form a natural class because their POA are lexically
specified and non-referential, meaning that they do not refer to arguments of their
predicates. Despite being non-referential, as mentioned above the POA in body-an-
chored plain verbs can be used to reflect something of the verb’s conceptual meaning
(Meir et al. 2007). Their handshapes and movements are also lexically specified. Other-
wise, plain verbs are semantically diverse, including verbs from many different classes.
The phonological movements of plain verbs, as seen in (6) and (7) below, reflect the
verb’s lexical aspect. The relevant verbs in these examples are articulated by the
signer’s dominant right hand (H1), whereas the forms articulated by the non-dominant
hand (H2) are CLP:

(6) H1: (cow) eat (grass eat) [HKSL]
H2: Y:animal.located.at
‘The cow is over here eating grass.’

(7) H1: (mother) realize (ix hungry) [HKSL]
H2: 5:bird.nest.located.at
‘The bird realizes her chicks are hungry.’

The POA of eat in (6) is located at the signer’s mouth, reflecting the verb’s meaning
related to consumption, a feature shared with other semantically related verbs like
drink. In the same way, the psych(ological) verb realize in (7) is articulated at the
temple. Other psych verbs like know, understand, remember, and forget are also
articulated at the forehead and temple, reflecting the cognitive meanings denoted by
these verbs. These POA are non-referential, meaning that they do not refer to the
signer as the argument participating in the event.
eat, a verb of consumption, is especially interesting here. In the literature on telicity
and grammatical scales, predicates of creation and consumption are frequently offered
as stereotypical examples of scalar verbs in telic predicates (Ramchand 2008; Tenny
1994). The scales in this sort of predicate are not provided by the verb, but rather by
the internal argument, or incremental theme. Given that the internal argument is delim-
ited, the event can reach its telic endpoint when the internal argument is all used up,
either through creation (e.g. ‘building a house’) or consumption (e.g. ‘eating a sand-
wich’). Since these scales are not provided by the verb but by the argument itself, we
do not expect to see them reflected in the form of the verb, and indeed they are not
reflected in the form of eat, even when it occurs in telic predicates. In (6), the internal argument ‘grass’ is not
delimited, allowing for only an atelic activity interpretation. In this example, the verb
eat is made with steady contact with the mouth, and an associated non-manual repre-
senting dynamic chewing of the grass. This verb is a homogeneous dynamic verb,
denoting a manner with the aspectual structure in Figure 20.3b above.
In the literature on event structure and lexical aspect, verbs of creation and con-
sumption have received the bulk of the attention (Ramchand 2008) while psych verbs
like realize (7) have received much less attention. Even the stative/dynamic distinction
is not so clear when it comes to psych verbs, and it is especially difficult to define what
constitutes a mental change-of-state or a psychological telic event. Some psych verbs,
like realize and forget, seem similar in many ways to verbs like break, but it is not
clear how notions of grammatical scale and incremental themes, which are required by
telic events, can be applied to predicates denoting psychological and mental events.
There is also the issue of argument structure. The internal argument that undergoes
the change-of-state in (4), with break, has a clearly visible concrete resultant state. The
overt argument in (7) with realize experiences an abstract event of realization, and
although this argument is delimited, it has a different relationship with the verb than
the fence does with break in (4). Treating the notion of ‘realization’ or ‘learning’ as
mental equivalents to creation and consumption is one possibility, but determining how
to delimit or quantify the abstract notions that are created or consumed is problematic.
The form of the predicate in (7) offers us some clues about how to analyze at least
this psych verb. The form of realize in HKSL is similar to the verb know in terms of
its handshape and POA. Unlike know, which is a homogeneous verb, realize involves
a tilt of the head from a more neutral position to that shown in (7) and an opening of
the mouth. If know, at least in some of its senses, is a stative verb, the movements of
realize, including manual and non-manual articulators, can be associated with a dy-
namic stage, and the final position of the head and mouth can be associated with a
stative stage; thus, realize should be analyzed as having the heterogeneous lexical
aspectual structure represented in Figure 20.3d. In terms of lexical aspect, then, realize is similar
to break, but that does not mean that the predicates that it or other heterogeneous
psych verbs appear in are necessarily telic. A heterogeneous verb may be associated
with the atelic structure [s→(e)] as well as the telic structure [s→(e→s)], and to re-
ceive a telic reading the predicate must still meet the same criteria that all telic events
must meet, regardless of the lexical semantics of its verb. Whether psych verbs are able
to participate in telic events, or whether they are prevented from doing so because of
some feature of their conceptual meanings, or even whether or not the conventional
notions of telicity need to be broadened to accommodate predicates with psych verbs
is beyond the scope of this chapter, but recent and ongoing discussions in the literature
have begun to address these questions. For the purposes of this discussion, I treat (7)
as an atelic predicate, with an inceptive transition.
As a dynamic verb, eat in (6) denotes a single occurrence of eating, no matter how
much is actually consumed. Likewise, (7) denotes a single occurrence of realizing, no
matter what is realized. These two predicates can be contrasted with the predicate in
(8), which is also atelic and also contains a dynamic verb, but a verb that is specified
for a punctual point scale, represented in Figure 20.3c. Verbs like flap, and others like
cough and knock, are classic examples of the sorts of verbs that are required to pro-
duce semelfactives (Lee 2001; Rathmann 2005; Smith 1997). Again, semelfactives are
distinguished from telic achievements, such as that in (4), by lacking a specified end-
point, and from activities, like that in (6), by being punctual. Since the distinction
between semelfactives and activities is made at the lexical level, both types of situation
aspect share the same event structure template [s→(e)]. Activities are associated with
either the lexical aspectual structure in Figure 20.3b, with a manner verb, or the struc-
ture in Figure 20.3c specified for an incremental scale. Atelic predicates with heteroge-
neous verbs (Figure 20.3d) are also treated as activities by default. Semelfactives re-
quire the structure in Figure 20.3c, specified for a point scale, but verbs with point
scales can appear in both semelfactives and activities. In a semelfactive predicate, a
verb like flap denotes a single occurrence or a single flap. When the verb is iterated,
it still denotes a single event, but one composed of multiple instances of flapping,
extending over the entire duration of the event. This is the case in (8) below. The
forms of point scale verbs in sign languages, with single changes of orientation, are
similar to heterogeneous verbs like break, but while break denotes a change of state,
flap does not (Lee 2001; Rathmann 2005).

(8) a.–b. (bird) flapCC [HKSL]
‘The bird flaps/flies away …’

(9) a.–b. (horse) notice (ix fence) [HKSL]
‘The horse notices the fence.’

In (9), the verb notice is an agreeing verb, not a plain verb, but for phonological and
event structure reasons I include it here, setting aside its agreement marker, rather
than in the following discussion of agreeing verbs and their predicates. Like
search in the compound find (5a–b), and the verb look-at in (12) below, notice is a
verb of perception. All three have similar handshapes. From the event structure per-
spective, it is heterogeneous like break (4), representing a dynamic stage followed by
a stative stage, displaying the structure in Figure 20.3d. This heterogeneous structure
is phonologically visible as the change of handshape between W (9a) and ` (9b). The
dynamic stage, prior to the relevant object being noticed, is indicated in the path move-
ment of the form, prior to the change of handshape. This is similar to the manual and
non-manual movements in realize (7) as well, although (7) is analyzed as denoting an
inceptive rather than a telic transition.
Like break (4), notice is composed of two sub-lexical elements, illustrated in (9a)
and (9b), but these two elements are not independent morphemes, and both are re-
quired to denote an instance of noticing. In terms of their meanings, notice is more
similar to find (5). Unlike verbs that denote concrete changes-of-state, the endpoints
denoted by verbs of perception, like notice and find, are more abstract, but their
predicates are easier to treat than predicates with psych verbs like realize in (7) above.
Nonetheless, although their endpoints represent something like a change of perception
or possession, the predicates in (5) with find and (9) with notice have the same event
structure as (4) with break: [s→(e→s)]. notice is analyzed as having a heterogeneous
lexical aspectual structure, as represented in Figure 20.3d. find, again, is a compound
composed of search with the structure in Figure 20.3c, and good, with the structure
in Figure 20.3a.
This discussion of plain verbs, and a single agreeing verb, shows how the forms of
these verbs can reflect lexical aspect, as well as the situation aspect of their predicates.
We have seen examples of heterogeneous verbs, both plain and agreeing, associated
with single changes of handshape (notice) and single changes of orientation (break).
This same aspectual meaning can also be associated with single movements to contact
with the body or a plane, as we will see below in (10d). We have also seen that the
aspectual structures of homogeneous verbs with point scales (flap) are reflected in
their forms. Despite their similarity with heterogeneous verbs, these verbs behave dif-
ferently, and do not denote changes of state. In contrast, the homogeneous dynamic
verb eat (6), shows continuous contact near the mouth, and the verb realize (7) shows
continuous contact with the temple, with a backwards head tilt indicating the inception
of the event.
It is important to keep in mind that as lexical items, individual verbs are idiosyn-
cratic, and may not conform to this generalization. This is the case, for example, with
non-native forms, such as initialized and fingerspelled forms. Although the lexical
aspectual structure of a sign language verb may be reflected in its phonological form,
the form of the verb does not determine its lexical aspect; the form-meaning relation-
ship identified here only works in one direction, from the aspectual meaning to the
form. This is illustrated with a comparison of the form of the verb flap (8) and the
form of break (4). The forms of both verbs involve changes of hand orientation, but
the two verbs are semantically distinct in important ways. flap does not denote a
change-of-state, and so in predicates where it receives a single occurrence interpreta-
tion, it receives an atelic semelfactive reading, even if its internal argument is specific
and delimited. In contrast, break denotes a change-of-state and, with a delimited inter-
nal argument and a single occurrence reading, receives a telic interpretation. Multiple
iterations of flap will produce an activity reading (‘flapping’) of the same internal
argument. Multiple iterations of break, in contrast, produce multiple instances of
‘breaking’ the same internal argument. Thus, the form of a verb may offer clues to its
lexical aspect, but the form does not determine a verb’s lexical aspect, making it impos-
sible to treat all changes in hand orientation or changes in handshape simply as denot-
ing changes-of-state.
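The one-way nature of the form-meaning relationship, and the contrast between flap and break, can be restated as a small decision rule: the reading depends on the verb's lexical aspect and on iteration, not on the shared surface form. The sketch below is a restatement of the contrast just described, under the simplifying assumption of a delimited internal argument throughout.

    # Reading of a predicate with a delimited internal argument,
    # restating the FLAP vs. BREAK contrast: the surface form (a
    # change of orientation) is shared, but lexical aspect decides.
    def reading(change_of_state, iterated):
        if change_of_state:
            return "multiple breakings" if iterated else "telic achievement"
        return "atelic activity" if iterated else "atelic semelfactive"

    print(reading(change_of_state=True,  iterated=False))  # BREAK
    print(reading(change_of_state=False, iterated=False))  # FLAP (once)
    print(reading(change_of_state=False, iterated=True))   # FLAP (iterated)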

5.2. Agreeing verbs and scale

Regardless of the debate about how to treat the referential markers associated with
agreeing verbs, it is possible to identify a set of verbs, traditionally called agreeing
verbs, that is distinct from the plain verbs discussed above (Padden 1988). In an agree-
ing predicate, the form of the verb, specifically the POA, orientation, and movement,
is modified from the citation form in order to refer to arguments of the predicate with
specific argument or thematic roles. The controversy regarding this referential system
centers on whether these markers can be analyzed as morphemes, and the extent to
which this referential system in sign languages is formally similar to the morphological
agreement systems found in many spoken languages. There is a related debate concern-
ing the status of the constituents of CLP. However this debate is resolved, it is clear
that CLP can be analyzed semantically and pragmatically, so we can set the debate
about their status aside here (Padden 1998; Liddell 2003; Sandler/Lillo-Martin 2006;
and many others).
In terms of lexical semantics, it is important to consider what the presence or ab-
sence of these agreement markers reveals about their associated verbs, and how they
are distinguished from plain verbs. One possibility is that agreeing verbs somehow
include in their meanings a notion of literal or metaphorical transfer, and that the
agreement markers function to indicate the sources and recipients of these transfers
(Emmorey 2002; Janis 1992; Meir 2002; Sandler/Lillo-Martin 2006; and others). Cer-
tainly, this group of verbs includes true transfer verbs, like give, but to avoid stretching
the metaphor beyond what is descriptively useful, the present analysis recognizes at
least two distinct types of agreeing verbs in terms of their lexical semantics and the
argument roles that their agreement markers indicate. These are verbs denoting
changes of possession or literal transfers (e.g. give), and verbs denoting actions directed
towards a referent (e.g. look-at).
Following Meir’s (2002) account for the functional roles of agreement markers,
these markers are treated as a sort of thematic agreement, sensitive to the argument
structure of the predicate and the lexical semantics of the verb, which contrasts with
referential systems marking clause-level roles like Subject and Object, as found in
many spoken languages. The thematic roles that are relevant for predicates denoting
literal transfers are termed sources and recipients, and typically, these roles correspond
to the Subject and Object argument roles at the clause-level. For the second type of
agreeing predicate, the relevant role is provisionally termed the directed-at argument.
It should be noted that plain verbs do not have the right argument roles for thematic
agreement markers, and further, that even agreeing verbs, when appearing in a predi-
cate without the necessary argument structure, will also lack agreement markers. Thus,
an analysis of agreeing predicates based on argument structures and the lexical seman-
tics of their verbs accounts for the compatibility of agreeing verbs with agreement
markers, and the lack of similar markers associated with plain verbs.
In those classification systems that do not distinguish predicates from lexical verbs,
verbs with spatial meanings and CLP are grouped together because of their related
meanings and phonological similarities. In the present analysis, it is necessary to distin-
guish verbs and predicates, and therefore also necessary to distinguish spatial verbs
and the predicates they appear in from CLP. Given that agreeing verbs and spatial
verbs share the same inventory of situation aspects and event structure templates, and
involve the same general notions of manner and scale, they can be analyzed in much
the same way, with spatial verbs being specified as having spatial scales or paths, or
denoting manners of motion. The class of spatial verbs includes examples like put and
move-to, which have been analyzed as lexical items derived from CLP, with associated
agreement markers that are interpreted spatially (Meir 2002; Padden 1998; Shepard-
Kegl 1985; and others). Rather than indicating (animate) source and recipient arguments, the agreement markers in lexical spatial predicates indicate initial and final
locations. As we will see in (10), agreeing verbs, which are not lexically specified for
spatial meanings, can receive spatial interpretations in spatial contexts.
CLP are predicates composed of multiple constituents, and are not lexical verbs,
although they have been analyzed as having verbal roots of some kind (Benedicto/
Brentari 2004; Supalla 1986; Shepard-Kegl 1985). Despite the differences between lexi-
cal verbs and CLP, the treatments of scales and manners as basic aspectual categories
can be extended to include the meanings contributed by the movement constituents of
CLP. We will see this with evidence from whole entity (w/e) CLP. These CLP are
complex predicates composed of different sorts of constituents: their handshapes are
referential to the internal argument of the predicate (Benedicto/Brentari 2004; Grose/
Wilbur/Schalber 2007); their POA refer to locations; and their movements indicate
spatial paths or manners of movement. Since they are full predicates, rather than con-
ventionalized morphological constructions like lexical verbs, CLP do not have lexically
specified aspectual structures, but their aspectual structures can be folded into the
current analysis.
In lexical predicates denoting changes of possession, or literal transfers, verbs are
specified for incremental scales, which are visible in the phonological path movement
of the verb, starting from an initial POA associated with an internal source argument
(source arguments may be omitted or unspecified) and moving to a final POA referring to the recipient argument. These source and recipient arguments are animate and may be
physically present in the discourse context, or they may be represented by referential
loci established in the signing space. The recipients in these predicates represent the
boundary of the transfer scale, and are associated with the final s subevent in the
template [s0(e0s)], reflecting the fact that at the end of the event, what has been
transferred is now in the possession of (located at) the recipient. Transfers can also be
distributed to multiple recipients or exhaustively over a group of recipients, but these
issues are outside of lexical semantics, so quantification issues will be set aside here.
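Informally, and purely as an illustration (the schematic rendering below is my own, not the chapter's formal notation), the aspectual structure of a transfer verb and its phonological exponence can be pictured as:

s(initial): source has y → e: transfer of y → s(final): recipient has y
initial POA ⫺ path movement ⫺ final POA (the boundary of the scale)

The static subevents align with the initial and final POA, and the dynamic stage aligns with the path movement, as described in the preceding paragraph.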
The representations for the lexical aspectual structures associated with agreeing
verbs are presented in Figure 20.4a and 20.4b below. The slightly more elaborated
structure representing spatial path verbs is presented in Figure 20.4c. Since CLP are
not lexical verbs, they are not associated with full lexical structure, but the aspectual
contributions of their movements to their predicates can be represented using the
structure in Figure 20.4c without the V node:

Fig. 20.4: The aspectual structures of agreeing and spatial verbs

The utterance in (10) includes a w/e CLP in (10a–b), followed by a predicate with
the lexical verb give in (10c–d). In (10c–d), the non-dominant hand (H2) represents a
nest full of bird chicks, a location relative to which the predicate with give is inter-
preted. With this spatial component, the predicate in (10c–d) is to be interpreted as a
change of location as well as a change of possession:

(10) a. b. c. d. [HKSL]
H1: Y:bird.fly.to.nesti … … kgivei i(nest.with.chicks…food)
H2: C:birds.nest.located.at
‘The mother bird flies home and gives food to her babies in the nest.’

CLP are very productive in sign languages, but because they are restricted to spatial
meanings, the transitions that w/e CLP can encode are limited to changes of location
of the internal argument, as in (10a⫺b), or changes in its physical orientation. The
boundary of the spatial scale is represented by the location that the form articulated
by the dominant hand (H1) arrives at in (10b). The initial location of this event is
provided elsewhere in the narrative, and so does not appear in this utterance.
The source and initial location of the transfer is represented with a POA near the
signer’s mouth (10c), representing the mouth of the mother bird as she holds food.
give denotes the transfer from the source to the recipient in (10d). Like many other
lexical verbs, give was derived historically from a CLP form. Unlike a CLP, however,
the handshape of give is lexically specified and is not independently referential. But
the two predicates in (10) share similar situation aspects, visible in their phonological
forms. The movements of these forms are associated with dynamic stages of events
and initial and final POA, treated as agreement markers, are associated with static
subevents in the structure [s0(e0s)].
They differ in that, as a CLP, (10a–b) must be interpreted spatially. In (10c–d), give
is interpreted spatially, but the verb itself is not specified for a spatial meaning. Verbs
like give can be used in contexts where the relative positions of the referential loci,
referring to source and recipient arguments, are not interpreted as representing their
actual or real space relationships, or the distance between them (see chapter 19, Use
of Sign Space, for further discussion). This is the case, for example, when a predicate
with give denotes a transfer between two third person referents, represented by arbi-
trary referential loci established in the signing space. In these contexts, the referential
loci are not interpreted as representing actual locations or distances between referents,
nor is the movement of the form interpreted as the actual spatial path between them.
Transfer verbs in spoken languages can be modified spatially in a similar way, as the
English gloss of (10) shows.
The examples in (10) show how situation aspect, the lexical aspectual meanings of
lexical verbs, and the aspectual contributions of constituents of CLP are all visible in
the phonology, particularly in the path movements, of these forms. In CLP, phonologi-
cal path movements are restricted to spatial interpretations, but lexical transfer verbs
with their more abstract and conventionalized meanings do not share this restriction.
The scalar boundaries in these predicates are indicated with movements to POA associ-
ated with recipients and locations. This type of agreement marking can be contrasted
with the agreement markers discussed in the next section, which are not associated
with grammatical scales.

5.3. Agreeing verbs and manner

Agreeing verbs of the second type do not denote literal transfers, but rather actions
that are directed towards a referent. Their associated agreement markers indicate the
direction of the action, not the boundary of a scale. This group of verbs is more diverse
than verbs of transfer, and includes verbs of perception like search (5) and notice (9),
as well as the verbs show and help (11), and verbs of communication like ask and
tell/say-to. Predicates with these verbs indicate the argument to which the event is
directed through their phonological orientation. In contrast to the recipient arguments
shown by agreement markers in predicates of transfer, which are associated with final
s subevents, the arguments indicated by the agreement markers in (11) and (12) are
associated with the event through its entire duration, and thus with e subevents in the
template [s0(e)]. In many spoken languages with rich morphological case marking
systems, arguments in similar roles are labeled as dative or benefactive cases (Janis
1992), but for present purposes, I simply refer to them as directed-at arguments:

(11) helpi (ihorse bandage) [HKSL]
‘They help the horse bandage his hurt leg.’
(12) H1: (bird) … look-ati i(babies in nest) [HKSL]
H2: 5bent:bird’s.nest.located.at
‘The bird looks down at her babies …’

These directed-at arguments are indicated by the phonological orientation, towards either a referent in the discourse, as in (12) towards the co-articulated CLP representing the bird’s nest, or by a referential locus which has been established in the discourse to represent a referent, as in (11). The verbs in this group are not lexically specified
for grammatical scales, nor do they ‘use up’ their internal arguments as they progress;
events like helping (11) or looking at (12) may continue indefinitely. Unless some other
element in the predicate provides a conceptually compatible scale for the event, verbs
of this sort are unable to participate in telic events. To contrast them with verbs of
transfer and other scalar verbs, these verbs are termed manner verbs in a broad sense.
The verb help, for example, denotes a manner of social interaction, directed towards
a beneficiary. The verb look-at denotes a manner of perception, directed towards the
object of perception.
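In the same informal notation as above (again my own rendering, not the chapter's formalism), a directed-at verb lacks the final state that would close a scale:

s(initial): agent oriented towards y → e: helping/looking at y (no final state, hence no inherent telos)

Because the verb itself supplies no endpoint, telicity can only come from some other, conceptually compatible element of the predicate, as noted above.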
The verb look-at can be compared with other perception verbs discussed above
with similar handshapes: the verb search in the compound find (5a–b) and notice in
(9). These verbs have related meanings, are from the same or similar verb classes, and
have similar handshapes, but they have distinct lexical aspectual structures and behave
grammatically in different ways. notice is a heterogeneous verb, with an agreement
marker referring to the argument that comes to be noticed. find (5) is a compound
with a heterogeneous aspectual structure, with search denoting the first dynamic and
homogeneous component, and good denoting the second static component. Again,
look-at denotes a non-scalar homogeneous manner. This comparison illustrates how
the conceptual meaning of a verb plays some role in determining its grammatical be-
havior, but verbs with related meanings may have very different lexical aspects and
thus very different grammatical behaviors. Ideally, a lexical semantic analysis would
be able to account for both conceptual meaning and grammatically relevant meanings
equally well, but current analyses are not yet at that point, so there is plenty of work
left to be done.

6. Conclusion
This chapter briefly illustrated some of the interesting issues and questions for analyses
of lexical semantics in sign languages. Generally, lexical semantics works in the same way in sign languages and spoken languages. Within the fields of color terms in ASL
and HKSL, we find examples of arbitrary form-meaning relationships, as well as terms
whose forms reflect metaphorical relationships with terms for objects associated with
certain colors. While both sign languages have terms for the eleven basic colors, HKSL
has a larger inventory of native basic color terms, while more of the color terms in
ASL are non-native in origin. A look at a very small set of kinship terms in these two
languages demonstrated that associated meanings can be packaged into lexical items
in different ways, possibly reflecting cultural differences. Sibling terms in ASL are more
similar to English terms than the HKSL terms, which in turn are more similar to the
equivalent Cantonese words.
The picture is perhaps more interesting where verbs and predicates are concerned
because the forms of verbs and predicates in sign languages reflect aspects of their
semantics that are more opaque in many spoken languages. The basic distinctions
amongst different situation and lexical aspects are relatively well known in the spoken
language literature and in the growing literature on sign languages, but the visibility
or iconicity of aspectual meanings in their forms can provide a unique window into
lexical semantics. Useful as this iconicity may be, it is important to keep in mind that
lexical items, as conventionalized forms, are not always consistent. There are exceptions to every
lexical generalization, and the iconic form-meaning relationships between the aspectual
meanings of verbs and their forms discussed here can work only in one direction:
the aspectual structure of a verb may be reflected in its form, but the form does not
determine meaning.
That being said, we have seen distinctions between homogeneous and heterogene-
ous verbs, visible in their forms. We have also seen that, in those predicate types that
have them, different sorts of agreement markers do not only indicate arguments and their relationship to the predicate; they are also informative about the lexical aspect of their associated verbs, indicating manner and scale distinctions. Manner and scale
meanings may be contributed to the predicate by elements other than lexical verbs,
and we see them in CLP as well. The w/e CLP discussed here are restricted to spatial
and locative meanings, so that all scale meanings are interpreted as spatial paths, and
all manner meanings are interpreted as manners of motion, both of which are encoded
in the movement of the form.
Lexical predicates and CLP encode situations and events from the same basic inven-
tory, and although they are highly productive, CLP are more constrained in their pos-
sible interpretations than lexical predicates like transfer predicates. Transfer predicates
are compatible with aspectual modifications (Klima/Bellugi 1979; Wilbur/Klima/Bellugi
1983; Wilbur 2009) that alter the temporal contour of the predicate. They may have a
simple change of possession interpretation, or a change of possession and location
interpretation, depending on context. The higher degree of iconicity in CLP, and the
fact that they are not conventionalized lexemes, make CLP incompatible with aspec-
tual modification. Such changes in the movement of the form of a CLP would distort
the intended spatial interpretation. The fact that CLP and lexical predicates encode
situations and events from the same basic inventory, as well as the roles that scale and
manner play in all predicate types helps to account for their phonological similarities
across sign languages. The conventionalized forms and meanings of lexical verbs help
to account for their differences.
Sign languages are much less extensively studied than spoken languages, and lexical
semantics in these languages is a rich field for future research. Many sign languages
still lack even basic descriptions, and no doubt many more are waiting to be identified
by linguists. The field is still many years away from being able to produce the sort of
detailed description and analysis of verb classes and their behaviors for a sign language
similar to Levin’s (1993) extensive work on verb classes in English, which drew on
decades of research from many different sources. Yet, whenever researchers have
looked at lexical semantics in sign languages, what has been revealed is very exciting,
for sign language researchers and for linguistics as a field.

Acknowledgements: I would like to thank the members of the Deaf community of Hong Kong, and my Deaf and hearing colleagues at the Centre for Sign Linguistics
and Deaf Studies at the Chinese University of Hong Kong who contributed to this
chapter. This project was made possible with funding from the Hong Kong Jockey
Club Charities Trust.

7. Literature

Benedicto, Elena/Brentari, Diane
2004 Where Did All the Arguments Go? Argument-changing Properties of Classifiers in
American Sign Language. In: Natural Language and Linguistic Theory 22, 743⫺810.
Berlin, Brent/Kay, Paul
1969 Basic Color Terms: Their Universality and Evolution. Berkeley, CA: University of Cali-
fornia Press.
Borer, Hagit
2005 The Normal Course of Events. New York: Oxford University Press.
Dowman, Mike
2007 Explaining Color Term Typology with an Evolutionary Model. In: Cognitive Science 31,
99⫺132.
Emmorey, Karen
2002 Language, Cognition and the Brain. Mahwah, NJ: Lawrence Erlbaum.
Erteschik-Shir, Nomi/Rapoport, Tova
2005 Path Predicates. In: Erteschik-Shir, Nomi/Rapoport, Tova (eds.), The Syntax of Aspect:
Deriving Thematic and Aspectual Interpretation. New York: Oxford University Press,
65⫺88.
Grose, Donovan
2008 The Geometry of Events: Evidence from English and American Sign Language. PhD
Dissertation, Purdue University.
Grose, Donovan/Wilbur, Ronnie/Schalber, Katharina
2007 Events and Telicity in Classifier Predicates: a Reanalysis of Body Part Classifier Predi-
cates in ASL. In: Lingua 117, 1258⫺1284.
Hohenberger, Annette
2008 The Word in Sign Language: Empirical Evidence and Theoretical Controversies. In:
Linguistics 46(2), 249⫺308.
Hollman, Liivi/Sutrop, Urmas
2010 Basic Color Terms in Estonian Sign Language. In: Sign Language Studies 11(2), 130⫺
157.
Janis, Wynne
1992 Morphosyntax of the ASL Verb Phrase. PhD Dissertation, SUNY Buffalo.
Kay, Paul/McDaniel, Chad K.
1978 The Linguistic Significance of Basic Color Terms. In: Language 54, 610⫺646.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Lee, Wai Fung Sarah
2001 Aspect in Hong Kong Sign Language. MA Thesis, Department of Linguistics, Chinese
University of Hong Kong.
Levin, Beth
1993 English Verb Classes and Alternations: A Preliminary Investigation. Chicago: University
of Chicago Press.
Levin, Beth/Rappaport Hovav, Malka
2005 Argument Realization: Research Surveys in Linguistics. Cambridge: Cambridge Univer-
sity Press.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Lucas, Ceil/Bayley, Robert/Valli, Clayton
2002 What’s Your Sign for Pizza? An Introduction to Variation in American Sign Language.
Washington, DC: Gallaudet University Press.
Massone, Maria I./Johnson, Robert E.
1991 Kinship Terms in Argentine Sign Language. In: Sign Language Studies 73, 347⫺360.
Meir, Irit
2002 A Cross-modality Perspective on Verb Agreement. In: Natural Language and Linguistic
Theory 20(2), 413⫺450.
Meir, Irit/Padden, Carol/Aronoff, Mark/Sandler, Wendy
2007 Body as Subject. In: Journal of Linguistics 43, 531⫺563.
Newmeyer, Frederick J.
1992 Iconicity and Generative Grammar. In: Language 68, 756⫺796.
Nyst, Victoria
2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, Uni-
versity of Amsterdam. Utrecht: LOT.
Padden, Carol
1988 The Interaction of Morphology and Syntax in American Sign Language. New York:
Garland Publishing.
Padden, Carol
1998 The ASL Lexicon. In: Sign Language & Linguistics 1, 39⫺60.
Peng, Fred C.C.
1974 Kinship Signs in Japanese Sign Language. In: Sign Language Studies 5, 31⫺47.
Perniss, Pamela/Thompson, Robin/Vigliocco, Gabriella
2010 Iconicity as a General Property of Language: Evidence from Spoken and Signed Lan-
guages. In: Frontiers in Psychology 1:227.
Pustejovsky, James
1991 The Syntax of Event Structure. In: Cognition 41, 47⫺81.
Pustejovsky, James
1995 The Generative Lexicon. Cambridge, MA: MIT Press.
Ramchand, Gillian
2008 Verb Meaning and the Lexicon: A First Phase Syntax. New York: Cambridge Univer-
sity Press.
Rathmann, Christian
2005 Event Structure in American Sign Language. PhD Dissertation, University of Texas
at Austin.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. New York: Cambridge University Press.
Shepard-Kegl, Judy
1985 Locative Relations in American Sign Language Word Formation, Syntax, and Discourse.
PhD Dissertation, MIT.
Smith, Carlota S.
1997 The Parameter of Aspect. Dordrecht: Kluwer Academic Publishers.
Supalla, Ted
1986 The Classifier System in American Sign Language. In: Craig, Colette (ed.), Noun
Classes and Categorization. Amsterdam: Benjamins, 181⫺214.
Tang, Gladys (ed.)
2007 Hong Kong Sign Language: A Trilingual Dictionary with Linguistic Descriptions. Hong
Kong: The Chinese University Press.
Tenny, Carol
1994 Aspectual Roles and the Syntax-Semantics Interface. Dordrecht: Kluwer Academic Pub-
lishers.
Tenny, Carol/Pustejovsky, James
2000 A History of Events in Linguistic Theory. In: Tenny, Carol/Pustejovsky, James (eds.),
Events and Grammatical Objects: the Converging Perspectives of Lexical Semantics and
Syntax. Stanford, CA: CSLI Publishers, 3⫺37.
Wilbur, Ronnie B.
2003 Representations of Telicity in ASL. In: Chicago Linguistics Society 39, 354⫺368.
Wilbur, Ronnie B.
2005 A Reanalysis of Reduplication in American Sign Language. In: Hurch, Bernhard (ed.),
Studies in Reduplication. Berlin: Mouton de Gruyter, 593⫺620.
Wilbur, Ronnie B.
2008 Complex Predicates Involving Events, Time and Aspect: Is This Why Sign Languages
Look so Similar? In: Quer, Josep (ed.), Signs of the Time: Selected Papers from TISLR
2004. Hamburg: Signum, 217⫺250.
Wilbur, Ronnie B.
2009 Productive Reduplication in a Fundamentally Monosyllabic Language. In: Language
Sciences 31, 325⫺342.
Wilbur, Ronnie B./Klima, Edward S./Bellugi, Ursula
1983 Roots: On the Search for the Origins of Signs in ASL. In: Chicago Linguistics Society
19, 314⫺336.
Woodward, James
1978 All in the Family: Kinship Lexicalization Across Sign Languages. In: Sign Language
Studies 9, 121⫺138.
Woodward, James
1989 Basic Color Term Lexicalization Across Sign Languages. In: Sign Language Studies 63,
145⫺152.

Donovan Grose, Hong Kong (China)

21. Information structure


1. Understanding the terminology
2. Linguistic encoding of topic information
3. Linguistic encoding of focus
4. Conclusion
5. Literature

Abstract
Information sent from one individual to another changes the state of the information in
the receiver’s knowledge store. Thus, information structure is a way of describing deci-
sions that the sender must make when packaging information to be sent, and that the
receiver must make when unpackaging the information received. This chapter will dis-
cuss aspects of information structure, starting with an introduction to the notions of
‘focus’ and ‘topic’ in section 1. In section 2, we explore in depth the linguistic encodings
associated with various types of topics, and in section 3, those associated with focus.
Unlike topics, which generally appear at the beginning of sentences, focus is involved in
a complex interaction with word order and stress/prominence assignment. Additional
discussion is included on the relationship between focus and stress and on syntactic
structures that serve focusing functions.

1. Understanding the terminology


This chapter focuses on the manual and non-manual expression of topic and focus. To
fulfill this goal, we must first address the notions of ‘focus’ and ‘topic’ as they function
in information structure. A useful way to begin is to imagine two people in conversa-
tion. In order for the sender to successfully tell the addressee something, each piece
of information to be sent has to be evaluated with respect to what the sender knows
about what the addressee already knows. There are several factors that the sender
must consider:
(i) What might the addressee know because it is general knowledge? That is, taking
into account the addressee’s level of education, media habits (TV, internet, books,
etc.), age, and socioeconomic status, is it safe to assume that the addressee will
recognize and understand something if it is simply mentioned as part of a sen-
tence? (Technically, what is being sent is an encoded message. It may be coded
into multiple sentences or just sentence fragments. For simplicity, I will use the
term ‘sentence’ to cover these options as well.)
(ii) What might the addressee know because of prior conversations with the sender?
Technically, this is their ‘conversational history’.
(iii) What might the addressee know given prior mention in this current discourse?
Can the sender assume that the addressee has been paying attention? Has a good
memory? Understood what was talked about? How much difficulty will the ad-
dressee have retrieving this information?

In each of these scenarios, the sender might determine that the information is either
‘not new’, a situation which can be described as old, given, familiar (Chafe 1976), or
‘new’. If the information is new, it is always a ‘focus’, never a ‘topic’. Now we can
consider what is meant by ‘focus’, after which we will explore the possible interpreta-
tions of ‘topic’.

1.1. Focus and contrast

From an information packaging perspective, focus is the central determinant of both surface word order and prosodic structure (Chafe 1976; Lambrecht 1994; Prince 1986;
Vallduví 1991, 1995). The presentation of information in a sentence is structured ac-
cording to the speaker’s belief regarding the hearer’s knowledge and attentional state
(whether something is in the hearer’s mind at the time of utterance; variant formula-
tions of this generalization exist, but the distinctions that they are intended to capture
are not relevant here; Chafe 1976). For information packaging purposes, focus is de-
fined as the “information the hearer is instructed to enter into knowledge-store” (Vall-
duví 1991). The non-focus information, or ground, indicates to the hearer where and
how the focus information should be entered into the knowledge store. Vallduví argues
that ground includes at least two different specifications: link information, which is
commonly viewed as topic or theme, indicates where the information should be entered
in the hearer’s knowledge store, while tail information, if present, may indicate to the
hearer to substitute the focus information in place of existing information in the knowl-
edge store. In this framework, then, the absence of a tail would imply that the focus
information was additional information, while the presence of a tail would imply that
the focus information was replacement information. Typically, the order of information
presentation would be link⫺focus⫺tail.
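Vallduví's packaging instructions lend themselves to a computational caricature. The toy sketch below is my own illustration, not a formalism from the information-structure literature; the dictionary store, the function update, and the example facts are all invented for expository purposes. The link selects an address in the hearer's knowledge store, the focus supplies the information to be entered, and a tail signals that the focus replaces existing information rather than adding to it.

# Toy model of Vallduvi-style information packaging (illustrative only).
# The knowledge store maps links (addresses) to lists of stored facts.
def update(store, link, focus, tail=None):
    facts = store.setdefault(link, [])
    if tail is not None and tail in facts:
        facts[facts.index(tail)] = focus  # tail present: replacement information
    else:
        facts.append(focus)               # no tail: additional information

store = {}
update(store, 'Jim', 'loves to tease Jane')   # link-focus: addition
update(store, 'Jim', 'loves to tease Mary',   # link-focus-tail: correction,
       tail='loves to tease Jane')            # replacing the stored fact
print(store)  # {'Jim': ['loves to tease Mary']}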
Gundel (1999) argues for three main types of ‘focus’. The first is ‘psychological
focus’, which is what the sender and addressee are both attending to, their current
center of attention. This could also be called ‘salience’. With respect to information
structure, psychological focus is outside the linguistic system but may be affected by it.
For example, things that are in psychological focus can be referred to by pronouns or
with definite articles (such as ‘the’) or (in languages that permit it) by pro-drop (that
is, not be expressed at all in the sentence). From the perspective of this chapter,
‘psychological focus’ is shared information, what is being talked about, and hence topic,
not focus. Gundel’s second type is ‘semantic focus’, which is the new information that
is predicated about the topic in the sentence. This is also known as ‘comment’ or
‘rheme’. The third type is ‘contrastive focus’, to which linguistic prominence is given for
the purpose of contrast. Contrastive focus is used when the sender wishes to highlight a
correction of information.
Unless otherwise specified, focus will be used here to refer to Gundel’s ‘semantic
focus’. Every sentence has a semantic focus by definition. Referring to spoken lan-
guages, Gundel notes that this focus may be “linguistically marked by pitch accent, by
word order and other aspects of syntactic structure, by focus marking particles, or by
some combination of one or more of these devices, with pitch accent being the most
universal” (Gundel 1999, 296).
There are some generalizations that make it easier to find the focus. In a question-
answer situation, focus is the information in the answer that is new; it is also the
information in the answer that is required, whereas anything else the answerer includes
is old/repeated from the question, and thus could be omitted. But the focus can never
be omitted. In the question-answer situation, the question requests information which
the answer is supposed to provide. However, as Weiser (1975) notes, an answer can be
acceptable in discourse if the answer addresses the questioner’s purpose for asking the
question. She illustrates this with the example in (1).

(1) Q: How old are you?
A: Don’t worry, they’ll let me in.

To understand this, it is necessary to know that in the US, a person must be 21 years
of age and provide proof thereof in order to enter a bar or other place that sells
alcohol. The questioner thinks something like: We are going to a bar; there is a mini-
mum age of 21 to get in; I wonder if my companion is old enough. The questioner asks
for specific information. The companion responds with reassurance that the underlying
concern will not be a problem. The companion’s age remains a mystery. In this case,
the entire answer is ‘in focus’, that is, is new information, but the information given is
not the information requested in the question.
Gundel (1999, 296) notes that a constituent may be prominent because the sender
does not think the addressee’s attention is focused on where the sender would like it
to be, “because a new topic is being introduced or reintroduced (topic shift), or because
one constituent (topic or semantic focus) is being contrasted, explicitly or implicitly,
with something else”. In other words, even topics may be contrastively focused. This
raises the question of how we distinguish ‘topic’.
Before turning our attention to that issue, it will be helpful to understand one more
distinction related to focus, namely the difference between ‘plain’ focus and contrastive
focus. The notion that plain focus is the ‘new information contributed by the sender’
works well and will be the default meaning when ‘focus’ is used without any modifica-
tion in this chapter. Contrastive focus differs from plain focus in that the sender uses
contrastive focus in situations in which, in many cases, the sender believes that the
addressee needs to be given corrected information. Turning to the notion of contrastive
topic, Repp (2010) argues against Gundel’s idea that contrast in topic is the same kind
of phenomenon as contrast in focus. Her argument revolves around the meaning of ‘contrast’ and the fact that contrast in topic is not a correction like contrast in focus.
To explain this difference, we need to introduce the notion of exhaustivity. This is most
easily done in the situation of contrastive focus, where there are two items in focus.
Consider the American Sign Language (ASL) example (2) from Aarons (1994) (‘fs’ =
fingerspelled word, ‘br’ = brow raise).

br
(2) mary(fs), jim(fs) love tease t [ASL]
[Jim doesn’t like to tease Jane.] ‘It’s Mary who Jim loves to tease.’

‘Mary’ is in contrastive focus in (2) with some previous utterance in which it was
asserted that Jim loves to tease Jane. Thus, the two contrasted alternatives under con-
sideration are Mary and Jane, and no one else. Since the set of alternatives is known,
identifiable, and explicit, ‘Mary’ is selected as the exhaustive member of the set of
alternatives. That is, Mary, and only Mary, is the choice of responses for the question
in (3).

(3) Who does Jim love to tease?

Exhaustivity then is a characteristic of contrastive focus, but not of focus in general. For contrastive focus, the set of alternatives must be closed (our example has only two
members, Mary and Jane, in it), explicitly mentioned (Mary mentioned in (2), and Jane
in a preceding utterance), and required to exclude all other alternatives (only Mary,
not Mary as one of maybe several people other than Jane that Jim loves to tease). In
contrast, non-contrastive focus may have an open set of alternatives, which need not be
explicitly mentioned but may be inferred or presupposed, and the focused item is not
necessarily the only possible alternative that can provide a true response. For example,
if you say that Sara went to the store to buy bread, this does not require that the only
thing that she bought, or went to buy, was bread. In contrastive focus, a sentence like
(4) means that she went specifically for bread, and only bread.

(4) Sara went to the store to buy BREAD.

Similarly, example (5) means that she went to buy two things, not just one.

(5) Sara went to the store to buy BREAD and CHOCOLATE.

Exhaustivity limits possible true choices to exactly the one(s) mentioned, whereas non-
exhaustive/plain focus can mean that the item mentioned, e.g., bread, is one of several
things that Sara went to buy, the others of which are not mentioned perhaps because
the speaker does not know or the list is too long or only bread is the most relevant to
the conversational situation (for an extensive further discussion of exhaustivity related
to ASL and arguments for an Exhaustive operator, see Caponigro/Davidson (2011)).
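This exhaustive reading can be made explicit with an operator defined over a set of focus alternatives C. The following formulation is a minimal sketch in the general style of alternative semantics; the notation is mine, not that of the works cited:

[[exh_C p]]^w = 1 iff p(w) = 1 and, for every q ∈ C, q(w) = 1 → p ⊆ q

Applied to (2), C contains the two propositions ‘Jim loves to tease Mary’ and ‘Jim loves to tease Jane’; exhaustively asserting the first requires every true alternative to be entailed by it, which rules out the second.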
Drubig (2003) addresses the distinction between contrastive and exhaustive types of
focus. He argues that there are two different locations for these types, with contrastive
focus located in a position in the Complementizer Phrase (CP; above Tense Phrase
(TP)) and exhaustive focus located further below inside TP. This separation reflects
his observation that focus movement, when it occurs, either targets the sentence pe-
riphery (hence the higher CP) or a position adjacent to the Verb (hence the lower
position). For Drubig, this provides an explanation for the focusing differences identi-
fied by Kiss (1995), whereby Romanian focus, which is fronted, is [+contrastive] and Hungarian focus, which is preverbal, is [+exhaustive]. Thus, there is a correlation
between semantic interpretation and position of focus in the sentence.
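Schematically, and simplifying Drubig's structures considerably (the bracketing below is my own summary, not a tree from his paper):

[CP XP[+contrastive] … [TP … XP[+exhaustive] V … ]]

Romanian-type fronted focus targets the left edge of CP, while Hungarian-type preverbal focus occupies the lower, TP-internal position adjacent to the verb.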
Having established exhaustivity as an informational relevant notion, we can now
explain Repp’s concern about the difference between contrastive focus and contrastive
topic. As we have just illustrated, contrastive focus requires (entails) that the focused
item is the complete and only true alternative. Contrastive topic, however, does not
carry this requirement; instead contrastive topic implies that the contrasted item is the
only relevant true alternative, but this implication can be cancelled by adding more
information that indicates that other alternatives are also true and relevant without
creating a self-contradictory sentence. Puglielli and Frascarelli (2007) provide a more
extensive discussion of contrastive topic, including pitch contours in spoken languages,
and show how sign languages are parallel in numerous respects. Molnar and Winkler
(2010), in their Edges and Gaps Hypothesis, pull the overlaps in contrastive focus and
topic together by analyzing the central theme of contrast as a ‘complex information-
structural notion’, which has a dual character: it is a highlighting device like focus, and
a linking function like topic. They note that movement to the domain edge for focus
marking and gap formation resulting from the omission of old/given material are com-
plementary processes, hence the name ‘Edges and Gaps’.

1.2. Types of focus

There are some additional distinctions in focus types that can be made, but only two
more will be addressed in this chapter: (i) scope-based and (ii) function-based focus.

1.2.1. Scope-based focus

The most neutral of stress patterns is the one in which the entire sentence is in focus.
These are the forms that might occur as a statement following a conversational opener
like ‘Hey, know what?’ The traditional relevant distinction here is between ‘broad’
versus ‘narrow’ focus. The notion is related to how much of the sentence is contained
in the scope of the focus operator. Typically, it is assumed that in narrow focus, only a
single constituent is in focus, whereas broad focus includes more than one constituent.
As we shall see (section 3.1), it can be problematic to determine whether more than
one constituent is itself being treated as one larger constituent. Single small constitu-
ents can be focused in the English it-cleft (6).

(6) It was Cyndi who scripted these examples.

Golde (1999) discusses the use of reflexive forms to highlight single NP constituents
in narrow focus (‘intensive NP focus’), as seen, for example, in (7). (Note, however,
that the contrastive focus particle ‘only’ is also present in this example; see section 3.1.2
for further discussion).

(7) Only the defendant herself remained completely calm.

Larger constituents may be put into focus using syntactic constructions such as the
English wh-cleft, as illustrated in (8), where ‘sterilize surgeon’s tools’ is in focus.

(8) What Ellen does for a living is sterilize surgeon’s tools.

Note that ‘sterilize surgeon’s tools’ could be analyzed as a single VP constituent. How-
ever, it cannot be put into an it-cleft like (9).

(9) * It’s sterilize surgeon’s tools that Ellen does for a living.

There are at least three reasons why this does not work: (i) the constituent is not
‘small’; (ii) it is composed of Verb plus Direct Object, so perhaps not a single constitu-
ent in the sense needed for the it-cleft focus, and (iii) it is not an NP. Alternatives with
NPs, such as those in (10), are somewhat better but show that size and perhaps com-
plexity of the constituent are also relevant to their acceptability.

(10) a. It is sterilizing surgeon’s tools that Ellen does for a living.
b. It is sterilization of surgeon’s tools that Ellen does for a living.

These factors remind us that the issue of what is in focus is not defined by a single
pragmatic, semantic, syntactic, or prosodic feature.

1.2.2. Function-based typology of focus

Dik (1989) provides a focus typology that will prove useful for discussing the types of
focus-related research that has been done on sign languages. Dik makes a major dis-
tinction between completive and contrastive focus, then distinguishes several types of
contrastive focus. For Dik, completive focus is the highlighted item that makes a pre-
supposition true with ‘implied exclusiveness’, that is, it is the exhaustive value. The
previous example ‘Sarah went to the store to buy BREAD’ is an example of comple-
tive focus, as well as the non-contrastive use of ‘It’s bread that Sarah went to the store
to buy.’
Dik’s contrastive focus categories include Restricting (‘only’), Expanding (‘even’),
Selecting (from a closed and known set), Replacing (X, not Y), and Parallel (‘and’,
‘or’, ‘but’). Information on each of these types is available for ASL (section 3).

1.3. Topic

As with focus, there are a number of different uses of the term ‘topic’. At the dis-
course/conversational level, Givón (1983) considers ‘topic’ to be any participant in a
discourse. At the sentence level, the terms ‘theme, rheme’ and ‘topic, comment’ have
been used to separate topic and focus, again with the focus as the new information.
This means that ‘topic’ covers whatever is not new. Another suggested interpretation
is the ‘aboutness’ topic (Reinhart 1982), that the topic is the thing that ‘the proposition
in the sentence is about’. Some researchers take this notion of topic as basic, framing
their sentence descriptions in terms of topic versus comment (‘topic-comment struc-
ture’). For example, in his discussion of Finnish Sign Language (FinSL), Jantunen
(2007, 2008) indicates that if there is a pause after the first constituent in a transitive
construction (verb and both its arguments), then the structure is not a single clause
but rather the first constituent is a ‘clause-external left-detached topic element’ and
the remainder is the comment. However, as we shall see in discussion of topicalization
(section 2.2.3) and lexical focusers (section 3.1), it is possible for the first constituent
to be followed by a pause and not be topic, but rather focus.
Vallduví (1991, 1995) argues for a separate level of Information Structure (also
Lambrecht 1994) that interacts with syntax and prosody. Of relevance is that Vallduví
includes in his model both the notions of Topic-Comment and Background-Focus. He
does this by dividing a sentence into the focus and the ground, and then further divid-
ing the ground into a ‘link’ and a ‘tail’. In doing so, he avoids the term ‘topic’, and
instead divides the non-focused information according to its function. An optional link
can connect the current sentence to preceding sentences or discourse history, much as
the traditional notion of topic would do, but Vallduví’s formulation also allows the
sentence subject to perform the same function. So for him, the distinction between
topic and subject is not an informational one, and both may be links (assuming that
the subject is not in focus). Vallduví’s tail performs a different function: since it is old/
given non-focused information, its presence in the sentence is strictly speaking redun-
dant and could be omitted. Vallduví argues that this is what happens unless the speaker
needs to indicate to the addressee that the speaker believes that the information that
the addressee has is not accurate and needs to be corrected. From Vallduví’s perspec-
tive, focus information alone tells the addressee to add this new information to the
addressee’s information store (to update the addressee’s semantics). Focus information
followed by a tail, on the other hand, indicates not only that there is new information
that the addressee should now store, but also that the speaker believes that the ad-
dressee has incorrect information stored and this new information should be used to
replace the incorrect information.

1.4. Summary

What we have seen is that speakers are concerned with the information status of their
addressees and formulate their messages to make it clear to the addressee what infor-
mation is new (focus), what is old/shared (topic/link), and what is intended as a correc-
tion (contrast/tail). Speakers also attempt to formulate sentences in such a way as to
guide the addressee’s determination of whether the new information is all the possible
information (exhaustive) or only perhaps part of what might be included (see chap-
ter 22, Communicative Interaction, for Grice’s conversational principles that help at
the larger discourse level).

2. Linguistic encoding of topic information


When new information is introduced into a discourse as focus in one sentence, it may
become old information and be the topic of the very next sentence. If, however, the
information was not just introduced into the discourse, a conversational participant
who is sure that the addressee knows certain information because it is general knowl-
edge may still include that information in a statement without any special marking. If
there is some uncertainty about the addressee’s knowledge, the speaker may look to
the addressee for back-channel confirmation ⫺ head nodding, no puzzled looks ⫺ and
upon receiving such confirmation, may continue to treat the information as known/
shared, and may assume that it is safe to refer to that information using pronouns, pro-
drop in languages that permit it, reduced forms, or other backgrounding devices. A
sensitive speaker will take corrective action if the back-channel confirmation does not
come, by providing further explanation to the addressee. If the speaker believes that
the addressee can retrieve information with little or some effort, the information may
be marked with expressions such as ‘you know’/know-that, eye squints (Engberg-Ped-
ersen 1990), or some other memory prompter. Otherwise the speaker must treat the
information as new and code it accordingly. In this way, the flow of information is
negotiated as the conversation continues in order for maximal comprehension to be en-
sured.

2.1. Discourse level topic

As indicated above, if the speaker may reasonably assume that the addressee knows
what is being talked about and is able to follow the flow of information without diffi-
culty, ground information may be omitted, used in pronoun form, or otherwise put in
background constructions. We can see this clearly with data from an ASL production
of the story “The fox and the stork” (signed by Patrick Graybill). The fox is most
salient as the host and chef while the stork is newer as the guest. One expects then
that subsequent reference to the fox should be as though the fox were old information
and subsequent reference to the stork should be more explicit (as befits new informa-
tion). Since ASL is a pro-drop language (Kegl 2004; Lillo-Martin 1986), reference to
old (but salient) information can be omitted entirely, unless a link (as discussed in
section 1.3) is needed. Explicit NPs (nominals or pronouns) are certainly an indication
of newness (or less salience). In the analyzed several minutes of the story, there are
22 total references (overt and null) to the stork and 17 to the fox. The stork is referred
to overtly by either a lexical item or a pronoun in 44 % of the subject slots (8/18),
while the fox is referred to overtly in 18 % of the subject slots (3/17). The fox never
occurs as a direct object; the stork is referred to overtly in 25 % of the (relatively few)
object slots (1/4). To the extent that this type of analysis gives us an estimate of the
relative topicality of the fox and stork, it would appear that the fox is more topical as
reflected by the fewer occurrences of overt referencing ⫺ the stork, by contrast, is
less topical and is explicitly referenced almost three times more often than the fox
(Wilbur 1994).
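These percentages can be recomputed directly from the counts just given; the Python snippet below is merely an arithmetic check, with the labels invented for exposition:

# Overt references per available slot in the fox and stork narrative
counts = {
    'stork, subject': (8, 18),
    'fox, subject': (3, 17),
    'stork, object': (1, 4),
}
for label, (overt, slots) in counts.items():
    print(f'{label}: {overt}/{slots} = {overt / slots:.0%}')
# stork, subject: 8/18 = 44%
# fox, subject: 3/17 = 18%
# stork, object: 1/4 = 25%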
Similarly, in a study of information structure and word order in Croatian Sign Lan-
guage (HZJ) narrated short stories, Milković, Bradarić-Jončić, and Wilbur (2007) re-
port that while the basic word order is SVO, this order is affected by prior context as
well as contrastive focus. In accordance with Grice’s Maxim of Quantity (‘say no more
than is necessary’), HZJ has a tendency to omit old information, or to reduce it to
pronominal status. When old information is overtly signed in non-pronominal form, it
occurs at the beginning (left side) of the sentence. A variety of mechanisms are used
to show items of reduced contextual significance: use of assigned spatial location and
eye gaze for previously introduced referents; use of the non-dominant hand for back-
grounded information; use of classifiers as pronominal indicators of previously intro-
duced referents; and complex noun phrases that allow a single occurrence of a noun
to simultaneously serve multiple functions.

(11) (H1) = dominant hand; (H2) = non-dominant hand [HZJ]
boy walk. see car1-cl (H1) walk not-see car2-cl (H1) walk (H1) fall
(H2) car1-cl----------------- (H2) bhitd
‘The boy was crossing the street and saw one car. But he didn’t see a second
car coming from the other direction. That car hit him, and he fell.’

Example (11) gives an illustration of an HZJ signer’s description of a series of three pictures that show a boy crossing the street and being hit by a car (adapted from
Milković/Bradarić-Jončić/Wilbur (2007, 1014)). In (11), we see several of the back-
grounding devices in play. The boy is introduced with a noun sign in the first sentence,
and then, being the most salient topic of the rest of the description, is never overtly
mentioned again. In the second sentence, the first car is introduced with the dominant
hand and then kept in the discourse as backgrounded information on the non-dominant
hand in the third sentence while the dominant hand describes the boy continuing to
walk without seeing the second car. In the fourth sentence, the dominant hand indicates
that the boy continues to walk and the non-dominant hand, now referring to the second
car, shows the car hitting the boy by using classifiers and verb agreement. In the last
sentence, the boy falls, but only the verb fall is signed. Milković et al. (2007) analyze
this sequence from a Theme-Rheme perspective. The first sentence is Theme1 Rheme1.
The second one does not sign the Theme again, represented in parentheses as (T1),
and introduces Rheme2, while sentence three has the structure (T1) Rheme3 on the
dominant hand, and the Rheme2 becoming T2 on the non-dominant hand. In sentence
four, the dominant hand continues with (T1) walking, while the non-dominant hand
takes (R3) as (T3) and introduces R4 as (T3) hits (T1). In sentence five, R5 is introduced
as (T1) falls. Thus, we have several backgrounding strategies for previously introduced
referents: omission rather than pronominalization; classifiers as pronominal indicators;
spatial location; eye gaze to the spatial location; complex noun phrases (NPs containing
relative clauses); and simultaneous signing using both hands. These devices permit
information to be conveyed without the need for separate signs for every referent,
which would create longer constructions. Thus, the linguistic coding of discourse topics
is not like the special devices discussed in the next section for sentence level topics.

2.2. Sentence level topic

2.2.1. Subject vs. topic

At the sentence level, it is necessary to discuss the relationship between ‘subject’ and
‘topic’. Topics are not necessarily grammatical subjects and grammatical subjects are
not necessarily topics (Lambrecht 1994, 118). In some languages, like Japanese, there
is a marker for subject (ga) and a marker for topic (wa). Traditionally, one test for the
difference is agreement: if a language has agreement on the verb, it will (usually) agree
with the subject but not with a topic (Crasborn et al. (2009) argue to the contrary for
Sign Language of the Netherlands (NGT); however, their claims are debatable in part
because what they refer to as an agreement marker is the resumptive pronoun in final
position; see section 2.2.2 below). Another test is that topics must be definite whereas
subjects need not be (Li/Thompson 1976; Keenan 1976).

2.2.2. Topic vs. topicalized

Another distinction that must be made is between ‘topic’ and ‘topicalized’. In this
regard, it is somewhat easier to say what topicalized is than to say what topic is. Topical-
ized traditionally implies that a noun phrase (NP/DP) or prepositional phrase (PP) has
been moved from its base position to the front of the sentence (where topics are usually
located) in order to be highlighted. Thus, topicalization is actually a form of focusing
and, as has been pointed out numerous times in the literature, should be called ‘focali-
zation’. Furthermore, the moved constituent should have the primary stress of the
sentence. Marking of topicalization will be treated in the section on marking topics,
and topicalization as a focusing device will be treated in the section on focusing.
Aside from topicalization, sentence level topic divides into two basic types: plain
topic, henceforth ‘topic’, and topic that is connected to the main clause by a resumptive
pronoun, henceforth ‘left dislocation’ (LD). These two kinds of topic normally would
be unstressed unless contrastive. Despite its suggestive name, the term ‘left dislocation’
is not a claim about movement of an item to the left periphery. The term, as carefully
defined by McCawley (1988), requires the left peripheral item to be nominal, not pro-
nominal, and unstressed. There must be a co-indexed resumptive pronoun in the main
clause, which means that the item on the left is base-generated, not moved. Likewise
with right dislocation (RD), the pronominal must be in the main clause, and the nomi-
nal must be outside the main clause, on the right and unstressed (although RD is not
the mirror image of LD; Cecchetto 1999). By this definition, ASL does not have right
dislocation, contrary to what is claimed by Neidle et al. (2000) (Wilbur 1994).
With this distinction made, it can be seen that the NGT examples that Crasborn et
al. (2009) use to make their argument for ‘topic agreement’ are in fact LD topics (12).
Indeed, they argue that the pronoun copy must be there in order for the left peripheral
item to be considered a topic (Crasborn et al. 2009, 359; pt = pointing sign).

neutral tilted nod neutral
(12) ptright persoon morgen thuis, ptright krant lezen ptright [NGT]
that person tomorrow at.home he newspaper read he
‘The man, tomorrow at home he will read the newspaper.’

Note also that in (12), there are two pronouns, one of which is subject of the sentence.
Thus, the final pronoun can be said to agree with both the topic and the subject,
suggesting that additional tests are needed to better evaluate their claim. Also, Bos
(1995) analyzed these same pronouns as instances of subject copy.
The three main types of ‘topic’ are illustrated in (13a⫺c) (Ziv 1994). In example
(13a), ‘As for Left Dislocation’ establishes the topic about which the assertion ‘the
definition we use here is from McCawley’ is predicated. Note that in this structure,
there is neither a resumptive pronoun nor a trace/gap. In (13b), the resumptive pro-
noun ‘it’ is co-indexed with (refers to the same thing as) ‘Left Dislocation’. Finally, in
(13c), from my (New York) dialect of English, ‘Left Dislocation’ has been fronted from
the position where the trace t is shown. Many native speakers of English would much
prefer to have the resumptive pronoun, that is, a structure more like (13b).

(13) a. Topic: As for Left Dislocation, the definition we use here is from McCawley.
b. LD: (As for) Left Dislocationi, we use McCawley’s definition of iti.
c. Topicalization: Left Dislocation, many people are confused about t.

2.2.3. Topic marking

The seminal work on topic marking in ASL is Aarons (1994), which separates three
categories of ‘topic’. She notes that topics are differentiated by both position and non-
manual marking (NMM), that ASL restricts the number of possible topic positions in
a single sentence to two, and that if there is more than one, each must come from a
different category. If one of them is topicalization, it must be the second one in se-
quence, that is, the one closer to the main sentence/CP. (In Minimalist ‘cartographic’
terminology, this means that ASL cannot have the lower TopicP filled if FocusP is
filled, assuming topicalization is in FocusP.)
Aarons describes the possible non-manual topic markings (tm) as follows and indi-
cates whether the marked constituent is base-generated or moved:

(14) tm1: raised eyebrows; head tilted slightly back and to the side; constituent is
moved
tm2: raised eyebrows; single head movement where head first tilts back then
moves downward; constituent is base-generated
tm3: raised eyebrows; rapid head nod; constituent is base-generated

In the present terminology, tm1 reflects topicalization (focus), tm2 topic, and tm3 left
dislocation. The following examples illustrate the three different categories.

(15) tm1 [ASL]
john(fs)i not-like jane. mary(fs)j, ixi loves tj.
‘John doesn’t like Jane. It’s Mary he loves.’

In (15), the object of ‘he loves’ has been fronted from its base SVO position after the
verb. As we discussed with respect to example (2), this is a case of contrastive focus,
with ‘Mary’ here in direct contrast with ‘Jane’, who is mentioned in the preceding
sentence. It is of course necessary in contrastive focus for ‘Mary’ to be new informa-
tion, which is why ‘Mary’ is not a true topic. In contrast, in (16), vegetable is old
information (which may be being reintroduced into the context), and corn bears a
subcategory relation to the category vegetable.

(16) tm2 [ASL]
vegetable, john(fs) like corn
‘As for vegetables, John likes corn.’
(17) tm3 tm2 [ASL]
john(fs)i, vegetable, ixi prefer artichoke
‘As for John, as for vegetables, he prefers artichokes.’

In the LD structure in (17), johni is covered by tm3 and we observe the required
resumptive pronoun (ixi) in the main clause (cf. also Lillo-Martin 1991). Example (17)
also shows the sequence of LD and topic (tm2) prior to the main clause ixi prefer arti-
choke.
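Aarons’ restrictions on these pre-clausal positions can be summarized schematically (the summary is my own):

(tm3 LD) (tm2 topic) [main clause]   as in (17)
(tm2 topic or tm3 LD) (tm1 topicalization) [main clause]

At most two positions may be filled, each from a different category, and tm1, being focus-related, must be the one closest to the main clause.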

3. Linguistic encoding of focus


The purpose of focus is to provide new information and to draw the addressee’s atten-
tion to it. Focus may be marked by syntax, by lexical focus markers, or by prosody.
Some languages may have a rigid focus position, for instance, Hungarian, which puts
focus before the verb. There are also syntactic focusing constructions, such as the Eng-
lish it-cleft (18) and wh-cleft (19).

(18) It was Linda who called last night.


(19) What bothered me the most about his behavior is that he seemed to think no-
one noticed.

Lexical focus markers in English are words like ‘even’ (20) and ‘only’ (21) (Rooth
1992).

(20) Even a four-year-old could do that faster than you.


(21) Only Elisabeth got paid on time.

With respect to prosodic marking of focus, there are a few issues that need to be
separated. The first one is the difference between focus and stress. It is clear that stress
is assigned under focus. As Selkirk (1984) observes, the entire focused constituent may
not be stressed, but it will contain the primary stress of the sentence inside it. One
difference between focus-related stress and the scope of focus itself is that stress is
assigned to lexical items only, whereas focus-related stress on a single item can cause the rest of the unaccented lexical items within its constituent to be in focus (Rochemont/Culicover 1990; Selkirk 1984).
Using (19) above as an example, the entire focused constituent is ‘that he seemed
to think no-one noticed’. Depending on context and speaker intent, there are four
words that could have the primary stress in this clause (22a⫺d).

(22) a. that HE seemed to think no-one noticed


b. that he SEEMED to think no-one noticed
c. that he seemed to think NO-ONE noticed
d. that he seemed to think no-one NOTICED

The situation could be even more complex (23):

(23) that HE seemed to think no-one NOTICED

Thus, focus and stress are distinct notions, focus being an informational status and
stress being a prosodic status.
The second distinction is between focus and emphasis. Linguistic prominence for
‘emphasis’ can be separated from focus. Both share the characteristic of putting special
attention or weight on something, but they are grammatically separate. While focused
items may also be emphasized, not all emphasized items are focused. Since the catego-
ries that can be stressed by doubling are mutually exclusive with the categories that
can be focused by the wh-cleft, I treat doubling as a marker of emphasis (Wilbur 1994).
Doubling will be discussed further in section 3.2.3.
The third distinction is what Vallduví (1991) refers to as ‘plasticity’. Languages that
are [+plastic] in his framework are able to shift the primary stress to different posi-
tions within a sentence without changes in word order. Thus there is an interaction
between prosody and syntax that affects how focus is realized. We return to this in
section 3.2.2.

3.1. Types of focus

Using Dik’s typology as a starting point, we illustrate different types of focus marking
from the literature on ASL.

3.1.1. Completive focus

Dik’s notion of completive focus is the one in which the focused item provides the
exhaustive new information (Lillo-Martin/de Quadros (2008) call this Information Fo-
cus in their discussion of ASL and Brazilian Sign Language, LSB). That is, if we say
‘Sarah went to the store to buy bread’ or ‘Sarah bought bread at the store’, in both
cases we mean that ‘bread’ is the only thing (the exhaustive list) that Sarah intended
to buy or bought.

In ASL, the sign that and the syntactic cleft (English ‘it was X that …’) can be
used for completive focus. This use of that is ‘focuser that’ and should not be confused
with that when used as a demonstrative. One difference is that focuser that requires
the focused item to precede it whereas in its demonstrative use, that usually precedes
any noun that may occur with it and there is no semantic entailment of focus on the
noun. A second difference is that the focused item with focuser that is marked with
brow raise (which does not cover that) while nouns occurring with the demonstrative
do not have brow raise on them. Third, if focuser that is followed by old information
(performing either link or tail functions), there is always a prosodic break after that.
With the demonstrative, since that is followed by a noun (with, of course, possible
adjectives and other modifiers), there is no prosodic break after that. Finally, the
focused item occurring with that has primary stress, whereas the noun occurring with
the demonstrative does not (see Wilbur (1999a) and Wilbur/Schick (1987) for descrip-
tions of stress in ASL).
To show that that is the marker of completive focus, we compare it with self, which
is a contrastive focus marker. The behavior of self differs from that in two clear ways.
First, that requires brow raise on the focused item, whereas self does not; although brow raise may occur with self, it is not always due to self itself, and there are instances in which self occurs under the brow raise along with the focused item, whereas focuser that does not. Second, that occurs with lean back while self occurs with lean forward
(Wilbur/Patschke 1998). Consider the two scenarios with that (24) and with self (25).
In (24), there is no previous mention of other drivers in person A’s utterance. In answer
to person B’s question, kay that is acceptable but kay self is not. This contrasts with
the acceptability of kay self in situations where other drivers are available in the
discourse (25). (26) illustrates a that-cleft. In all three examples, Kay is in focus.

(24) that (completive) [ASL]


A. Kay was driving her Dad’s new sports car and ran it into a tree.
B. WHO was driving the car?
br br
A. kay that *kay self
‘Kay was.’

(25) self (contrastive) ⫺ acceptable if other drivers are mentioned or possible in


situation [ASL]
A. Who was driving that car?
B. kay self
‘Kay was.’

(26) that ⫺ cleft [ASL]


A. I told Kay she should consider going into counseling.
B. You told who?
lean back
br
A. kay that, told finish
‘It’s Kay that I told.’

The use of self in (25) is an instance of what Golde (1999) refers to as ‘intensive NP
focus’. As Koulidobrova (2009) notes, self in ASL is not a long-distance anaphor, but
functions as an adnominal intensifier. This is in keeping with Ferro’s (1992) observation
for spoken languages that contrastive focus is a significant function of ‘self’ and that
the reflexive use is a historically later development. This fact is important because
when researchers address an unstudied or understudied sign language and expect to
see a reflexive, they may erroneously assign this function to all occurrences of ‘self’
without fully appreciating the bigger picture. In addition, Fischer and Johnson (1982)
observed that self in ASL also occurs in relative clauses with new information, predi-
cate nominals, and other structures where it looks suspiciously like a copula (similar
observations are made in Wilbur (1996a); also see Branchini’s (2006) analysis of the
Italian Sign Language sign pe, which appears in relative clauses and clefts).
It is worthwhile to make a short digression to consider the interaction between
completive/information focus and topic when both occur in the same sentence. Lillo-
Martin and de Quadros (2008, 169 f.) provide examples from ASL and LSB illustrating
this interaction. In one case, a moved topic (Aarons’ ‘tm1’) follows the I(nformation)-focus constituent (27) and in the other, a base-generated topic (Aarons’ ‘tm2’) precedes
the focus (28).

(27) S1: what you read ix school? [ASL/LSB]


‘What did you read at school?’
I-foc tm1
S2: a. book stokoe, ix school, i read
tm1
b. ix school, i read book stokoe
‘At school I read Stokoe’s book.’

(28) S1: fruit, what john like? [ASL/LSB]


‘As for fruit, what does John like?’
tm2 I-foc
S2: fruit, banana, john like more
‘As for fruit, John likes bananas best.’

These examples illustrate the problem with adopting a rigid rule on the position of
topic and focus in sentences. Additional tests are necessary to distinguish the specific
function of each sentence-initial phrase or clause. We turn next to the remaining types
of focus in Dik’s typology and how they are represented in sign languages.

3.1.2. Types of contrastive focus

Dik’s categories of Restricting and Expanding focus are marked in English with ‘only’
and ‘even/also’, respectively. ‘Only’ is a prototypical restrictive particle, whereas ‘also’
is an expanding/additive particle. ‘Even’ is also an expanding particle which places its
focus associate on a particular scale. In ASL, the sign only and its variant only-one
are used for Restricting focus (29) (Wilbur/Patschke 1998, 285).

br br
cs lean back
(29) ix1 recent find-out what, kim only-one get-a [ASL]
‘I recently found out that Kim is the only one who got an A.’

For Expanding focus, both ‘also’ and ‘even’ are translated into ASL with the sign same
illustrated in (30).

lean forward
hn
(30) all know-that same bill(fs)j indexj test ixj get-a [ASL]
‘Everyone knows that even Bill got an A.’

In (29) and (30), we see the use of body leans as markers in addition to the focusing
signs. In a comparison of three sign languages (NGT, German Sign Language (DGS),
and Irish Sign Language (Irish SL)), Herrmann (2010) reports that all three have man-
ual signs for ‘only’ and ‘also’, but that the equivalent for ‘even’ requires both a manual
sign and specific non-manual markers, including body lean, head tilt, raised eyebrows,
and wide eyes (a marker of surprise, or ‘counter to expected information’).
The remaining types of contrastive focus in Dik’s typology also rely heavily on such
leans, although there may be other markers, perhaps on the face, which have simply
not been identified as yet. Selecting focus involves a context in which there is a known
and closed set, from which one item is chosen (31). ASL marks the chosen item with
‘lean forward’ (Wilbur/Patschke 1998, 295).

(31) A: Kay and Kim got in a wreck. I think she wasn’t wearing her glasses.
B: Who wasn’t wearing her glasses?
lean forward
A: kay [ASL]

When Replacing focus is involved (e.g. ‘X, not Y’), lean forward is used to mark the cor-
rect response X and lean backward is used to mark the rejected response Y (32; note
that in ASL, the signs for ‘die/death’ and ‘bet’ are similar) (Wilbur/Patschke 1998, 296).

lean back lean forward


(32) ix1 not say ‘death’, ix1 say ‘bet’ [ASL]
‘I didn’t say “death”, I said “bet”.’

When Parallel focus is involved, two items in the same sentence are contrasted with
each other (‘and’, ‘or’, ‘but’). For the expression of this type of focus, ASL uses left/
right leans (33) as well as forward/backward leans (Wilbur/Patschke 1998, 296).

lean left lean right


br
(33) ix2 like what, chocolate vanilla [ASL]
‘Do you prefer chocolate or vanilla?’

Kooij, Crasborn, and Emmerik (2006) report similar results for the use of leans in
NGT.

3.2. Focus marking involving syntax and its interaction with prosody

3.2.1. Focus involving movement

In this section, we consider focus marking in which a focused item is clearly moved to
a position other than what might be expected from the neutral word order. Consider
first the ordering of the noun before the lexical focusers that, self, and only(-one),
as seen in examples (24⫺26). The focused noun appears in the specifier position of
the D(eterminer) Phrase, spec-DP, preceding the focuser sign, which is the head D of
DP. In the syntax and semantics literature, it is recognized that spec-DP is a position
in which items in the restriction of dyadic/restrictive operators, in this case the focus
operator, are found (see Wilbur (2011) for elaboration and application to ASL). Thus,
what we see here is the behavior of ASL focused nouns in accordance with syntactic
and semantic principles originally identified for spoken languages.
Another position for focused items is a sentence-initial position. The identity of this
position varies. In older analyses, it is known as the specifier of the CP. In more recent
analyses, the CP has been broken down into several specific phrases, also known as
the ‘left periphery’ (Rizzi 1997): Force Phrase, which indicates whether a sentence is,
for instance, an assertion, interrogative, or imperative; Topic Phrase; Focus Phrase;
another possible Topic Phrase; and Finite Phrase, which indicates whether the sentence
is finite (tensed) or non-finite (infinitival). The specifier of Focus Phrase would be the
likely position for fronted focused material in such analyses. Several authors (Petronio
1993; Watson 2010) have suggested specific phrases for prominence independent of CP
or its split projections. In some cases, the analysis depends on whether doubling is
treated as focusing.
Beginning with contrastive focalization (‘topicalization’), we have the movement of
a constituent to sentence-initial position for purposes of contrasting it with an item
previously mentioned by the same or a different speaker. From Aarons (1994), we
have example (15), repeated here:

tm1
(15) john(fs)i not-like jane(fs). mary(fs)j, ixi loves tj. [ASL]
‘John doesn’t like Jane. It’s Mary he loves.’

Note that the English translation uses the it-cleft, but in certain dialects, like that of
this author, a common structure would be the same as the ASL structure, that is, plain
topicalization: ‘Mary, he loves’. This structure is analyzed as movement of the focused
item to spec-CP (or spec-FocusP). Wilbur (2011) analyzes this as another case of the
material being moved to the restriction of a dyadic/restrictive operator ⫺ the focus
operator. This operator movement requires brow raise on the moved/focused item,
followed by a right constituent edge prosodic boundary marking, either a pause, blink,
head nod, eye gaze change, or a combination thereof (notated with comma after mary).
A surprising observation is that the same behavior ⫺ movement to sentence-initial
position and non-manual marking with brow raise ⫺ can occur with information that
is not in focus. Two examples of this phenomenon are (i) structures with modals or
negation, and (ii) wh-clefts (among others).

In the case of sentence-final modals (e.g., can, can’t, should, must) or negatives, a
position which presumably indicates a focus on the modal or negative, the non-focused
material is moved to the initial Topic Phrase, where it receives brow raise from the
semantic topic operator (34a) (Wilbur/Patschke 1999). This leaves the negative or mo-
dal in sentence-final position. Given that the negative or modal does not receive brow
raise, we may conclude that it remains in situ rather than suggesting that it has moved
to Focus Phrase, where it would still come after the material in the Topic Phrase, as
such movement would place it in the restriction of the focus operator and it would
have to get brow raise. Thus, the kind of focus we find in (34a) fits best with the idea
that it is the new information, whether completive or contrastive, but that it is not
associated with any additional semantic focus operator (e.g., it is not exhaustive). Note
that this differs from the assertion of the negative illustrated in (34b).

br
(34) a. elinor doctor not [ASL]
‘It is not the case that Elinor is a doctor.’
b. elinor not doctor
‘Elinor is not a doctor.’

The same type of analysis can be suggested for wh-clefts (also known as the ‘rhetorical
question structure’).

br
(35) ellen work do++ what, clean sterilize surgeon poss tools [ASL]
‘What Ellen does for a living is sterilize surgeons’ tools.’

In (35), it seems even more obvious that clean sterilize surgeon poss tools is in
situ, and many even more complex structures occur in ASL. What we see in (35) is
the same kind of structure as in the English example (19), repeated here.

(19) What bothered me the most about his behavior is that he seemed to think no-
one noticed.

As we discussed for that case, in English the primary stress within the focused constitu-
ent ‘that he seemed to think no-one noticed’ could occur on several of the lexical items
or possibly on two in the same sentence (examples (22) and (23)). These structures
have been the subject of differing analyses (Wilbur 1996b; Hoza et al. 1997; Grolla
2004; Davidson/Caponigro/Mayberry 2008), focusing mainly on the behavior of the
wh-word, and the question of whether they form a single complex sentence (Wilbur),
clausal question-answer pairs (Grolla; Davidson/Caponigro/Mayberry), or separate
question-answer pairs (Hoza et al.). The features that need to be accounted for in a
successful analysis are (i) the presence of brow raise on old information and (ii) the
focusing of the large constituent. We turn now to a major typological difference be-
tween English and ASL that explains why ASL prefers structures like (35).

3.2.2. Prosodic differences between English and ASL and how that affects
focus marking

We return now to Vallduví’s (1991) notion of ‘plasticity’ ⫺ the ability of a language to shift the primary stress to different positions within a sentence without changes in word order. For example, English is [+plastic] (36) whereas Catalan is [⫺plastic], with its
primary stress in clause-final position (37; examples from Vallduví):

(36) a. The boss [hates BROCCOLI]


b. The boss [HATES] broccoli
c. [The BOSS called]
(37) a. L’amo [odia el BRÒQUIL] [Catalan]
b. L’amo [l’ODIA t1], el bròquil1
c. [Ha trucat l’AMO]

For the equivalent of English (36a), Catalan (37a) requires no special adjustments, as
the object is in focus and in final position. In contrast, (37b) puts focus on the verb,
which normally precedes the object. A quick look at (37b) might lead one to think
that the verb is still right before the object, but in fact, the object has been right-
dislocated out of the main clause (leaving a trace t) and a resumptive clitic (l’) has
been inserted before the verb. This adjustment ensures that the verb is now in main
clause-final position, where it receives primary stress. The object is now outside of the
main clause and, meeting McCawley’s definition, is unstressed. In the Catalan equiva-
lent of the English example (36c), the subject is moved to clause-final position in order
to receive stress (37c). Thus, Catalan, unlike English, cannot shift the primary stress
within a sentence and requires that the focus information and the prominence in final
position be brought together by other means. The typology of sign languages with
respect to plasticity has yet to be investigated, but it is clear that ASL is [⫺plastic]
(Wilbur 1997, 1999b).
ASL is similar to Catalan with respect to clause-final position for focus. However,
it differs from Catalan in several respects. First, ASL does not allow right dislocation.
McCawley (1988, 95) notes that right dislocation can move complement sentences and
any type of NP to the end of a sentence, leaving a corresponding pronoun (38).

(38) a. He’s just bought a new car, my uncle.


b. It’s unbearable, the weather in Syracuse.
c. It came as a surprise, John’s resignation.

Right dislocation in Catalan occurs in a much wider range of contexts than in English.
As evidence, Vallduví (1991, 1995) points to the fact that in Catalan, the equivalent of
the English ‘Is Luke there?’, said on the phone after the person answering says
‘Hello?’, is the right dislocation form in (39a) (capitals indicate stress):

(39) a. Que hi ES, el Lluc? [Catalan]


Q loc be-3sg the Lluc
b. *Que hi es el LLUC?

This structure is not an afterthought version in which the speaker suddenly realizes
the need to indicate the full subject, because (39b) is not grammatical in Catalan and
because afterthoughts are stressed.
In English, intonational prominence is [+plastic], that is, prominence can be shifted
to different positions in the sentence. In Catalan, which is [⫺plastic], intonational
prominence is fixed on clause-final position, and focus and prominence are brought
together through extensive use of both right and left dislocation. Any material that
intervenes between the focused material and the final position in the main clause tends
to be omitted (because it is background/old) or placed in initial position as topic (a
‘link’ function). This is what we suggested in our discussion of examples (24) and (26).
The difference becomes really clear when English (40) and ASL (41) are compared
sentence by sentence.

(40) a. Chris saw Ted put the book on the DESK. (not the bookshelf)
b. Chris saw Ted put the book ON the desk. (not in it)
c. Chris saw Ted put the BOOK on the desk. (not the letter)
d. Chris saw Ted PUT the book on the desk. (not drop it)
e. Chris saw TED put the book on the desk. (not Samantha)
f. Chris SAW Ted put the book on the desk. (not just assumed it)
g. CHRIS saw Ted put the book on the desk. (not Javier)
br
(41) a. chris see ted put book where, desk [ASL]
br
b. desk, chris see ted put book where, on-desk
br
c. chris see ted put-on desk what, book
br
d. chris see ted book do+, desk put-on
br
e. chris see book desk put-on who, ted
hn hn++
f. true, ted book desk put-on, chris see
br
g. chris that, see ted book desk put-on

We see from (41) that ASL does not translate the English equivalents with shifted
stress, nor does it always put in focus the exact constituent that English does. For
example, ASL cannot focus the sign on (41b) or the verb put (41d) in isolation. In
examples (41a⫺e), ASL uses the wh-cleft focusing structure (also referred to as ‘rhe-
torical questions’), which, however, does not work for focusing the main verb (41f),
which is expressed with various paraphrases, nor the main subject (41g), for which the
that-cleft is preferred. This re-wording is a result of the lack of flexibility of promi-
nence for focus marking in ASL. There is, then, an interaction between prominence
for focus and word order at the typological level (Wilbur 1997, 1999b), as represented
in (42), where [GR] is a feature reflecting whether the language prefers to use gram-
matical relations [+GR] or discourse roles [⫺GR] to determine word order. (Inde-
pendently, Van Valin (1999) observes a similar distribution using other languages.)

(42)         [+Plastic]   [⫺Plastic]
     [+GR]   English      Catalan
     [⫺GR]   Russian      Spanish, ASL

3.2.3. Doubling

Sign languages frequently employ a process of ‘copying’ or ‘doubling’ certain lexical
categories, including modals, negation, wh-words, subject pronouns, quantifiers, and
numerals. Typically, a doubled item will occur sentence-initially or in-situ, and the dou-
ble will occur sentence-finally. Doubled items are not new information, but their repeti-
tion serves to emphasize them as important (Petronio 1993). Lillo-Martin and de
Quadros (2008) treat doubling as a type of E(mphatic)-focus marking (E-foc) in both
ASL and LSB. This type of focus differs from I(nformation)-focus, in a way that is
reminiscent of the distinction that Drubig (2003) makes between focus in CP and focus
below CP in TP. According to Lillo-Martin and de Quadros, there needs to be a projec-
tion above FocP, which they call T(opic)-C(omment)P, to host base-generated topics,
and two projections below FocP, one of which is TopP, which can host moved topics,
and the other below TopP, which is E-FocP, to host doubles. When there is doubling
in their analysis, and a double is in E-FocP, the TP is moved to the specifier of TopP,
resulting in a structure which has the main clause (i.e. what has been moved to spec-
TopP) followed by the double located in the E-Foc head, thereby ensuring that the
doubled item is in sentence-final position. With one additional step, the deletion of the
doubled item from its original position, a sentence is generated with the copy in final
position, leaving only one occurrence instead of two.
Alternatively, Neidle et al. (2000) treat doubling as a plain copying process, adjoin-
ing the copy to the C head on the right in ASL, or as copying into a Tag Phrase. The
difference between these two types of copying can be seen in a comparison of copying
in ASL, HZJ, and Austrian Sign Language (ÖGS) (Šarac et al. 2007). All three lan-
guages allow doubling, but they differ on which categories can be doubled and on
whether there must be a pause before the doubled item or not. In this regard, ASL
(43) and HZJ (44), which both have the head C on the right, allow doubling with or
without a pause (Šarac et al. 2007, 215⫺218). In contrast, ÖGS, which has the head C
on the left, requires a pause before the double (indicated by a comma in (45)), and is
more restrictive in terms of what categories can be doubled, namely only those catego-
ries that can appear in a Tag Phrase (‘cd’ = chin down, ‘bf’ = brow furrow).

(43) a. must go-work must [ASL]


‘I must go to work.’
b. john will go, will
‘John will go.’
(44) a. 5-(što) prati 5-(što) [HZJ]
5-(what) wash 5-(what)
‘What is she washing?’
b. index3 index1 dječak index1
IX-3 I boy I
‘I’m that boy.’

cd
(45) a. bub wollen lernen, wollen [ÖGS]
boy want learn, want
‘The boy wants to learn.’
hn hn
b. bub wollen fussball spielen, wollen
boy want football play want
‘The boy wants to play football/soccer.’
br bf
c. bub sollen fussball spielen, sollen
boy should football play should
‘The boy should play football/soccer.’

In ÖGS, the only doubling that can occur without a pause is subject pronoun copy. As
a result, doubled modals appear in ÖGS only as a Tag, which requires a pause and distinct non-manuals that await further investigation (45) (Šarac et al. 2007, 218 f.).
A clear pause marking, however, may be lost in narratives, which is compatible with
the phenomena of fast signing (Wilbur 2009). Differences in the occurring non-manual
signals, however, still indicate that the doubled item is located in a separate prosodic
phrase. This crosslinguistic comparison suggests that further investigation of doubling
is necessary to clarify whether there is one phenomenon or many. A further concern
for the treatment of doubling in ASL is that from a semantics perspective, focus is
associated with restrictive operator-variable structure, whereas emphasis is not (Partee
1991, 1992, 1995). This can be seen in ASL by the lack of brow raise on either the
doubled items or the material between the doubled items as illustrated in (43) (Wil-
bur 2011).

4. Conclusion

This chapter reviews different approaches to the notions of ‘topic’ and ‘focus’ and their
applications to sign languages. In the introduction to the terminology, an effort has
been made to highlight the distinctions that need to be made in pragmatics, semantics,
syntax, and prosody in order for various claims to be evaluated. These include the
difference between topic and topicalized (the latter being a form of focusing), focus
and contrast (as contrast can also apply to topics), discourse versus sentence level
topics (a discourse topic need not appear in a sentence), subject versus topic (while
subjects may be topical, they need not be structural topics), completive versus contrast-
ive focus (the latter requires both a specific presupposition and clear prosodic mark-
ing), and focus versus stress (not every word in a focused phrase may show stress
marking). In addition, the concept of exhaustivity was presented in order to be able
to make the distinction between contrastive focus, which requires that the focused item
is the only true alternative, and contrastive topic, which does not carry this require-
ment. Contrastive topic implies that the contrasted item is the only true alternative,
but this implication can be cancelled by further information in the sentence without
creating a self-contradiction.

With respect to the linguistic encoding of topic, it was shown that discourse level
topic is frequently reduced or omitted. In contrast, sentence level topic is generally
marked overtly, by pauses and special non-manuals.
For focus, in addition to the fundamental distinction between completive and con-
trastive, a number of different types of contrastive focus were presented. In all cases,
there is overt marking of focus, including the addition of focusers (e.g., that, self),
non-manual marking (leans, brow marking), and focus constructions (that-cleft, wh-
cleft, topicalization). Focus marking also reflects the interaction of syntax and prosody,
leading to a typological categorization based on the ability of a language to move its
stress marking (plasticity) and the preference of a language to determine word order
based on grammatical relations or discourse roles.
Finally, the issue of doubling as a focus process is reviewed. Two analyses are con-
trasted (Lillo-Martin/de Quadros 2008; Neidle et al. 2000). However, further research
is clearly called for, as a number of categories can be copied in some languages but
not in others, with the more restrictive languages able to copy only what would nor-
mally appear in a Tag Phrase. The absence of focus restriction associated with doubling
suggests that while it may be emphatic, perhaps a form of stress, it is not clearly focus.

5. Literature

Aarons, Debra
1994 Aspects of the Syntax of ASL. PhD Dissertation, Boston University.
Baker, Charlotte/Padden, Carol
1978 Focusing on the Non-manual Components of ASL. In: Siple, Patricia (ed.), Understand-
ing Language through Sign Language Research. New York, NY: Academic Press, 27⫺57.
Bos, Heleen
1995 Pronoun Copy in Sign Language of the Netherlands. In: Bos, Heleen/Schermer, Trude
(eds.), Sign Language Research 1994. Proceedings of the Fourth European Congress on
Sign Language Research, Munich. Hamburg: Signum, 121⫺147.
Branchini, Chiara
2006 On Relativization and Clefting in Italian Sign Language (LIS). PhD Dissertation, Uni-
versity of Urbino.
Büring, Daniel
1999 Topic. In: Bosch, Peter/Van der Sandt, Rob (eds.), Focus. Linguistic, Cognitive, and
Computational Perspectives. Cambridge: Cambridge University Press, 142⫺165.
Caponigro, Ivano/Davidson, Kathryn
2011 Ask, and Tell as Well: Question-Answer Clauses in American Sign Language. Manu-
script, UCSD.
Cecchetto, Carlo
1999 A Comparative Analysis of Left and Right Dislocation in Romance. In: Studia Linguis-
tica 53, 40⫺67.
Chafe, Wallace
1976 Givenness, Contrastiveness, Definiteness, Subjects, Topics, and Point of View. In: Li,
Charles N. (ed.), Subject and Topic. New York, NY: Academic Press, 25⫺55.
Covington, Virginia
1973 Features of Stress in American Sign Language. In: Sign Language Studies 2, 39⫺58.
Crasborn, Onno/Kooij, Els van der/Ros, Johan/Hoop, Helen de
2009 Topic Agreement in NGT (Sign Language of the Netherlands). In: The Linguistic Review 26, 355⫺370.
Creider, Chet
1979 On the Explanation of Transformations. In: Givón, Talmy (ed.), Syntax and Semantics: Discourse and Syntax. New York, NY: Academic Press, 3⫺21.
Davidson, Kathryn/Caponigro, Ivano/Mayberry, Rachel
2008 Clausal Question-Answer Pairs: Evidence from ASL. In: Proceedings of the 27th West Coast Conference on Formal Linguistics (WCCFL 27). Somerville, MA: Cascadilla Press, 108⫺115.
Dik, Simon C.
1989 The Theory of Functional Grammar. Part 1: The Structure of the Clause. Dordrecht:
Foris.
Drubig, Hans Bernhard
2003 Toward a Typology of Focus and Focus Constructions. In: Linguistics 41, 1⫺50.
Engberg-Pedersen, Elisabeth
1990 Pragmatics of Nonmanual Behavior in Danish Sign Language. In: Edmondson, William/
Karlsson, Fred (eds.), SLR’87: Papers from the International Symposium on Sign Lan-
guage Research, Lappeenranta. Hamburg: Signum, 121⫺128.
Erteschik-Shir, Nomi
1997 The Dynamics of Focus Structure. Cambridge: Cambridge University Press.
Ferro, Lisa
1992 On “Self” as a Focus Marker. In: Proceedings of the Eastern States Conference on
Linguistics (ESCOL’92), 68⫺79.
Fischer, Susan/Johnson, Robert
1982 Nominal Markers in ASL. Paper Presented at the Annual Meeting of the Linguistic Society of America.
Foley, William A./Van Valin, Robert D.
1985 Information Packaging in the Clause. In: Shopen, Timothy (ed.), Language Typology and Syntactic Description. Vol. 1: Clause Structure. Cambridge: Cambridge University Press, 282⫺364.
Gee, James P./Kegl, Judy A.
1983 Narrative/story Structure, Pausing, and American Sign Language. In: Discourse Proc-
esses 6, 243⫺258.
Givón, Talmy
1983 Topic Continuity in Discourse: A Quantitative Cross-language Study. Amsterdam: Ben-
jamins.
Golde, Karin
1999 Evidence for Two Types of English Intensive NPs. In: Papers from the 35th Meeting of
the Chicago Linguistics Society 35(1), 99⫺108.
Grolla, Elaine
2004 Clausal Equations in American Sign Language. Poster Presented at the 8th International
Conference on Theoretical Issues in Sign Language Research (TISLR 8), Barcelona.
Gundel, Jeanette
1999 On Different Kinds of Focus. In: Bosch, Peter/Van der Sandt, Rob (eds.), Focus. Linguistic, Cognitive, and Computational Perspectives. Cambridge: Cambridge University Press, 293⫺305.
Gussenhoven, Carlos
1983 Focus, Mode and the Nucleus. In: Journal of Linguistics 19, 377⫺417.
Herrmann, Annika
2010 Modal Particles and Focus Particles in Sign Languages ⫺ A Cross-linguistic Study of
DGS, NGT and ISL. PhD Dissertation, Johann-Wolfgang-Goethe University, Frank-
furt.
Hoza, Jack/Neidle, Carol/MacLaughlin, Dawn/Kegl, Judy/Bahan, Benjamin
1997 A Unified Syntactic Account of Rhetorical Questions in American Sign Language. In:
ASL Language Research Project Report No. 4, Boston University.
Jantunen, Tommi
2007 On Topic in Finnish Sign Language. Manuscript, University of Jyväskylä, Finland
(Available at: http://users.jyu.fi/~tojantun/articles/JAN_topic_ms.pdf).
Jantunen, Tommi
2008 Fixed and Free: Order of the Verbal Predicate and Its Core Arguments in Declarative
Transitive Clauses in Finnish Sign Language. In: SKY Journal of Linguistics 21, 83⫺123.
Keenan, Edward
1976 Towards a Universal Definition of ‘Subject’. In: Li, Charles N. (ed.), Subject and Topic.
New York, NY: Academic Press, 303⫺333.
Kegl, Judy A.
2004 ASL Syntax: Research in Progress and Proposed Research [Reprint of 1977 Unpub-
lished Manuscript]. In: Sign Language & Linguistics 7(2), 173⫺206.
Kiss, Katalin É. (ed.)
1995 Discourse Configurational Languages. Oxford: Oxford University Press.
Kooij, Els van der/Crasborn, Onno/Emmerik, Wim
2006 Explaining Prosodic Body Leans in NGT: Pragmatics Required. In: Journal of Pragmat-
ics 38, 1598⫺1614.
Koulidobrova, Elena
2009 self: Intensifier and ‘Long Distance’ Effects in ASL. Paper Presented at the 21st Euro-
pean Summer School in Logic, Language, and Information, Bordeaux.
Lambrecht, Knut
1994 Information Structure and Sentence Form: Topic, Focus, and the Mental Representations
of Discourse Referents. Cambridge: Cambridge University Press.
Li, Charles/Thompson, Sandra
1976 Subject & Topic: A New Typology of Language. In: Li, Charles N. (ed.), Subject and
Topic. New York, NY: Academic Press, 458⫺489.
Lillo-Martin, Diane
1986 Two Kinds of Null Arguments in American Sign Language. In: Natural Language and
Linguistic Theory 4, 415⫺444.
Lillo-Martin, Diane/Quadros, Ronice M. de
2008 Focus Constructions in American Sign Language and Língua de Sinais Brasileira. In:
Quer, Josep (ed.), Signs of the Time. Selected Papers from TISLR 8. Hamburg: Signum,
161⫺176.
McCawley, James
1988 The Syntactic Phenomena of English. Chicago: University of Chicago Press.
Milković, Marina/Bradarić-Jončić, Sandra/Wilbur, Ronnie B.
2007 Information Status and Word Order in Croatian Sign Language. In: Clinical Linguis-
tics & Phonetics 21, 1007⫺1017.
Molnár, Valéria/Winkler, Susanne
2010 Edges and Gaps: Contrast at the Interfaces. In: Lingua 120, 1392⫺1415.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G.
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Nunes, Jairo/Quadros, Ronice M. de
2008 Phonetically Realized Traces in American Sign Language and Brazilian Sign Language.
In: Quer, Josep (ed.), Signs of the Time. Selected Papers from TISLR 8. Hamburg:
Signum, 177⫺190.
Partee, Barbara
1991 Topic, Focus & Quantification. In: Proceedings of SALT 1, 257⫺280.
Partee, Barbara
1992 Adverbial Quantification and Event Structures. In: Proceedings of the Berkeley Linguis-
tic Society 1991 ⫺ Parasession on Event Structures. Berkeley, CA: Berkeley Linguistic
Society, 439⫺456.
Partee, Barbara
1995 Quantificational Structures and Compositionality. In: Bach, Emmon/Jelinek, Eloise/
Kratzer, Angelika/Partee, Barbara (eds.), Quantification in Natural Languages. Dor-
drecht: Kluwer, 541⫺601.
Petronio, Karen
1993 Clause Structure in American Sign Language. PhD Dissertation, University of Washing-
ton, Seattle.
Prince, Ellen
1978 A Comparison of Wh-Clefts and It-Clefts in Discourse. In: Language 54, 883⫺906.
Prince, Ellen
1986 On the Syntactic Marking of Presupposed Open Propositions. In: Chicago Linguistic
Society 22, 208⫺222.
Puglielli, Annarita/Frascarelli, Mara
2007 Interfaces: The Relation Between Structure and Output. In: Pizzuto, Elena/Pietrandrea,
Paola/Simone, Raffaele (eds.), Verbal and Signed Languages: Comparing Structures,
Constructs, and Methodologies. Berlin: Mouton de Gruyter, 133⫺167.
Reinhart, Tanya
1982 Pragmatics and Linguistics: An Analysis of Sentence Topics. In: Philosophica 27, 53⫺94.
Repp, Sophie
2010 Defining ‘Contrast’ as an Information-structural Notion in Grammar. In: Lingua 120, 1333⫺1345.
Rizzi, Luigi
1997 The Fine Structure of the Left Periphery. In: Haegeman, Liliane (ed.), Elements of
Grammar. Handbook in Generative Syntax. Dordrecht: Kluwer, 281⫺337.
Rochemont, Michael S./Culicover, Peter W.
1990 English Focus Constructions and the Theory of Grammar. Cambridge: Cambridge Uni-
versity Press.
Rooth, Mats
1992 A Theory of Focus Interpretation. In: Natural Language Semantics 1, 75⫺116.
Šarac, Ninoslava/Schalber, Katharina/Alibašić Ciciliani, Tamara/Wilbur, Ronnie B.
2007 Crosslinguistic Comparison of Sign Language Interrogatives. In: Perniss, Pamela/Pfau,
Roland/Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Lan-
guage Structure. Berlin: Mouton de Gruyter, 207⫺244.
Selkirk, Elisabeth
1984 Phonology and Syntax. The Relation Between Sound and Structure. Cambridge, MA:
MIT Press.
Vallduví, Enric
1991 The Role of Plasticity in the Association of Focus and Prominence. In: No, Yongkyoon/
Libucha, Mark (eds.), ESCOL ‘90: Proceedings of the Seventh Eastern States Conference
on Linguistics. Columbus, OH: Ohio State University Press, 295⫺306.
Vallduví, Enric
1995 Structural Properties of Information Packaging in Catalan. In: Kiss, Katalin É. (ed.),
Discourse Configurational Languages. Oxford: Oxford University Press, 122⫺152.
Van Valin, Robert D.
1999 A Typology of the Interaction of Focus Structure and Syntax. In: Raxilina, E./Testelec,
J. (eds.), Typology and the Theory of Language: From Description to Explanation. Mos-
cow: Languages of Russian Culture, 511⫺524.
Watson, Katharine
2010 Wh-Questions in American Sign Language: Contributions of Non-manual Marking to
Structure and Meaning. MA Thesis, Purdue University.
Weiser, Anne
1975 How Not to Answer a Question: Purposive Devices in Conversational Strategy. In:
Chicago Linguistic Society 11, 649⫺660.
Wilbur, Ronnie B.
1991 Intonation and Focus in American Sign Language. In: No, Yongkyoon/Libucha, Mark
(eds.), ESCOL ’90: Proceedings of the Seventh Eastern States Conference on Linguistics.
Columbus, OH: Ohio State University Press, 320⫺331.
Wilbur, Ronnie B.
1994 Foregrounding Structures in ASL. In: Journal of Pragmatics 22, 647⫺672.
Wilbur, Ronnie B.
1996a Focus and Specificity in ASL Structures Containing self. Paper Presented at the An-
nual Conference of the Linguistic Society of America, San Diego.
Wilbur, Ronnie B.
1996b Evidence for the Function and Structure of Wh-clefts in American Sign Language. In:
Edmondson, William H./Wilbur, Ronnie B. (eds.), International Review of Sign Linguis-
tics. Hillsdale, NJ: Lawrence Erlbaum, 209⫺256.
Wilbur, Ronnie B.
1997 A Prosodic/Pragmatic Explanation for Word Order Variation in ASL with Typological
Implications. In: Lee, Kee Dong/Sweetser, Eve/Verspoor, Marjolijn (eds.), Lexical and
Syntactic Constructions and the Construction of Meaning. Amsterdam: Benjamins,
89⫺104.
Wilbur, Ronnie B.
1999a Stress in ASL: Empirical Evidence and Linguistic Issues. In: Language and Speech 42,
229⫺250.
Wilbur, Ronnie B.
1999b Typological Similarities Between American Sign Language and Spanish. In: Actas de
VI Simposio Internacional de Comunicación Social (Santiago de Cuba) 1, 438⫺443.
Wilbur, Ronnie B.
2009 Effects of Varying Rate of Signing on ASL Manual Signs and Nonmanual Markers. In:
Language and Speech 52, 245⫺285.
Wilbur, Ronnie B.
2011 Nonmanuals, Semantic Operators, Domain Marking, and the Solution to Two Outstand-
ing Puzzles in ASL. In: Sign Language & Linguistics 14(1), 148⫺178. (Special Issue on
Nonmanuals in Sign Languages, Herrmann, Annika/Steinbach, Markus (eds.)).
Wilbur, Ronnie B./Patschke, Cynthia
1998 Body Leans and the Marking of Contrast in American Sign Language. In: Journal of Pragmatics 30, 275⫺303.
Wilbur, Ronnie B./Patschke, Cynthia
1999 Syntactic Correlates of Brow Raise in ASL. In: Sign Language & Linguistics 2, 3⫺40.
Wilbur, Ronnie B./Petersen, Lisa
1998 Modality Interactions of Speech and Signing in Simultaneous Communication. In: Journal of Speech, Language & Hearing Research 41, 200⫺212.
Wilbur, Ronnie B./Petitto, Laura A.
1983 Discourse Structure of American Sign Language Conversations; or, How to Know a Conversation When You See One. In: Discourse Processes 6, 225⫺241.
Wilbur, Ronnie B./Schick, Brenda
1987 The Effects of Linguistic Stress on ASL. In: Language and Speech 30, 301⫺323.
Ziv, Yael
1994 Left and Right Dislocation: Discourse-functions and Anaphor. In: Journal of Pragmat-
ics 22(6), 629⫺645.

Ronnie B. Wilbur, West Lafayette, Indiana (USA)

22. Communicative interaction


1. Introduction
2. Grice’s Co-operation Principle
3. Speech acts
4. Turn-taking
5. Coherence and cohesion
6. Narratives
7. Pragmatic adequacy
8. Influence of cultural/hearing status
9. Conclusion
10. Literature

Abstract
In many of the areas of communicative interaction, very little research has been carried
out, the majority of the available studies focusing on American Sign Language (ASL).
With regard to Grice’s maxims and speech act theory, few differences are expected be-
tween languages, but in other cases, such as turn-taking, it is not clear whether the results
(e.g. concerning the use of eye gaze in ASL or floor sharing in British Sign Language
(BSL)) can be transferred to other sign languages. Role shift and anaphoric reference
are devices used to create cohesion in ASL discourse; other sign languages studied thus
far seem to use these devices too.
Narrative devices such as spatial mapping and eye gaze patterns have been noted, but it is not clear how common these are in sign languages in general.
appear to be different conventions for indicating the formality of signing and for whis-
pering. Little is known about the acquisition of the aspects described above but it appears
that some of them develop quite slowly.

1. Introduction
Communicative interaction is a broad topic and can cover most aspects of pragmatics.
Here we will focus on those aspects not covered in previous chapters in this section of
the handbook, and where interaction itself is the main point of interest. In many
sign languages, such aspects have not been studied at all and thus the data we can
report are limited. It will be a challenge of the next decade to broaden our knowledge
in this area.
We will start this chapter with a discussion of Grice’s Co-operation Principle (sec-
tion 2) and the application of speech act theory to sign languages (section 3). To date, little research has been done in these areas, but the few available studies suggest
that there are no significant differences between signed and spoken languages. The
discussion of turn-taking rules, however, reveals some modality-dependent characteris-
tics (section 4). In all languages, communication and interaction must be coherent and
cohesive; the means to achieve this in sign languages are discussed in section 5. When
we tell a story, there are linguistic, but also social, devices and rules that we must apply;
these are often distinctive in sign languages (section 6). Section 7 deals with pragmatic
adequacy. Finally, hearing status may influence all of the above, and this influence is
briefly discussed in section 8.

2. Grice’s Co-operation Principle

Rules governing conversational interactions were set out by Grice in 1975. The set of
four rules that Grice suggested was given the name ‘Co-operation Principle’ since they
are based on an underlying assumption that regular interactions are intended for the
efficient transfer of information to which end the participants co-operate. The four
rules ⫺ or maxims, as they are mostly called in this context ⫺ cover the aspects of
quantity, quality, relevance, and manner. Grice’s rules were based on the analysis of
spoken language interaction but the underlying assumption appears to be universal,
and should thus also be valid for sign language interaction. Consequently, the four
maxims seem to apply in similar ways in sign languages. The example in (1) shows how
an utterance, here in Sign Language of the Netherlands (NGT), is pragmatically
strange if the maxim of quantity is not applied (Baker/van den Bogaerde 2008, 85).

(1) tomorrow willem index1 act. romeo. [NGT]


‘Tomorrow Willem and I will do some acting. Romeo.’

Clearly, either Willem or the signer will be acting the part of Romeo, but the signer
fails to provide enough information to indicate who will be playing this part. In the
same way, it is not acceptable to give misleading information.

(2) Passer-by A: here cinema where? [NGT]


‘Where is there a cinema nearby?’
Newspaper man B: walk right, then walk left. opposite.
‘Turn right, then left. Then it is opposite you.’

The sequence in (2) is fine if B knows or believes that the cinema is open and function-
ing. It is likely that A wishes to see a film and not just look at the building. However,
if B knows that the cinema is shut for repair, then he fails to adhere to the maxim of
quality. A will assume that the cinema is functioning and go to look for it. Obviously,
telling an outright lie is also against the maxim of quality. Both these maxims seem to
be as valid for sign languages as they are for spoken languages, but to the best of our
knowledge, almost no research has been done on this topic. Mindess (2006, 118) com-
ments that it is not acceptable in the United States to pretend to be deaf if you are
hearing. This would be an example of an application of the maxim of quality to a
situation that is specific for sign language users, but it is not evident that this applies ev-
erywhere.
The maxim of relevance specifies that only information that is relevant to the topic
should be mentioned. The addressee will automatically assume that the information
provided is relevant and try to make the signer’s contribution fit.

(3) Jan: next week holiday index1 [NGT]


‘I shall be on holiday next week.’
Marie: painter my father.
‘My father is a painter.’

Without knowing more about the context, it seems as though Marie is violating the
maxim of relevance in example (3). But if a preceding discussion had revealed that Jan
wanted to have his house painted while being on holiday, then Marie’s contribution
would be relevant. Such notions of relevance appear to be very general and, again,
seem to apply to sign languages in the same way as to spoken languages. There are,
however, cultural differences concerning the kind of information it is appropriate to
give or withhold in certain situations. In some Asian cultures, for example, it is relevant
to tell the addressee how your family is and to ask about his/her family and so on
before starting a business conversation. In many other cultures, including the North
American culture, this is considered irrelevant. To our knowledge, no specific research
has addressed this aspect for sign languages.
In the context of relevance, it is necessary to also mention the notion of implicature.
Marie in (3) does not explicitly state that her father could paint Jan’s house while he
is on holiday; this message is implied. Again, this notion seems to work similarly in
sign languages but this has not yet been researched.
The fourth maxim, the maxim of manner, is sometimes seen as overlapping with
the other three. It contains the following four sub-points: avoid obscurity of expression,
avoid ambiguity, be brief, and be orderly. Politeness is a notion that is connected to
this maxim as will be discussed further in section 7.2.

3. Speech acts

In his classic work, Searle (1969) distinguished three types of acts that are performed
when a sentence is produced, namely

a) the actual utterance of the sentence (uttering words, morphemes, sentences);


b) the propositional act (referring and predicating);
c) the illocutionary act (stating, questioning, commanding, promising, etc.).

Tab. 22.1: Performative verbs shared in English and ASL (Campbell 2001, 80, Table 5.1)
Assertive   Commissive   Directive    Declaration    Expressive
declare     accept       ask          declare        approve, praise
suggest     promise      urge         resign         complain
predict     threaten     encourage    surrender      congratulate
report      agree        require      approve        thank
warn        reject       order        name           apologize
describe    offer        prohibit     define
object      bet          suggest      abbreviate
                         allow        open, close
                         authorize    vote
                         cancel

Campbell (2001) is one of the few studies that attempt to apply Searle’s theory to a
sign language, in this case American Sign Language (ASL). Very similar types of acts
were observed in ASL users, that is, they produce and understand utterances known
as promises, orders, permissions, excuses, or assertions. Campbell looked at the struc-
ture of direct speech acts and explored how these are expressed in ASL. She selected
five types of English performative verbs ⫺ namely assertive, commissive, directive,
declarative, and expressive verbs (see Table 22.1 for examples) ⫺ and compared these
to their translations or expressions in ASL. In utterances with a performative verb, the
illocutionary force is immediately clear, because the verb explicitly states the action
performed, as in example (4) (Campbell 2001, 75).

(4) PROMISE food me bring will [ASL]


‘I promise to bring some food.’

ASL and English were found to share a set of performative verbs (see Table 22.1).
In those cases where there is no exact ASL equivalent for an English performative
verb, ASL signers use “specific markers that create an equivalent semantic content”
(2001, 18). These markers can be similar verbs, other lexical signs (Campbell calls these
“non-verbs”), or non-manual signals. An example of a similar verb is the ASL verb
say used for English claim. The verb reassert in the English sentence ‘I reassert that
the book is mine’ is translated without using a verb in ASL: true book mine. Examples
of non-manual signals include ‘nodding’ which expresses the equivalent of the English
performative verbs assert, claim, and suggest. The ASL not+headshake may be used
to express the equivalent of verbs like negate, disclaim, or disapprove.
Celo (1996) found that non-manuals are important in the production of illocution-
ary and perlocutionary force in sign languages. For Italian Sign Language (LIS), he
describes a performative sign which is produced at the beginning and at the end of a
signed yes/no question. It indicates an interrogative intention, but has no equivalent
in spoken Italian. It is produced with a flat O-handshape and articulated either in
neutral signing space or on the back of the non-dominant hand (Metzger/Bahan
2001, 118).

Indirect speech acts contain no performative verb, and their meaning cannot be
derived without taking into account the context in which the utterance is produced;
see (5) for an NGT example (taken from Baker/van den Bogaerde 2008, 85).

y/n
(5) Jan: tonight party index2 come [NGT]
‘Are you coming to the party tonight?’
Marie: tomorrow exams
‘I have exams tomorrow.’

Marie’s answer to Jan’s yes/no-question is not explicit, that is, she does not reply ‘yes’
or ‘no’. Implicitly, however, she makes clear that she cannot come to the party because
she has exams the next day and therefore has to study that evening. She is relying on
Jan’s knowledge of the world to understand her reply and interpret it as a negative answer.
Indirect requests are often formulated as yes/no-questions in many spoken lan-
guages. In English, for example, asking Can you open a window? is not usually an
enquiry about the addressee’s physical ability of opening a window but is a request to
do so. Nonhebel (2002) studied the non-manual aspects of indirect requests in NGT.
Yes/no-questions asking for information are marked in NGT by the non-manual mark-
ers ‘eyebrows up’ and ‘head forward’ (Coerts 1992). Nonhebel found that when an
utterance was not an informative yes/no-question but rather an indirect request, the
chin was often lowered instead of the head being put forward. This non-manual mark-
ing was observed regularly but not consistently. In 11 out of 15 cases, it was produced
simultaneously with the general request-sign aub (‘please’; see Figure 22.1). In fact,
the lexical sign aub marked all indirect requests in her data, and, as stated above, was
often, but not always, accompanied by a lowering of the chin, raised eyebrows, and
pursed lips.

Fig. 22.1: The NGT sign aub (‘please’)

4. Turn-taking

The ability to take your turn at the correct place in a conversation is a skill that
crucially depends on the recognition of specific signals that the signer or speaker gives
as they near the point where a change is possible. These points are called “Transition
Relevance Places (TRP)” following the classic work by Sacks, Schegloff, and Jefferson
(1974). It is known from the work on spoken language discourse that there is no univer-
sal pattern for turn-taking. Rather, there are cultural differences in what may constitute
a TRP and in the kind of signals used to regulate turn-taking. A few studies have
investigated adult turn-taking systems in sign languages, for example for ASL (Baker
1977) and British Sign Language (BSL) (Coates/Sutton-Spence 2001). More research
has been done on children’s development of turn-taking, in particular on attention
strategies. Data from both adults and children will be discussed. Here we will address
the visual attention aspects of turn-taking (section 4.1), eye-gaze behaviour during
turns (section 4.2), and overlap in conversations (section 4.3).

4.1. Visual attention for turns

Since sign languages are articulated in the visual-spatial modality, successful communi-
cation can only take place when the addressee is paying visual attention to the signed
message. This visual attention from the addressee can already be present at the begin-
ning of the signer’s turn or the signer has to obtain visual attention. As Baker (1977,
218) describes for ASL, it is possible to actively get the addressee’s attention by using
an index sign, by touching the addressee, or by waving a hand in the addressee’s visual
field (see Figure 22.2). It is also possible to stamp on the floor or bang on a vibrating
working surface or to switch lights on and off. There are politeness rules attached to
using these explicit methods. For example, in many Deaf communities, it is considered
impolite to tap someone hard on the shoulder or on the back. It is also possible to use
less explicit methods of gaining attention such as beginning to sign. Even if there was
no prior eye contact, the addressee will often perceive the signing in his/her peripheral
vision and then give the signer full visual attention.

Fig. 22.2: Gaining visual attention by waving in the addressee’s visual field (Baker/van den Bo-
gaerde 2008, 86)

Similar attention-getting methods are also used with children, the difference being that children have to learn to look for communication. Parents in interac-
tion with their deaf children use both explicit and implicit devices as described above,
but they also sometimes transfer the location of a sign into the visual field of the child.
Thus, the NGT sign train may be moved from its usual location, the neutral signing
space, down to the floor next to the toy train, or signs may be articulated next to a
picture in a book (see also Mather 1989 for ASL). Van den Bogaerde (2000) found
that checking for signing from the mother started to appear in deaf children of deaf
parents around the age of two years. At age three, these children perceive more than
80% of the signs in the input. Spencer (2000) studied the attention patterns of groups
of American deaf and hearing infants (aged 9⫺18 months) in interaction with their
deaf and hearing parents. The deaf infants with deaf parents spent more time in coordinated shared attention. All deaf infants spent more time watching their mothers
than the hearing infants. Hearing children with hearing mothers spent more time look-
ing at objects than children with deaf mothers. Harris and Mohay (1997) found similar
results for BSL and Australian Sign Language (Auslan) in deaf children aged 18 months.
In general, deaf mothers were more insistent on getting attention but there were large
individual differences between the deaf and hearing mothers. Hearing mothers in a
book-reading task with three-year-old children were found to have difficulty in dividing
the child’s attention between the signing and the book (Swisher 1992). This was in
contrast to the behavior of deaf mothers who tended to wait longer until the child
looked at them before they signed. The deaf parents in a study of two-year-old hearing
children learning Puerto Rican Sign Language behaved similarly (Mather et al. 2006).
These parents ensured visual attention for themselves as signers as well as for the objects
that are the topic of conversation. Around the age of three years, children bilingual in
both a sign language and a spoken language will also change their gaze behavior ac-
cording to the language they are using (Richmond-Welty/Siple 1999). Twins acquiring
both spoken English and ASL, for instance, established mutual gaze at the beginning
of their ASL utterances and mostly maintained their gaze during the signer’s turn,
whereas when speaking English, mutual gaze was observed infrequently and not often
at the beginning of an utterance. In studies of classroom interaction between deaf
children and a deaf teacher, both Mather (1987) for ASL and Smith and Sutton-Spence
(2005) for BSL indicate that eye gaze is an important implicit strategy for gaining
attention in this situation. Prinz and Prinz's early study (1985) showed a clear development towards more implicit means of gaining attention, as opposed to explicit means such as pulling hair or clothes, with this development continuing past the age of seven. In
sum, the requirements of the visual communication system have a clear impact on the
development of attention-giving and attention-getting skills.

4.2. Eye-gaze behaviour

In her analysis of the turn-taking behaviour of adults using ASL, Baker (1977) identi-
fied mutual eye gaze as an important turn-taking signal with various functions. As can
be seen from the fragment in (6), mutual eye gaze is not continuous. Note that eye
gaze at the conversational partner is transcribed by a dotted line (Baker 1977, 226).

(6) Tom: why you like here like berkeley [ASL]
GazeT: ---------------------------------------------------------------
GazeJ: ------------ --------------------
Joe: well a to z a to z

Tom: all in
GazeT: ---------------------------------------------------------------
GazeJ: ------------ -------
Joe: all in well have italy food to china food well

Tom:
GazeT: ---------------------------------------------------------------
GazeJ: ------------ --------------------
Joe: think food well other place not have all in not

Tom: ‘that’C
GazeT: ------------ Both signers simultaneously lower their hands to half rest,
GazeJ: ------------ palms toward the signer
Joe: have well

Free translation:
Tom: Why do you like Berkeley?
Joe: Well, it has everything. Berkeley has Italian food, Chinese food, all
kinds of food. Well, other places don’t have that kind of variety.
Tom: Yes, I know what you mean.

What is important is the mutual eye gaze at the beginning (why you) and the eye gaze of the addressee at the signer. The signer Joe breaks off his gaze for some periods while he
is signing. Holding gaze on the signer seems to be a continuation signal in ASL. Signers
can hold the floor by looking away but also by using an explicit sign such as um in
Auslan (Johnston/Schembri 2007, 263). Slowing down, on the other hand, can indicate a Transition Relevance Place.

4.3. Overlap in signing

It is common in most languages for there to be some overlap in speaking or signing between conversation partners. The addressee often gives signals that he is paying
attention to the conversation in the form of backchannels such as good, bad, surprise,
and so on. There are, however, clear cultural differences in spoken languages as to the
form and frequency of such backchannels.
In the conversation in (6), there is some indication of overlap or simultaneous sign-
ing between the signers. Baker (1977, 216) reports that more and longer periods of
simultaneous signing occurred in her data than reported for simultaneous speaking in
American English: a mean of 1.5 seconds in length compared to 0.5 seconds. Similarly,
in their BSL study of adults, Coates and Sutton-Spence (2001) report considerable
amounts of simultaneous signing, as did Thibeault (1993, in Metzger/Bahan 2001, 129)
for one analyzed conversation in Filipino Sign Language. The fragment in (7) is taken
from a BSL conversation between four women (Coates/Sutton-Spence 2001, 520 f.). It
shows extensive use of collaborative floor, that is, stretches of discourse where utteran-
ces of multiple participants overlap but jointly contribute to the same topic and com-
plement each other.

(7) [BSL]
a. TA interesting maths you-see
TR well now interesting---------
N you-see it’s-strange he clever but crap
F
b. TA
TR hey hey------------- me-too art teacher similar
N make you wonder why
F how-many----what
c. TA
TR similar--- yes-but similar------- oh-yes similar odd clothes
N have (xxx) clothes art theirs means that
F
d. TA yes-right i agree you how you?
TR art odd------------------------- clothing
N odd “laughs” --
F same you odd
e. TA that’s why odd
TR me ---- --- get-out-of-it ---- me art ---
N ---->
F you before you art school you--- mean
f. TA ugh true-- ---------------
TR “shakes head” not-really but deaf hey! deaf only
N “shakes head”
F (xxx)
g. TA you’re right true that’s it me teacher art
TR hearing only hearing odd that’s-it horrible
N “nods firmly”
F
h. TA teacher mohican handle-bar-moustache long-hair white clothes
TR mohican
N
F
i. TA thick-cloth like i-dunno different their way linked art love strange.
TR
N
F

Free translation:
Tanya: That’s interesting. Maths. You see.
Nancy: You see! It’s strange. He was clever but crap. It makes you wonder why …
Trish: Me too. I had an art teacher who was similar. Yes, similar, with his odd
clothes. That’s artists for you. Odd and wear odd clothing.
Frances: Odd like you then, Trish.
Tanya: Yes, that’s why you’re odd.
Frances: You went to art school, didn’t you Trish?
Trish: Yes, I did art but I left.
Frances: <comments not clearly visible>
Trish: Not really, but he was deaf. Hey, that’s the point, he was deaf. Only the
hearing teachers were odd.
Tanya: Yes, you’re right, that’s true. I had an art teacher =
Trish: = with a horrible Mohican cut.
Tanya: Yes, he had a Mohican cut and a handle-bar moustache. He had long hair
and wore white clothes of some thick cloth. I dunno. There’s something
different about art teachers. It must be because of their love of art. They
are very strange.

An example of such an overlap is where Frances has only just begun to ask the question
about Trish going to art school (7e), when Trish starts to answer it. In (7g/h), Tanya
mentions her art teacher and immediately Trish adds further information about him,
namely his horrible Mohican hairstyle. It has been suggested that overlaps in signing do not create interference in discourse, as has been claimed for overlapping speech. Rather, such overlaps have been argued to be rooted in the establishment of solidarity
and connection (see the discussion of Hoza (2007) in section 7). However, spoken
languages also vary and the same argument could be made for languages such as Span-
ish and Hebrew where considerable overlap is allowed. We do not yet know how much
variation there is between sign languages in terms of overlap allowed.
Children have to learn the turn-taking patterns of their language community. In a
study of children learning NGT, Baker and van den Bogaerde (2005) found that chil-
dren acquire the turn-taking patterns over a considerable number of years. In the first
two years, there is considerable overlap between the turns of the deaf mother and her
child. This seems to be related to the child starting to sign when the mother is still
signing. Around the age of three, both children studied showed a decrease in the
amount of overlap, which is interpreted as an indication of the child learning the basics
of turn-taking. However, in the deaf child with more advanced signing skills, the beginnings of collaborative floor are evident at age six, with the child using overlaps to con-
tribute to the topic and to provide feedback.
Interpreters have been found to play an important role in turn-taking where signing
is involved. A major point of influence lies in the fact that they often need to identify the source of the utterance being interpreted for the other participants (Metzger/Fleet-
wood/Collins 2004). In ASL conversations, the interpreters identified the source by
pointing, body shift, using the name sign, or referring to the physical appearance of
the source, either individually or in combination: the more complex the situation, the
more likely an explicit combination of strategies. Thus, body shift was most common
in interpreting dyadic conversations, whereas a combination of body shift, pointing,
and use of name sign occurred more often in multi-party conversations. Source attribu-
tion does not always occur, but is quite common in multi-party conversations and
reaches the 100 % level in deaf-blind conversations where the mode of communication
is visual-tactile (see chapter 23, Manual Communication Systems: Evolution and Varia-
tion). A signer wishing to contribute to a multi-party conversation has to indicate his/
her desire to take the turn. This usually requires mutual eye gaze between the person
already signing and the potential contributor. If the interaction is being interpreted,
this process is more complex since the person wishing to sign also has to take into
account the hearing participants and therefore has to keep an eye on the interpreter
for possible contributions from them. In the case of meetings, the chairperson plays a
crucial role here (Van Herreweghe 2002), as will be discussed further in section 8.

5. Coherence and cohesion

According to the Co-operation Principle of Grice (section 2), it is important to be efficient in the presentation of information (maxim of quantity) and to indicate how
utterances are related to one another (maxim of relevance). Creating coherence and
cohesion within a discourse is thus essential. Participants in a conversation also need
to create coherence and cohesion between various contributions. In sign languages, this
is achieved by using a number of devices, some of which appear to be modality-specific.
Reference is an important means of creating cohesion. As in spoken languages,
signers can repeat lexical items that they either have previously produced themselves
or that have been produced by another signer, thereby creating lexical cohesion. In
the BSL conversation in (7), the participants repeat not only their own lexical items but also those of others. For example, in (7d) the sign odd is produced first by Trish, and
is then repeated by Frances and Tanya. In ASL, it has been observed that referents
can be introduced by use of a carefully fingerspelled word and later referred to by use
of a rapidly fingerspelled version (Metzger/Bahan 2001).
Referents that are going to occur frequently in the discourse are often assigned a
fixed location in the hemispheric space in front of the signer (see chapter 19, Use of
Sign Space). This location can subsequently be pointed to with the hand and/or the
eyes in order to establish anaphoric reference, thus creating cohesion. Clearly, this
strategy is efficient as it follows Grice’s maxim of quantity of information. Movement
of agreeing signs that target these locations as well as the use of person and object
classifiers and list buoys are also means of creating anaphoric reference (see chapter
8 on verb agreement and chapter 9 on classifiers). Spatial mapping, as this use of space
is called, plays a major role in creating coherent discourse structures (Winston 1991).
Children take some time to learn to use these devices (Morgan 2000) and occasionally
overuse full nouns instead of using anaphoric reference, as also observed in children
acquiring a spoken language.
There is a second type of space which is also commonly used to create cohesion
and coherence. The signer can use his own body as a shifted referential location in
order to describe the interaction of characters and the course of events (Morgan 2002,
132). The perspective of one of the participants can thus be portrayed. This is also
known as role-shift, perspective shift (Lillo-Martin 1995), or constructed action (Metz-
ger 1995) (see chapter 17 for discussion). In the Jordanian Sign Language example in
Figure 22.3 (Hendriks 2008, 142), the signer first takes the perspective of Sylvester, the cat, to illustrate how he looks at Tweety, the bird, through binoculars (Figure 22.3a); then she switches to the perspective of Tweety, who does the same (Figure 22.3b). In the same story, the signer again takes on the role of Tweety and uses her own facial expressions and movements (e.g. looking around anxiously) to tell the story. By using these two types of spaces, either separately or overlapping, cohesion within the discourse is established.

Fig. 22.3: Role shift in re-telling of a Tweety cartoon in Jordanian Sign Language (Hendriks 2008, 142)
Longer stretches of discourse can be organized and at the same time linked by use of
discourse markers. Roy (1989) found two ASL discourse markers, now and now-that,
that were used in a lecture situation to divide the lecture into three parts, viz. the
introduction, the body of the lecture, and the conclusion. The sign on-to-the-next-
part marking a transition was also found. For Danish Sign Language (DSL), a manual
gesture has been described that has several different discourse functions (Engberg-
Pedersen 2002). It appears to be used, for example, for temporal sequencing, eviden-
tiality, and (dis)confirmation. Engberg-Pedersen calls this gesture the ‘presentation
gesture’ since it imitates the hand movement of someone holding up something for the
other to look at. The hand is flat and the palm oriented upwards. In the example in
(8), adapted from Engberg-Pedersen (2002, 151), the presentation gesture is used to
link the two sentences (8a) and (8b).

y/n
(8) a. index1 ask want look-after index3a [presentation gesture] / [DSL]
b. indexforward nursery-school strike [presentation gesture] /
‘I asked, “Would you look after her, since the nursery school is on strike?”’

Similar discourse markers exist in other sign languages, such as Irish Sign Language
and New Zealand Sign Language (McKee/Wallingford 2011), but some sign languages,
such as, for example, German Sign Language, seem not to have them (Herrmann 2007;
for a discussion of such markers in the context of grammaticalization, see Pfau/Stein-
bach (2006)).
A further aspect related to coherence concerns corrections that become necessary because a mistake has been made or an utterance was unclear, the so-called conversational repairs (Sche-
gloff/Jefferson/Sacks 1977). There are many possible types of repair such as self-initi-
ated repair, self-completed repair, other-initiated repair, other-completed repair, and
word search. Repairs initiated by others can result from an explicit remark, such as
What do you mean?, or from non-verbal behavior indicating lack of comprehension.
There is hardly any research on repairs in signed interaction. Dively (1998) is one of
the few studies on this aspect; it is based on material from ethnographic interviews
with three deaf ASL signers. A specific characteristic of signed repairs identified by
Dively was the use of simultaneity. Firstly, non-manual behaviors such as averting eye
gaze and turning the head away from the addressee were used to indicate a search for
a lexical item on the part of the signer. Furthermore, it was possible to sign with one hand, for example, a first person pronoun on the right hand, and indicate the need
for repair with the other hand, in this case by means of the sign wait-a-minute (Dively
1998, 157).

6. Narratives
Storytelling is important in all cultures, but in those that do not have a writing system
for their language, it often plays a central part in cultural life. Sign languages do not
have a (convenient) written form (see chapter 43, Transcription); thus in many Deaf
cultures, storytelling skills are as highly valued as they are in spoken languages without
a written form. A good storyteller in ASL, for instance, can swiftly and elegantly
change between different characters and perspectives, which includes close-ups and
long shots from every conceivable angle (Mindess 2006, 106). Several narrative genres
have been described for sign languages: jokes, tales about the old days, stories of per-
sonal experience, legends, and games, all of which are also known in spoken languages.
Some genres seem to be specific to sign languages. Mindess gives examples of ABC
and number stories (Carmel 1981, in Mindess 2006, 106). In such stories, the ASL
handshapes representing the letters A⫺Z or the figures 0⫺9 are used for concepts and
ideas. Signs are selected such that sequences of handshapes create certain patterns.
These stories are for fun, but also for children to learn the alphabet and numbers.
Nowadays many can be found on YouTube (e.g. ASL ABC Story!). A
typical joint activity, group narrative, is described by Rutherford (1985, 146):

In a group narrative each person has a role or roles. These can be as characters in the story
or as props. The story line can be predetermined, or it can occur spontaneously. The subject
matter can range from actually experienced events (e.g. a family about to have a baby) to
the borrowed, and embellished, story line of a television program or movie. They can be
created and performed by as few as two to as many as ten or twelve; two to six participants
being more common. Most important, these narratives are developed through the use of
inherent elements of ASL. Though they make use of mime and exaggerated performance,
as does adult ASL storytelling, they are, like the narratives of hearing children, sophisticated
linguistic expressions.

Other forms of ASL narrative, i.e. folklore, are fingerspelling, mime, one-handshape
stories, and skits (Rutherford 1993, in Mindess 2006, 106).
Languages vary in the way they package information in a certain type of discourse,
but all speakers or signers have to deal with linguistic aspects at the sentence and story
level, to take into account the information needs of the addressee(s), and to sequence large amounts of information (Morgan 2006, 315; Becker 2009). The structure of a
narrative has many modality-independent aspects such as creating the setting, developing the plot line, and providing emotive responses. The devices that the narrator has at his disposal
depend on the language. In shaping the story and keeping it interesting and under-
standable, the narrator has a range of narrative devices to choose from, such as shifting
from the point of view of the narrator to that of one of the participants, creating little
detours by introducing subtopics, or making use of dramatic features like changes in
intonation or loudness of voice. In sign languages, aspects like facial expression and
use of the body play an important role at different levels in the narrative.
In signed narratives, modality-specific linguistic devices are used to organize and
structure the story (Sutton-Spence/Woll 1999, 270⫺275), such as spatial mapping (e.g.
Winston 1995; Becker 2009), eye gaze behavior (Bahan/Supalla 1995), or the use of one
or two hands (Gee/Kegl 1983). Besides these modality-specific devices, more general
strategies like the use of discourse markers, the choice for particular lexical items, or
pausing (Gee/Kegl 1983) are also observed. Gee and Kegl studied pause-structure in
relation to story-structure in ASL and found that the two structures almost perfectly correlate: the longest pauses indicate the skeleton of the story (introduction, story, conclusion) while the shortest pauses mark units at the sentence level. Bahan and
Supalla (1995) looked at eye gaze behavior at the sentence level and found two basic
types, namely gaze to the audience and characters’ gaze, which each serve a different
function. Gaze to the audience indicates that the signer is the narrator. When the
signer is constructing the actions or dialogue of one of the protagonists in the story, he
will not look at the conversational partner(s) but at the imagined interlocutor (Metz-
ger/Bahan 2001, 141). Pauses in combination with eye gaze direction scaffold, as it
were, the story.

7. Pragmatic adequacy
Edward Hall (1976) distinguished high and low context cultures. In a high context
culture, people are deeply involved with each other, information is widely shared, and
there is a high dependence on context. In other words, if you do not share the same
cultural experience as everyone else, you might not understand what is going on in any
given conversation. In contrast, in a low context culture, people are less involved with
each other, more individualistic, and most information is made explicit. Deaf communi-
ties have been described as being high context cultures (Mindess 2006, 46 f.), and this
is said to be reflected in the communicative style. American Deaf signers, for instance,
have been described as being more direct than hearing speakers (Mindess 2006, 82 ff.).
Here we will first discuss register with respect to the formality of interactions (section 7.1),
then politeness and taboo (7.2), and finally humor (7.3).

7.1. Register

The formality of the situation has an impact on different aspects of signing. Formal
signing is characterized by enlarged signs and slower signing (Baker-Shenk/Cokely
1980). A different handshape may be used in different registers or contexts based on formality; as is discussed in Baker and van den Bogaerde (2008) and in Berenz (2002),
the [-hand is used in formal settings as an index instead of the point. Informal signing,
on the other hand, shows more assimilation between signs, centralization of locations,
e.g. locations on the head being realized lower in space, and the size of the movement
is reduced. Two-handed signs are often reduced to articulation with one hand (Scher-
mer et al. 1991). Besides these phonological aspects, lexical choice may also be influ-
enced (Crasborn 2001, 43). Russo (2004) has also related register to the amount of
iconicity used ⫺ the more formal the situation, the less iconicity.
Shouting and whispering can be appropriate behaviors in certain situations. The
forms used for shouting and whispering have been described for sign languages (Cras-
born 2001, 196, 199⫺201; Mindess 2006, 26). Shouting is usually characterized by bigger
signs and slower movements. However, if shouting occurs in anger, then movements
may in fact be accelerated but the signing is then accompanied by an angry, exagger-
ated facial expression. In the whispering mode in ASL, signs that are normally articu-
lated on the face and body can be displaced towards a location to the side, below the
chest, or to a location that cannot easily be observed (Emmorey/McCullough/Brentari
2003, 41). Deaf children whispering in sign language have been observed to either hide
their strong hand behind an object or to block the view of the strong hand using their
weak hand. In the school playground, children have been seen to hide the hands of
another child signing under their coat, as shown in Figure 22.4 (Jansma/Keppels 1993).

Fig. 22.4: Two girls whispering, hiding their signing with their coats (Jansma/Keppels 1993)

7.2. Politeness and taboo

What is considered to be polite behavior differs across countries, cultures, languages, and situations. Hall (1989) and Mindess (2006) investigated politeness in ASL (see also
Roush 2011). They report that it is impolite in ASL to impair communication, for
example, by holding someone’s hands to stop them signing or by turning your back on
someone while they are signing. This can be mitigated by using signs that Hall (1989,
95) glosses as time-out or one-five, so that the interlocutor knows that the conversa-
tion will be briefly interrupted. As for the interaction of hearing with deaf people,
Mindess found that talking in a way that deaf people cannot speech-read or answering
the phone without explanation are perceived as impolite. In ASL, it is also considered
a taboo to inquire about the addressee's hearing loss, ability to speak, feelings about missing music, and so on. Mindess also describes how to pass between two people having
a signed conversation, thereby potentially interrupting their conversation. The best
way, she says, is to just walk right through and not attract attention to yourself, perhaps
with very tiny articulation of the sign excuse-me. Hearing people unfamiliar with the
Deaf way, not wanting to be rude, often behave in exactly the opposite way: they
extensively apologize and in that way disrupt the conversation to a much larger extent.
Hoza (2007) recently published a more extensive study of politeness in ASL, in
which he applies the general politeness schema of Brown and Levinson (1987). His
study revealed that politeness forms in ASL are different from those used in spoken
English. This has its roots, according to Hoza (2007, 208), in a different cultural basis:
the culture of the American Deaf community being based on involvement, in contrast
to the majority culture which is based on independence, that is, the desire not to im-
pose. This explains why signs such as please and thank-you are used less frequently, and differently, in ASL as compared to spoken English. In this way, he also
accounts for the finding that English speakers are more indirect in their speech than
ASL signers (see section 3). Like Mindess (2006, 84), Hoza found that Deaf Americans
are more direct than hearing Americans who use English. However, they still use
indirect forms if politeness requires this (see also Roush 1999; Nonhebel 2002; Arets
2010 on NGT). Hoza includes the concept of face in his analysis: “Face can be under-
stood to be of two kinds: (a) the desire to be unimpeded in one’s actions and (b) the
desire to be approved of” (Hoza 2007, 21). In his study, the term involvement is used
to describe the type of politeness that is associated with showing approval and camara-
derie, and the term independence to describe the type of face associated with not want-
ing to impose (2007, 22). Hoza identified the following five non-manual markers associ-
ated with politeness strategies in formulating requests and rejections in ASL (Hoza
2007, 185).

pp ⫺ polite pucker (similar in form to the adverbial marker mm, which conveys
the sense of normally or as expected): expresses a small imposition and
cooperation is assumed; it has involvement function only;
tight lips ⫺ appears to be a general default politeness marker for most requests of
moderate imposition (p. 141) and has both involvement and independ-
ence function;
pg ⫺ polite grimace: expresses significant threats to both involvement and in-
dependence (p. 149);
pg-frown ⫺ polite grimace-frown: is associated with a severe imposition, both in in-
volvement and independence (p. 162);
bt ⫺ body/head teeter: indicates extreme threats to both involvement and in-
dependence in one of two ways. When it co-occurs with other non-manual
markers, it intensifies these markers. When it appears without a non-
manual marker, it questions the possibility of compliance with a request
or the possibility of an option working out (p. 178).

The introduction of new technologies, like texting via smartphones, video-messages, and Skype-connections with visual contact between callers, is also challenging "one of
the most basic tenets of Deaf culture: the primacy of face-to-face interactions” (Mind-
ess 2006, 151). Mindess states that in any situation where signers are dependent on
eye contact and responsive back-channeling for mutual understanding, it is “terribly
distracting to see participants’ heads bobbing up and down as they continually glance
at their pagers” (Mindess 2006, 152). Clearly, the unspoken rules of Deaf behavior are
challenged here (e.g. eye contact) and new rules need to be developed for pragmati-
cally adequate behavior.
For some aspects of language, there are strong taboos, but these depend very much on the individual culture. In some cultures, signs indicating body parts or other taboo signs cannot be produced in all social contexts. Body parts are often referred to by point-
ing to the actual limb or area (of the organ) but Pyers (2006) found that there are
cultural taboos that influence which body parts can be indexed. For instance, in North
American society it is considered socially inappropriate to point to genitals. ASL re-
spects this taboo and consequently, signs for genitalia have been lexicalized, so that
the actual location of the genitals on the body is not involved (Pyers 2006, 287). Nowa-
days, it is very easy to look up taboo signs or signs considered to be too explicit to use
in all contexts on the internet. For instance, there is a veritable archive of taboo signs
in the form of short clips to be found on YouTube. Apparently, however, these signs
are not taboo to the extent that people refuse to be associated with them on the
internet ⫺ the young people who demonstrate these signs are clearly recognizable in
the video clips. Interestingly, such taboo signs are used by patients who suffer from
Gilles de la Tourette’s syndrome just as taboo words are used by hearing Tourette’s
patients (Morris et al. 2000; Dalsgaard/Damm/Thomsen 2001).
Some cultural gestures used together with spoken languages are sometimes adopted
into sign languages; however, the register of such cultural gestures in the spoken lan-
guage is not always perceived by deaf signers. This can lead to communication prob-
lems in interaction between hearing and deaf people. As Pietrosemoli (2001) reports,
the gesture for 'having sex' is a taboo cultural gesture among Spanish speakers. This gesture is one-
handed with a d-handshape and palm orientation to the body. If this gesture is used
in Venezuelan Sign Language, it can be quite inappropriate, as (9), adapted from Pietrosemoli (2001, 170), illustrates.

(9) biology index1pl study now plants [having sex] how [Venezuelan SL]
‘In biology we are now studying how plants fuck.’

Where signs have a resemblance to taboo cultural gestures, language change can take
place to avoid problems of inappropriate use (Pietrosemoli 1994).

7.3. Humor
Another aspect that is culturally defined is the use of humor. It takes firm knowledge
of the culture of a group of people and appropriate pragmatic skills to be able to
decide whether or not a joke or a pun can be made in a particular situation. Deaf
humor is often based on the shared experience of deaf people (Sutton-Spence/Woll
1998, 264), as is humor in all cultures. Klima and Bellugi (1979) first described the sign
plays and humor used in ASL. Bienvenu (1994) studied how humor may reflect Deaf
culture and came up with four categories on which Deaf humor is based: the visual
nature of humor; humor based on deafness as an inability to hear; humor from a
linguistic perspective; and humor as a response to oppression (Bienvenu 1994, 17).
Young deaf children used to learn early in life, in the deaf schools, how to imitate their
friends or teachers – not with the aim of insulting them, but as a form of entertainment.
Imitation is still a favorite pastime, especially in international settings (Bouchauveau
1994) where storytelling is used to exchange information about, for instance, differen-
ces between countries.
The inability to hear also provides the basis for well-known jokes, in which reactions to noise or sounds identify who is hearing, and thus also who is deaf, namely those who do not react. Linguistic jokes include riddles, puns, and sign games, for example, changing
the sign understand to little-understand by using the pinkie finger instead of the
index finger (Klima/Bellugi 1979, 324). A signer can express in sign that s/he is oiling
the joints in the hands and arms with a small oil can, to indicate that s/he is preparing
for a presentation in sign language (Sutton-Spence/Woll 1998, 266). One thing is clear
about humor ⫺ it is necessary to know the culture, and the context in which humor is
used, to be able to appreciate it. Hearing people often miss the point of signed jokes or puns, just as deaf people often do not appreciate the spoken humor of hearing people, not only because the linguistic finesse is missed, but also because there is a lack of knowledge
about each other’s culture.

8. Influence of cultural/hearing status

When interacting with each other, hearing and deaf people can use a signed or a
spoken language, or a form of sign-supported speech. The choice of a particular language mode certainly depends partly on the hearing status of the participants and
partly on their fluency in the language(s) involved. But it is not hearing status and
fluency in a language alone that ultimately decide in what form communication will
take place. What is decisive is the attitude a person has towards Deafness and sign
language and her/his general outlook and views on life in combination with personal
experience and skills.
Young, Ackermann, and Kyle (2000) explored the role of hearing status in the inter-
action between deaf and hearing employees. Deaf people associated the use of sign
language with personal respect, value, and confidence, and hearing colleagues’ willing-
ness to sign was considered more significant than their fluency. Hearing employees
connected sign language use to change, pressure, and the questioning of professional
competence. In order to improve relations, the deaf perceived the challenges involved
as person-centered, meaning that they wanted to be involved, make relationships, and
feel good in the working environment. In contrast, the hearing participants were found
to be more language-centered, that is, they struggled with how well, how confidently,
and how consistently they could sign. In other words: whereas for the deaf people, the
willingness of hearing people to sign was paramount, for hearing people themselves,
the standard to which they signed was the most important (2000, 193).
In a study on procedures during mixed deaf-hearing meetings, Van Herreweghe
(2002) was able to demonstrate that the choice of a deaf or a hearing chairperson, and
subsequently the choice of sign language or spoken language as the main language of the meeting, had far-reaching consequences for the participation of the deaf in the flow of conversation and thus in the decision-making process.
In communicative interaction involving sign language, the cultural stance people
take seems to have more impact on the linguistic choices and possibilities than their
hearing status. Even so, being hearing or deaf does have some consequences, for exam-
ple, for the perception of signs. Deaf people have been found to have better peripheral vision
than hearing people. They regularly scan their surroundings to compensate for the
absence of acoustic cues and typically monitor the arm and hand motions with periph-
eral vision while looking at a conversational partner’s eyes (Bavelier et al. 2000). Even
hearing children of deaf parents (Codas) who are native signers make different use of
their visual and auditory cortex than deaf-born individuals due to the fact that they can hear (Fine et al. 2005). Their bilingualism (for instance, in English and ASL) is
different from deaf bilinguals who use the same languages. In what way the more acute
peripheral vision of deaf native signers influences signed or spoken interaction, with
either deaf or hearing participants, is not yet known.

9. Conclusion
In the previous sections, we have described various aspects of interaction involving a
sign language. With respect to many, in fact almost all, of the relevant aspects, little or no research has been carried out to date. Most of the available studies focus on ASL, but in many cases, it is not clear whether the results found for one sign language
can be transferred to another. In areas such as the Gricean maxims, it seems likely
that there are universal principles but, again, almost no research has investigated this
topic from a sign language perspective. On the other hand, in other areas, we can
anticipate considerable differences between sign languages. In turn-taking, for exam-
ple, it is known that spoken languages differ greatly in the signals they use and the
patterns observed. It can thus be expected that sign languages will show a similar
amount of variation. Clearly, there is still considerable work to be done.

10. Literature
Arets, Maureen
2010 An (Im)polite Request. The Expression of Politeness in Requests in Sign Language of
the Netherlands. MA Thesis, University of Amsterdam.
ASL ABC Story! http://www.youtube.com/watch?v=qj1MQhXfVJg (Accessed on 01/11/09).
Bahan, Ben/Supalla, Sam
1995 Line Segmentation and Narrative Structure: A Study of Eye-gaze Behavior in Ameri-
can Sign Language. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and
Space. Hillsdale, NJ: Lawrence Erlbaum, 171⫺191.
Baker, Anne/Bogaerde, Beppie van den
2005 Eye Gaze in Turntaking in Sign Language Interaction. Paper Presented at the 10th In-
ternational Congress for Study of Child Language, Berlin, July 2005.
Baker, Anne/Bogaerde, Beppie van den
2008 Interactie en Discourse [Interaction and Discourse]. In: Baker, Anne/Bogaerde, Beppie
van den/Pfau, Roland/Schermer, Trude (eds.), Gebarentaalwetenschap. Een Inleiding
[Sign Linguistics. An Introduction]. Deventer: Van Tricht, 83⫺98.
Baker, Charlotte
1977 Regulators and Turn-taking in American Sign Language Discourse. In: Friedman, Lynn
A. (ed.), On the Other Hand. New York: Academic Press, 218⫺236.
Baker-Shenk, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver
Spring, MD: TJ Publishers.
Bavelier, Daphne/Tomann, Andrea/Hutton, Chloe/Mitchell, Teresa/Corina, David/Liu, Guoying/
Neville, Helen
2000 Visual Attention to the Periphery Is Enhanced in Congenitally Deaf Individuals. In:
Journal of Neuroscience 20(RC93), 1⫺6.
Becker, Claudia
2009 Narrative Competences of Deaf Children in German Sign Language. In: Sign Lan-
guage & Linguistics 12(2), 113⫺160.
Berenz, Norine
2002 Insights into Person Deixis. In: Sign Language & Linguistics 5(2), 203⫺227.
Bienvenu, Martina J.
1994 Reflections of Deaf Culture in Deaf Humor. In: Erting, Carol J./Johnson, Robert C./
Smith, Dorothy L.S./Snider, Bruce D. (eds.), The Deaf Way, Perspectives from the Inter-
national Conference on Deaf Culture. Washington, DC: Gallaudet University Press,
16⫺23.
Bouchauveau, Guy
1994 Deaf Humor and Culture. In: Erting, Carol J./Johnson, Robert C./Smith, Dorothy L. S./
Snider, Bruce D. (eds.), The Deaf Way, Perspectives from the International Conference
on Deaf Culture. Washington, DC: Gallaudet University Press, 24⫺30.
Bogaerde, Beppie van den
2000 Input and Interaction in Deaf Families. PhD Dissertation, University of Amsterdam.
Utrecht: LOT.
Brown, Penelope/Levinson, Stephen
1987 Politeness: Some Universals in Language Usage. Cambridge, MA: Cambridge Univer-
sity Press.
Campbell, Cindy
2001 The Application of Speech Act Theory to American Sign Language. PhD Dissertation,
University at Albany, State University of New York.
Celo, Pietro
1996 Pragmatic Aspects of the Interrogative Form in Italian Sign Language. In: Lucas, Ceil
(ed.), Multicultural Aspects of Sociolinguistics in Deaf Communities. Washington, DC:
Gallaudet University Press, 132⫺151.
Coates, Jennifer/Sutton-Spence, Rachel
2001 Turn-taking Patterns in Deaf Conversation. In: Journal of Sociolinguistics 5, 507⫺529.
Coerts, Jane
1992 Nonmanual Grammatical Markers. An Analysis of Interrogatives, Negations and Topi-
calisations in Sign Language of the Netherlands. PhD Dissertation, University of Am-
sterdam.
Crasborn, Onno
2001 Phonetic Implementation of Phonological Categories in Sign Language of the Nether-
lands. PhD Dissertation, University of Leiden. Utrecht: LOT.
Dalsgaard, Søren/Damm, Dorte/Thomsen, Per
2001 Gilles de la Tourette Syndrome in a Child with Congenital Deafness. In: European
Child & Adolescent Psychiatry 10, 256⫺259.
Dively, Valery L.
1998 Conversational Repairs in ASL. In: Lucas, Ceil (ed.), Pinky Extension and Eye Gaze.
Language Use in Deaf Communities. Washington, DC: Gallaudet University Press,
137⫺169.
Emmorey, Karen/McCullough, Stephen/Brentari, Diane
2003 Categorical Perception in American Sign Language. In: Language and Cognitive Proc-
esses 18(1), 21⫺45.
Engberg-Pedersen, Elisabeth
2002 Gestures in Signing: The Presentation Gesture in Danish Sign Language. In: Schulmeis-
ter, Rolf/Reinitzer, Heimo (eds.), Progress in Sign Language Research: In Honor of
Siegmund Prillwitz. Hamburg: Signum, 143⫺162.
Fine, Ione/Finney, Eva M./Boynton, Geoffrey M./Dobkins, Karen M.
2005 Comparing the Effects of Auditory Deprivation and Sign Language Within the Audi-
tory and Visual Cortex. In: Journal of Cognitive Neuroscience 17(10), 1621⫺1637.
Gee, James P./Kegl, Judy A.
1983 Narrative/Story Structure, Pausing and American Sign Language. In: Discourse Proc-
esses 6, 243⫺258.
Grice, Paul
1975 Logic and Conversation. In: Cole, Peter/Morgan, Jerry L. (eds.), Studies in Syntax and
Semantics III: Speech Acts. New York: Academic Press, 183⫺198.
Hall, Edward
1976 Beyond Culture. Reprint, New York: Anchor/Doubleday, 1981.
Hall, Susan
1989 train-gone-sorry: The Etiquette of Social Conversations in American Sign Language.
In: Wilcox, Sherman (ed.), American Deaf Culture. An Anthology. Burtonsville, MD:
Linstok Press, 89⫺102.
Harris, Margaret/Mohay, Heather
1997 Learning to Look in the Right Place: A Comparison of Attentional Behaviour in Deaf
Children with Deaf and Hearing Mothers. In: Journal of Deaf Studies and Deaf Educa-
tion 2, 95⫺103.
Hendriks, Bernadet
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD
Dissertation, University of Amsterdam. Utrecht: LOT.
Herrmann, Annika
2007 The Expression of Modal Meaning in German Sign Language and Irish Sign Language.
In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation. Compara-
tive Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 245⫺271.
Hoza, Jack
2007 It’s Not What You Sign, It’s How You Sign It: Politeness in American Sign Language.
Washington, DC: Gallaudet University Press.
Jansma, Sonja/Keppels, Inge
1993 The Effect of Immediately Preceding Input on the Language Production of Deaf Chil-
dren of Hearing Parents. MA Thesis, University of Amsterdam.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language. An Introduction to Sign Language Linguistics. Cambridge:
Cambridge University Press.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Lillo-Martin, Diane
1995 The Point of View Predicate in American Sign Language. In Emmorey, Karen/Reilly,
Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 155⫺170.
Mather, Susan
1987 Eye Gaze and Communication in a Deaf Classroom. In: Sign Language Studies 54,
11⫺30.
Mather, Susan
1989 Visually Oriented Teaching Strategies with Deaf Preschool Children. In: Lucas, Ceil
(ed.), The Sociolinguistics of the Deaf Community. New York: Academic Press, 165⫺
187.
Mather, Susan/Rodriguez-Fraticelli, Yolanda/Andrews, Jean F./Rodriguez, Juanita
2006 Establishing and Maintaining Sight Triangles: Conversations Between Deaf Parents and
Hearing Toddlers in Puerto Rico. In Lucas, Ceil (ed.), Multilingualism and Sign Lan-
guages. Washington, DC: Gallaudet University Press, 159⫺187.
McKee, Rachel L./Wallingford, Sophia
2011 ‘So, Well, Whatever’: Discourse Functions of Palm-up in New Zealand Sign Language.
In: Sign Language & Linguistics 14(2), 213⫺247.
Metzger, Melanie/Bahan, Ben
2001 Discourse Analysis. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. Cam-
bridge: Cambridge University Press, 112⫺144.
Metzger, Melanie
1995 Constructed Dialogue and Constructed Action in American Sign Language. In: Lucas,
Ceil (ed.), Sociolinguistics of Deaf Communities. Washington, DC: Gallaudet University
Press, 255⫺271.
Metzger, Melanie/Fleetwood, Earl/Collins, Steven D.
2004 Discourse Genre and Linguistic Mode: Interpreter Influences in Visual and Tactile
Interpreted Interaction. In: Sign Language Studies 4(2), 118⫺136.
Mindess, Anna
2006 Reading Between the Signs. Intercultural Communication for Sign Language Interpreters
(2nd edition). Yarmouth, MN: Intercultural Press.
Morgan, Gary
2000 Discourse Cohesion in Sign and Speech. In: International Journal of Bilingualism 4(3),
279⫺300.
Morgan, Gary
2002 Children’s Encoding of Simultaneity in British Sign Language Narratives. In: Sign Lan-
guage & Linguistics 5(2), 131⫺165.
Morgan, Gary
2006 The Development of Narrative Skills in British Sign Language. In: Schick, Brenda/
Marschark, Marc/Spencer, Patricia (eds.), Advances in Sign Language Development in
Deaf Children. Oxford: Oxford University Press, 314⫺343.
Morris, Huw/Thacker, Alice/Newman, Peter/Lees, Andrew
2000 Sign Language Tics in a Pre-lingually Deaf Man. In: Movement Disorders 15(2), 318⫺
320.
Nonhebel, Annika
2002 Indirecte Taalhandelingen in Nederlandse Gebarentaal. Een Kwalitatieve Studie naar
de Non-manuele Markering van Indirecte Verzoeken [Indirect Speech Acts in NGT: a
Qualitative Study of the Non-manual Marking of Indirect Requests]. MA Thesis, Univer-
sity of Amsterdam.
Pfau, Roland/Steinbach, Markus
2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 3⫺98.
Pietrosemoli, Lourdes
1994 Sign Terminology for Sex and Death in Venezuelan Deaf and Hearing Cultures: A
Preliminary Study of Pragmatic Interference. In: Erting, Carol J./Johnson, Robert C./
Smith, Dorothy L./Snider, Bruce D. (eds.), The Deaf Way: Perspectives from the Interna-
tional Conference on Deaf Culture. Washington, DC: Gallaudet University Press,
677⫺683.
Pietrosemoli, Lourdes
2001 Politeness and Venezuelan Sign Language. In: Dively, Valerie/Metzger, Melanie/Taub,
Sarah/Baer, Anne Marie (eds.), Signed Languages: Discoveries from International Re-
search. Washington, DC: Gallaudet University Press, 163⫺179.
Prinz, Philip M./Prinz, Elizabeth A.
1985 If Only You Could Hear What I See: Discourse Development in Sign Language. In:
Discourse Processes 8, 1⫺19.
Pyers, Jenny
2006 Indicating the Body: Expression of Body Part Terminology in American Sign Language.
In: Language Sciences 28, 280⫺303.
Richmond-Welty, E. Daylene/Siple, Patricia
1999 Differentiating the Use of Gaze in Bilingual-bimodal Language Acquisition: A Com-
parison of Two Sets of Twins with Deaf Parents. In: Journal of Child Language 26,
321⫺388.
Roush, Daniel
1999 Indirectness Strategies in American Sign Language. MA Thesis, Gallaudet University.
Roy, Cynthia B.
1989 Features of Discourse in an American Sign Language Lecture. In: Lucas, Ceil (ed.),
Sociolinguistics of the Deaf Community. San Diego: Academic Press, 231⫺251.
Russo, Tommaso
2004 Iconicity and Productivity in Sign Language Discourse: An Analysis of Three LIS Dis-
course Registers. In: Sign Language Studies 4(2), 164⫺197.
Rutherford, Susan
1985 The Traditional Group Narrative of Deaf Children. In: Sign Language Studies 47,
141⫺159.
Sacks, Harvey/Schegloff, Emanuel A./Jefferson, Gail
1974 A Simplest Systematics for the Organization of Turn-taking for Conversation. In: Lan-
guage 50, 696⫺735.
Schegloff, Emanuel/Jefferson, Gail/Sacks, Harvey
1977 The Preference for Self-correction in the Organization of Repair in Conversation. In:
Language 53, 361⫺382.
Schermer, Trude/Koolhof, Corline/Harder, Rita/de Nobel, Esther (eds.)
1991 De Nederlandse Gebarentaal [Sign Language of the Netherlands]. Twello: Van Tricht.
Searle, John
1969 Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge Univer-
sity Press.
Smith, Sandra/Sutton-Spence, Rachel
2005 Adult-child Interaction in a BSL Nursery ⫺ Getting Their Attention! In: Sign Lan-
guage & Linguistics 8(1/2), 131⫺152.
Spencer, Patricia
2000 Looking Without Listening: Is Audition a Prerequisite for Normal Development of
Visual Attention During Infancy? In: Journal of Deaf Studies and Deaf Education 5(4),
291⫺302.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Swisher, M. Virginia
1992 The Role of Parents in Developing Visual Turn-taking in Their Young Deaf Children.
In: American Annals of the Deaf 137, 92⫺100.
Van Herreweghe, Mieke
2002 Turn-taking Mechanisms and Active Participation in Meetings with Deaf and Hearing
Participants in Flanders. In: Lucas, Ceil (ed.), Turntaking, Fingerspelling, and Contact
in Signed Languages. Washington, DC: Gallaudet University Press, 73⫺106.
Winston, Elizabeth A.
1991 Spatial Referencing and Cohesion in an ASL Text. In: Sign Language Studies 73,
397⫺410.
Young, Alys/Ackermann, Jennifer/Kyle, Jim
2000 On Creating a Workable Signing Environment ⫺ Deaf and Hearing Perspectives. In:
Journal of Deaf Studies and Deaf Education 5(2), 186⫺195.

Anne Baker, Amsterdam (The Netherlands)
Beppie van den Bogaerde, Utrecht (The Netherlands)
V. Communication in the visual modality

23. Manual communication systems: evolution and variation
1. Introduction
2. The origin of sign languages
3. Sign language types and sign language typology
4. Tactile sign languages
5. Secondary sign languages
6. Conclusion
7. Literature

Abstract
This chapter addresses issues in the evolution and typology of manual communication
systems. From a language evolution point of view, sign languages are interesting because
it has been suggested that oral language may have evolved from gestural (proto)lan-
guage. As far as typology is concerned, two issues will be addressed. On the one hand,
different types of manual communication systems, ranging from simple gestural codes
to complex natural sign languages, will be introduced. The use and structure of two types
of systems ⫺ tactile sign languages and secondary sign languages ⫺ will be explored in
more detail. On the other hand, an effort will be made to situate natural sign languages
within typological classifications originally proposed for spoken languages. This ap-
proach will allow us to uncover interesting inter-modal and intra-modal typological dif-
ferences and similarities.

1. Introduction
Throughout this handbook, when authors speak of ‘sign language’, they usually refer
to fully-fledged natural languages with complex grammatical structures which are the
major means of communication of many (but not all) prelingually deaf people. In the
present chapter, however, ‘sign language’ is sometimes understood more broadly and
also covers manual communication systems that do not display all of the features usu-
ally attributed to natural languages (such as, for example, context-independence and
duality of patterning). In addition, however, labels such as ‘gestural code’ or ‘sign
system’ will also be used in order to make a qualitative distinction between different
types of systems.
This chapter addresses issues in the emergence and typology of manual communica-
tion systems, including but not limited to natural sign languages. The central theme
connecting the sections is the question of how such systems evolve, as general means
514 V. Communication in the visual modality

of communication but also in more specialized contexts, and how the various systems
differ from each other with respect to expressivity and complexity. The focus will be
on systems that are the primary means of communication in a certain context ⫺ no
matter how limited they are. Co-speech gesture is thus excluded from the discussion,
but is dealt with in detail in chapter 27.
In section 2, we will start our investigation with a discussion of hypotheses concern-
ing the origin of (sign) languages, in particular, the gestural theory of language origin.
In section 3, we present an overview of different types of manual communication sys-
tems ⫺ from gestural codes to natural sign languages ⫺ and we sketch how sign lan-
guage research relates to linguistic typology. In particular, we will address selected
topics in intra- and inter-modal typological variation. In the next two sections, the focus
will be on specific types of sign languages, namely the tactile sign languages used in
communication with deafblind people (section 4) and sign languages which, for various
reasons, are developed and used within hearing groups or communities, the so-called
‘secondary sign languages’ (section 5).

2. The origin of sign languages

The origin and evolution of language are currently hotly debated issues in evolutionary
biology as well as in linguistics. Sign languages are interesting in this context because
some scholars argue that manual communication may have preceded vocal communi-
cation. Since language does not fossilize, all the available evidence for evolutionary
scenarios is indirect and comes from diverse sources including fossil evidence, cultural
artifacts (such as Acheulean hand-axes), birdsong, and co-speech gesture. In the follow-
ing, I will first present a brief sketch of what we (think we) know about language
evolution (section 2.1) before turning to the gestural theory of language origin (sec-
tion 2.2).

2.1. The evolution of language

According to Fitch (2005, 2010), three components have been identified as crucial for
the human language faculty: speech (that is, the signal, be it spoken or signed), syntax,
or grammar (that is, the combinatorial rules of language), and semantics (that is, our
ability to convey an unlimited range of meanings).
Human speech production involves two key factors, namely our unusual vocal tract
and vocal imitation. The descended larynx of humans enables them to produce a
greater diversity of formant frequency patterns. While this anatomical change is cer-
tainly an important factor, recent studies indicate that “selective forces other than
speech might easily have driven laryngeal descent at one stage of our evolution” (Fitch
2005, 199). Since other species with a permanently descended larynx have been discov-
ered (e.g. lions), it is likely that the selective force is the ability to produce impressive
vocalizations (the ‘size exaggeration hypothesis’; also see Fitch 2002). Still, it is clear
that early hominids were incapable of producing the full range of speech sounds (Lie-
berman 1984; Fitch 2010).

Imitation is a prerequisite for language learning and communication. Interestingly,
while non-human primates are highly constrained when it comes to imitation, other
species, like birds and dolphins, are very good at imitating vocalizations. Birdsong in
particular has attracted the attention of scholars because it shows interesting parallels
with speech (Marler 1997; Doupe/Kuhl 1999). First, most songbirds learn their species-
specific songs by listening to other members of their species. Second, they pass through
a critical period in development; acquisition after the critical period results in defective
songs. Third, at least some birdsong displays syntactic structure in that smaller units
are combined to form larger units (Okanoya 2002). In contrast to human language,
however, birdsong is devoid of compositional meaning.
Based on these parallels, it has been suggested (for instance, by Darwin) that the
earliest stage of language evolution may have been musical. Fitch (2005, 220) refers to
this stage as ‘prosodic protolanguage’, that is, a language which is characterized by
complex, learned vocalization but lacks compositional meaning. Presumably, the evolu-
tion of this protolanguage was driven by sexual selection (Okanoya 2002). At a later
stage, communicative needs may have motivated the addition of semantics. “By this
hypothesis, music is essentially a behavioral ‘fossil’ of an earlier human communication
system” (Fitch 2005, 221; also see Fitch 2006).
While the above scenario could be paraphrased as ‘syntax without semantics’, an
alternative scenario suggests that early stages of language were characterized by ‘se-
mantics without syntax’; this is referred to as ‘asyntactic protolanguage’. According to
this hypothesis, protolanguage consisted of utterances of only a single word, or simple
concatenations of words, without phrase structure (Jackendoff 1999; Bickerton 2003).
Jackendoff (1999, 273) suggests that single-word utterances associated with high affect,
such as wow!, ouch!, and dammit! are “‘fossils’ of the one-word stage of language
evolution ⫺ single-word utterances that for some reason are not integrated into the
larger combinatorial system”. Jackendoff further assumes that the first vocal symbols
were holistic gestalts (pretty much like primate calls) and that a phonological system
evolved when the repertoire of symbols (the lexicon) increased. Since a larger lexicon
requires more phonological distinctions, one may speculate that the evolution of the
vocal tract (the descended larynx) was “driven by the adaptivity of a larger vocabulary,
through more rapid articulation and enhanced comprehensibility” (Jackendoff 1999,
274).
A third evolutionary scenario, which assumes a ‘gestural protolanguage’, will be
addressed in the following section. Before concluding this section, however, I want to
point out that the recent isolation of a language-related gene, called Forkhead-box P2
(or FOXP2), has caused considerable excitement among linguists and evolutionary
biologists (Vargha-Khadem et al. 1995). It has been found that the human version of
FOXP2 is functionally identical in all populations worldwide, but differs significantly
from that of chimpanzees. Statistical analysis of the relevant changes suggests that
these changes occurred not more than 200,000 years ago in human phylogeny (see
Fitch (2005, 2010) for details).

2.2. The gestural theory of language origin


I shall now describe one scenario, the gestural theory of language origin, in more detail
because it emphasizes the crucial role of manual communication in the evolution of
language (Hewes 1973, 1978; Armstrong/Wilcox 2003, 2007; Corballis 2003). According
to this theory, protolanguage was gestural, that is, composed of manual and facial
gestures. The idea that language might have evolved from gestures is not a new one;
actually, it has been around since the French Enlightenment of the 18th century, if not
longer (Armstrong/Wilcox 2003). The gestural hypothesis is consistent with the exis-
tence of co-speech gesture (see chapter 27), which thus could be interpreted as a rem-
nant of gestural protolanguage, and with the fact that sign languages are fully-fledged,
natural languages. Further support comes from the observation that apes are consider-
ably better at learning signs than speech (Gardner/Gardner/van Cantfort 1989).
As for anatomical developments, it has been established that bipedalism and en-
largement of the brain are the defining anatomical traits of the hominid lineage (which
separated from the lineage leading to chimpanzees approximately 5⫺6 million years
ago). Once our ancestors became bipedal, the hands were available for tool use and
gestural communication. Fossil evidence also indicates that about three million years
ago, “the human hand had begun to move toward its modern configuration” while “the
brain had not yet begun to enlarge, and the base of the skull, indicative of the confor-
mation of the vocal tract, had not begun to change toward its modern, speech-enabling
shape” (Armstrong/Wilcox 2003, 307). In other words: it seems likely that manual
communication was possible before vocal communication, and assuming that there was
a desire or need for an efficient exchange of information, gestural communication may
have evolved.
Gradually, following a phase of co-occurrence, vocal gestures must have replaced
manual gestures. However, given the existence of sign languages, the obvious question
is why this change should have occurred in the first place. Undoubtedly, speech is
more useful when interlocutors cannot see each other and while holding tools; also, it
“facilitated pedagogy through the simultaneous deployment of demonstration and ver-
bal description” (Corballis 2010, 5). Some scholars, however, doubt that these pressures
would have been powerful enough to motivate a change from manual to vocal commu-
nication and thus criticize the gestural hypothesis (MacNeilage 2008).
In the 1990s, the gestural theory was boosted when mirror neurons (MNs) were
discovered in the frontal cortex of non-human primates (Rizzolatti/Arbib 1998). MNs
are activated both when the monkey performs a manual action and when it sees an-
other monkey perform the same action. According to Fitch (2005, 220), this discovery
is exciting for three reasons. First, MNs have “the computational properties that would
be required for a visuo-manual imitation system”, and, as mentioned above, imitation
skills are crucial in language learning. Second, MNs have been claimed to support the
gestural theory because they respond to manual action (Corballis 2003). Third, and
most importantly, MNs are located in an area of the primate brain that is analogous
to Broca’s area in humans, which is known to play a central role in both language
production and comprehension. The fact that (part of) Broca’s area is not only involved
in speech but also in motor functions such as complex hand movements (Corballis
2010) lends further support to an evolutionary link between gestural and vocal commu-
nication (also see Arbib 2005).
Clearly, when it comes to the evolution of cognition in general, and the evolution
of language in particular, one should “not confuse plausible stories with demonstrated
truth” (Lewontin 1998, 129). Given the speculative nature of many of the issues ad-
dressed above, it seems impossible to prove that the gestural theory of language origin
is correct. According to Corballis (2010, 5), the gestural theory thus “best serves as a
working hypothesis to guide research into the nature of language, and the genetic and
evolutionary changes that gave rise to our species” ⫺ a statement that might as well
be applied to the other evolutionary scenarios.

3. Sign language types and sign language typology

3.1. Sign language types

In this section, I will provide a non-exhaustive typology of manual communication
systems (to use a fairly neutral term), proceeding from simple context-bound gestural
codes to complex natural sign languages. We will see that more complex systems may
evolve from simpler ones ⫺ a development which, to some extent, might mirror proc-
esses which presumably also played a role in the evolution of (sign) language.
First, there are gestural communication systems and technical manual codes used,
for instance, over distances that preclude oral communication (such as the crane driver
guider gestures described by Kendon (2004, 292 f.)), under water (manual symbols used
by scuba divers), or in situations which require silence (for instance, manual communi-
cation during hunting; see e.g. Lewis (2009)). Clearly, all of these manual codes are
only useful in very specific contexts. Still, the existence of hunting codes in particular
is interesting in the present context because it has been argued that at least some sign
languages may have developed from manual codes used during hunting (Divale/Zipin
1977; Hewes 1978).
Crucially, none of these gestural communication systems is used by deaf people.
This is a feature they share with ‘secondary sign languages’, sign languages which, for
various reasons, were developed and used by hearing people. The manual communica-
tion systems commonly subsumed under the label ‘secondary sign language’ (e.g. sign
languages used by Australian Aboriginals or monks) show varying degrees of lexical
and grammatical complexity, but all of them appear to be considerably more elaborate
than the manual codes mentioned above. Aspects of the use and structure of secondary
sign languages will be discussed in detail in section 5.
So-called ‘homesign’ systems are also used in highly restricted contexts; these
contexts, however, are not situational in nature (e.g. diving, hunting) but familial.
Prelingually deaf children growing up in hearing families without sign
language input may develop gestural communication systems to interact with their
parents and siblings. Within a family, such systems may be quite effective means of
communication, but typically, they are used for only one generation and are not trans-
mitted beyond the family. While at first sight, a homesign system may appear to be a
fairly simple conglomerate of mostly iconic gestures, research has shown that these
gestures are discrete units and that there is evidence of morphological and syntactic
structure (e.g. predicate frames, recursion) in at least some homesign systems (Goldin-
Meadow 2003; see chapter 26 for extensive discussion). Homesign systems are known
to have the potential to develop further into fully-fledged sign languages, once home-
signers get in contact with each other, for example, at a boarding school ⫺ as has been
documented, for instance, for Nicaraguan Sign Language (Kegl/Senghas/Coppola 1999;
see chapter 36, Language Emergence and Creolisation, for discussion).
Moving further from less complex systems towards ‘true’ sign languages, we find
various types of manual communication systems that combine the lexicon of a sign
language with structural elements of the surrounding spoken language. Such systems ⫺
for instance, Manually-Coded English (MCE) in the United States and Nederlands met
Gebaren (Sign-supported Dutch) in the Netherlands ⫺ are commonly used in educa-
tional settings or, more generally, when Deaf signers interact with hearing second lan-
guage learners of a sign language. Even within this class of systems, however, a consid-
erable amount of structural variation exists (also see Crystal/Craig (1978), who refer
to such systems as ‘contrived sign languages’). Some systems mirror the structure of a
spoken language to the extent that functional morphemes are represented by dedicated
signs or fingerspelling (e.g. the copula verb be or bound morphemes like -ing and third
person singular -s in English-based systems). Other systems are closer to a particular
sign language in that many of the grammatical mechanisms characteristic of the sign
language are preserved (e.g. use of space, non-manual marking), but signs are ordered
according to the rules of the spoken language (for MCE, see Schick (2003); also see
chapter 35, Language Contact and Borrowing).
Turning finally to natural sign languages, further classifications have been proposed
(Zeshan 2008). To some extent, these classifications reflect developments in the field
of sign language linguistics (Perniss/Pfau/Steinbach 2007; also see chapter 38). In the
1960s and 1970s, linguistic research on sign languages started with descriptions of a
number of western sign languages, such as American Sign Language (ASL), Sign Lan-
guage of the Netherlands (NGT), and Swedish Sign Language (SSL). Apart from a
few exceptions, it was only from the 1990s onwards that these descriptions were com-
plemented by studies focusing on non-western sign languages, e.g. Brazilian Sign Lan-
guage (LSB), Indopakistani Sign Language (IPSL), and Japanese Sign Language (NS).
More recently, the so-called ‘village sign languages’, that is, sign languages used in
village communities with a high incidence of genetic deafness, have entered the stage
of sign language linguistics (see chapter 24 for discussion).
In Figure 23.1, different types of manual communication systems are arranged along
a continuum of complexity, and possible developmental paths from one system to
another are pointed out.

Fig. 23.1: Types of manual communication systems; the arrows indicate possible developments of one system into another
Fig. 23.2: The mosaic of sign language data (adapted from Zeshan 2008, 675)

Focusing on the rightmost box in Figure 23.1, the natural sign
languages, Zeshan (2008, 675) presents different subtypes in a ‘mosaic of sign language
data’, an adapted version of which is presented in Figure 23.2. In this mosaic, western
and non-western sign languages are both classified as ‘urban sign languages’, contrast-
ing them with village sign languages. Note that Zeshan also hypothesizes that further
sign language types may yet have to be discovered (the ‘?’-box in Figure 23.2).
Taken together, the discussion in this section shows that manual communication
systems differ from each other with respect to (at least) the following parameters: (i)
complexity and expressivity of the system; (ii) type and size of community (or group)
in which the system is used; and (iii) influence of surrounding spoken language on the
system (see Crystal/Craig (1978, 159) for a classificatory matrix of different types of
manual communication systems (‘signing behaviors’), ranging from cricket signs via
symbolic dancing to ASL).

3.2. Sign languages and linguistic typology

Having introduced different types of sign systems and sign languages, I will now zoom
in on natural sign languages in order to address some of the attested inter-modal and
intra-modal typological patterns and distinctions. Two questions will guide our discus-
sion: (i) in how far can typological classifications that have been proposed on the basis
of spoken languages be applied to sign languages, and (ii) to what extent do sign
languages differ from each other typologically? Obviously, developments within the
field of sign language typology have gone hand in hand with the increased number of
sign languages being subject to linguistic investigation. Given that many typologically
relevant aspects are discussed extensively in sections II and III of this handbook, I will
only provide a brief overview of some of the phenomena that have been investigated
from a typological perspective; I refer the reader to the relevant chapters for examples
and additional references. I will focus on morphological typology, word order, negation,
and agreement (also see Schuit/Baker/Pfau 2011; Slobin accepted).

3.2.1. Morphological typology

Spoken languages are commonly classified based on their morphological typology, that
is, the amount of (linear) affixation and fusion. A language with only monomorphemic
words is of the isolating type, while a language which allows for polymorphemic words
is synthetic (or polysynthetic if it also features noun incorporation). A synthetic lan-
guage in which morphemes are easily segmented is agglutinative; if segmentation is
impossible, it is called fusional (Comrie 1989).
Signs are known to be of considerable morphological complexity (Aronoff/Meir/
Sandler 2005), but the fact that morphemes tend to be organized simultaneously rather
than sequentially makes a typological classification less straightforward. Consider, for
instance, the NGT verb give. In its base form, this verb is articulated with a u-hand and
consists of a location-movement-location (L-M-L) sequence (movement away from the
signer’s body). The verb can be modified such that it expresses a complex meaning
like, for example, ‘You give me a big object with some effort’ by changing the hand-
shape, the direction and manner of movement, as well as non-manual features. All of
these changes happen simultaneously, such that the resulting sign is still of the form
L-M-L; no sequential affixes are added. Simultaneity, however, is not to be confused
with fusion; after all, all of the morphemes involved (viz. subject and object agreement,
classifier, manner adverb) are easily segmented. It therefore appears that NGT is ag-
glutinative (a modality-independent classification), but that morphemes are capable of
combining simultaneously (a modality-specific feature). Admittedly, simultaneous morphol-
ogy is also attested in spoken languages (e.g. tone languages) but usually, there is a
maximum of two simultaneously combined morphemes.
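To make the notion of simultaneous combination more concrete, consider the following toy sketch (in Python; the parameter labels and values are illustrative inventions of mine, not an analysis from the sign language literature): morphemes are realized by simultaneously overwriting parameter values of the sign, while the L-M-L skeleton remains constant and no sequential affix is added.

# A toy model of simultaneous morphology; parameter names are illustrative only.
base_give = {
    "skeleton": "L-M-L",                  # location-movement-location sequence
    "handshape": "base handshape",
    "movement": "away from signer's body",
    "manner": "neutral",
}

def modify(sign, **morphemes):
    # Morphemes are realized simultaneously: they overwrite parameter
    # values instead of being appended as sequential affixes.
    return {**sign, **morphemes}

complex_give = modify(
    base_give,
    handshape="classifier: big object",        # classifier morpheme
    movement="from addressee towards signer",  # subject/object agreement
    manner="with effort",                      # manner adverb
)
assert complex_give["skeleton"] == "L-M-L"     # the form is still L-M-L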
As for intra-modal typology, it appears that all sign languages investigated to date
are of the same morphological type. Still, it is possible that they differ from each other
in the amount of manual and non-manual morphological operations that can be applied
to a stem (Schuit 2007).

3.2.2. Word order

In the realm of syntax, word order (or, more precisely, constituent order) is probably
the typological feature that has received most attention. For many spoken languages,
a basic word order has been identified, where ‘basic’ is usually determined by criteria
such as frequency, distribution, pragmatic neutrality, and morphological markedness
(Dryer 2007). Typological surveys have revealed that by far the most common word
orders are S(ubject)-O(bject)-V(erb) and SVO. In Dryer’s (2011) sample of 1377 lan-
guages, 565 are classified as SOV (41 %) and 488 (35 %) as SVO. The third most fre-
quent basic word order is VSO, which is attested in 95 (7 %) of the languages in the
sample. In other words: in 83 % of all languages, the subject precedes the verb, and in
79 % (including the very few OVS and VOS languages), the object and the verb are
adjacent. However, it has been argued that not all languages exhibit a basic word order
(Mithun 1992). According to Dryer, 189 languages in his sample (14 %) lack a domi-
nant word order.
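The percentages cited from Dryer's sample follow directly from the raw counts; the following minimal sketch simply re-derives them, using only the figures quoted above.

# Word-order counts from Dryer's (2011) sample of 1377 languages, as cited above.
counts = {"SOV": 565, "SVO": 488, "VSO": 95, "no dominant order": 189}
for order, n in counts.items():
    print(f"{order}: {n} languages ({100 * n / 1377:.0f} %)")
# prints: SOV: 565 languages (41 %); SVO: 488 languages (35 %);
# VSO: 95 languages (7 %); no dominant order: 189 languages (14 %)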
Given that to date, word order has only been investigated for a small number of
sign languages, it is impossible to draw firm conclusions. A couple of things, however,
are worth noting. First, in all sign languages for which a basic word order has been
identified, the order is either SOV (e.g. Italian Sign Language, LIS) or SVO (e.g. ASL).
Second, for some sign languages, it has also been suggested that they lack a basic word
order (Bouchard 1997). Third, it has been claimed that in some sign languages, word
order is not determined by syntactic notions, but rather by pragmatic (information
structure) notions, such as Topic-Comment. Taken together, we can conclude (i) that
word order typology can usefully be applied to sign languages, and (ii) that sign lan-
guages differ from each other in their basic word order (see Kimmelman (2012) for a
survey of factors that may influence word order; also see chapter 12 for discussion).

3.2.3. Negation

In all sign languages studied to date, negation can be expressed manually (i.e. by a
manual particle) and non-manually (i.e. by a head movement). Therefore, at first sight,
the expression of negation appears to be typologically highly homogeneous. However,
based on a typological survey, Zeshan (2004) proposes that sign language negation
actually comes in two different types: manual dominant and non-manual dominant
systems. The former type of system is characterized by the fact that the use of a manual
negative particle is obligatory; such a system has been identified in, for example, Turk-
ish Sign Language (TİD) and LIS. In contrast, in non-manual dominant sign languages,
sentences are commonly negated by a non-manual marker only; this pattern is found,
for instance, in NGT, ASL, and IPSL. Moreover, there are differences with respect to
the non-manual marker. First, as far as the form of the marker is concerned, some sign
languages (e.g. TİD) employ a backward head tilt, in addition to a negative headshake
(which is the most common non-manual marker across all sign languages studied).
Second, within the group of non-manual dominant sign languages, there appear to be
sign language specific constraints concerning the scope of the non-manual marker (see
chapter 15 for discussion).
As for the typology of negation in spoken languages, an important distinction is
that between particle negation (e.g. English) and morphological/affixal negation (e.g.
Turkish). Moreover, in languages with split negation (e.g. French), two negative el-
ements ⫺ be it two particles or a particle and an affix ⫺ are combined to negate a
proposition (Payne 1985). According to Pfau (2008), this typology can be applied to
sign languages. He argues that, for instance, German Sign Language (DGS), a non-
manual dominant sign language, has split negation, with the manual negator being a
particle (which, however, is optional) and the non-manual marker, the headshake, be-
ing an affix which attaches to the verb. In contrast, LIS has simple particle negation;
in this case, the particle may be lexically specified for a headshake. If this account is
on the right track, then, as before, we find inter-modal typological similarities as well
as intra-modal differences.

3.2.4. Agreement

The sign language phenomenon that some scholars refer to as ‘agreement’ is particu-
larly interesting from a cross-modal typological point of view because it is realized in
the signing space by modulating phonological properties (movement and/or orienta-
tion) of verbs (see chapter 7 for extensive discussion; for a recent overview also see
Lillo-Martin/Meier (2011)).
We know from research on spoken languages that languages differ with respect to
the ‘richness’ of their verbal agreement systems. At the one end of the continuum lie
languages with a ‘rich’ system, where every person/number distinction is spelled out
by a different morphological marker (e.g. Turkish); at the other end, we find languages
in which agreement is never marked, that is, ‘zero’ agreement languages (e.g. Chinese).
All languages that fall in between the two extremes could be classified as ‘poor’ agree-
ment languages (e.g. English, Dutch). A further classification is based on the distinction
between subject and object agreement. In spoken languages, object agreement is more
marked than subject agreement, that is, all languages that have object agreement also
have subject agreement, while the opposite is not the case. Finally, in a language with
agreement ⫺ be it rich or poor ⫺ generally all verbs agree in the same way (Corbett
2006).

All of these aspects appear to be different in sign languages. First, in all sign lan-
guages for which an agreement system has been described, only a subgroup of verbs
(the so-called ‘agreeing’ verbs) can be modulated to show agreement (Padden 1988).
Leaving theoretical controversies aside, one could argue that agreeing verbs mark ev-
ery person/number distinction differently, that is, by dedicated points in space. In con-
trast, other verbs (‘plain verbs’) can never change their form to show agreement.
Hence, in a sense, a rich and a zero agreement system are combined within a single
sign language. Second, subject agreement has been found to be generally more marked
than object agreement in that (i) some verbs only show object agreement and (ii)
subject agreement is sometimes optional.
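The contrast between the two markedness patterns can be restated as a pair of implications; the sketch below is merely one way of making the generalizations above explicit (the predicate names are mine), not an established formalism.

# Implicational markedness patterns as simple checks (illustrative only).
def ok_spoken_language(lang):
    # Spoken languages: object agreement implies subject agreement (Corbett 2006).
    return lang["subject_agr"] or not lang["object_agr"]

def ok_agreeing_verb(verb):
    # Sign languages: subject agreement implies object agreement, per the
    # description above (some verbs show only object agreement).
    return verb["object_agr"] or not verb["subject_agr"]

print(ok_agreeing_verb({"subject_agr": False, "object_agr": True}))  # True: attested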
In addition, while agreement markers for a certain person/number combination may
differ significantly across spoken languages, all sign languages that mark agreement do
so in a strikingly similar way. Still, we also find intra-modal variation. Some sign lan-
guages, for instance, do not display an agreement system of the type sketched above
(e.g. Kata Kolok, a village sign language of Bali (Marsaja 2008)). In other sign lan-
guages, agreement can be realized by dedicated auxiliaries in the context of plain verbs
(see chapter 10 for discussion). It thus seems that in the realm of agreement, well-
known typological classifications are only of limited use when it comes to sign lan-
guages (also see Slobin (accepted) for a typological perspective on sign language agree-
ment). Space does not allow me to go into detail, but at least some of the patterns we
observe are likely to result from specific properties of the visual modality, in particular,
the use of signing space and the body of the signer (Meir et al. 2007).

3.2.5. Summary

The above discussion makes clear that sign language typology is a worthwhile en-
deavor ⫺ both from an inter- and intra-modal perspective. One can only agree with
Slobin (accepted), who points out that “the formulation of typological generalizations
and the search for language universals must be based […] on the full set of human
languages ⫺ spoken and signed”. As for inter-modal variation, we have seen that cer-
tain (but not all) typological classifications are fruitfully applied to sign languages.
Beyond the aspects addressed above, this has also been argued for the typology of
relative clauses: just like spoken languages, sign languages may employ head-internal
or head-external relative clauses (see chapter 16, Complex Sentences, for discussion).
Slobin (accepted) discusses additional typological parameters such as locus of marking
(head- vs. dependent marking), framing (verb- vs. satellite-framed), and subject vs.
topic-prominence, among others, and concludes that all sign languages are head-mark-
ing, verb-framed, and topic-prominent, that is, that there is no intra-modal variation in
these areas. This brings us back to the question whether sign languages ⫺ certain
typological differences notwithstanding ⫺ are indeed typologically more similar than
spoken languages and to what extent the modality determines these similarities ⫺ a ques-
tion that I will not attempt to answer here (see chapter 25, Language and Modality,
for further discussion).
Obviously, recurring typological patterns might also be due to genetic relationships
between sign languages (see chapter 38) or reflect the influence of certain areal fea-
tures also attested in surrounding spoken languages (e.g. use of question particles in
East Asian sign languages). In addition, socio-demographic factors such as type of
community (community size and number of second language learners) and size of
geographical area in which a language is used have also been argued to have an influ-
ence on certain grammatical properties of a language (Kuster 2003; Lupyan/Dale 2010).
This latter factor might, for instance, result in a certain degree of typological homo-
geneity among village sign languages. At present, however, little is known about
the impact of such additional factors on sign language typology.

4. Tactile sign languages


Sign languages are visual languages and therefore, successful signed communication
crucially relies on visual contact between the interlocutors (as pointed out in sec-
tion 2.2, this constraint may have contributed to the emergence of spoken languages).
As a consequence, sign language is not an accessible means of communication for
people who are deaf and blind. Tactile sign languages are an attempt to overcome this
obstacle by shifting the perception of the language from the visual to the haptic chan-
nel. Obviously, this shift requires certain accommodations. In this section, I will first
say a few words about the etiology of deafblindness before turning to characteristic
features of tactile sign languages.

4.1. Deafblindness

‘Deafblindness’ is a cover term which describes the condition of people who suffer
from varying degrees of visual and hearing impairment. It is important to realize that
the term does not necessarily imply complete deafness and blindness; rather, deafblind
subjects may have residual hearing and/or vision. Still, all deafblind people have in common
that their combined impairments impede access to visual and acoustic information to
the extent that signed or spoken communication is no longer possible.
Deafblindness (DB) may have various etiologies. First, we have to distinguish con-
genital DB from acquired DB. Congenital DB may be a symptom associated with
congenital rubella (German measles) syndrome, which is caused by a viral infection of
the mother during the first months of pregnancy. Congenital DB rarely occurs in isola-
tion; it usually co-occurs with other symptoms such as low birth weight, failure to
thrive, and heart problems. The most common cause for acquired DB appears to be
one of the various forms of Usher syndrome, an autosomal recessive genetic disorder.
All subjects with Usher syndrome suffer from retinitis pigmentosa, a degenerative eye
disease which affects the retina and leads to progressive reduction of the visual field
(tunnel vision), sometimes resulting in total blindness. Usher type 1 is characterized
by congenital deafness while subjects with Usher type 2 are born hard-of-hearing. Oc-
casionally, in the latter type, hearing loss is progressive. In addition, DB may result
from hearing and/or visual impairments associated with ageing ⫺ actually, this is prob-
ably the most common cause for DB. Three patterns have to be distinguished: (i) a
congenitally deaf person suffers from progressive visual impairment; (ii) a congenitally
blind person suffers from progressive hearing loss; or (iii) a person born with normal
hearing and vision experiences a combination of both deteriorations (Aitken 2000;
Balder et al. 2000).

Fig. 23.3: The Lorm alphabet (palm of left hand shown)
Depending on the onset and etiology of DB, a deafblind individual may choose
different communication methods. Some of these methods are related to spoken lan-
guage, or rather writing, in that they employ haptic representations of letters. Letters
may, for instance, be drawn in the palm of the deafblind receiver. A faster method is
the so-called Lorm alphabet (after Hieronymus Lorm (1821⫺1902), who, deafblind
himself, developed the system in 1881), which assigns letters to locations on the fingers
or the palm. Some letters are represented by a point (e.g. ‘E’ ⫺ touch top of receiver’s
ring finger), others by lines (e.g. ‘D’ ⫺ brush along receiver’s middle finger from top
towards palm); see Figure 23.3. Other communicative strategies are based on sign lan-
guage; these strategies are more commonly used by individuals who are born with
unimpaired vision but are congenitally or prelingually deaf and have acquired sign
language at an early age. People with Usher syndrome, for instance, who can still see
but suffer from tunnel vision may profit from signed input when use is made of a
reduced signing space in front of the face. Once the visual field is further reduced or
has disappeared completely, a subject may switch to tactile sign language, a mode of
communication that will be elaborated on in the next section.
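As an illustration, the Lorm alphabet can be viewed as a simple letter-to-touch code. The sketch below is hypothetical and deliberately partial: only the assignments for ‘E’ and ‘D’ are taken from the description above; the remaining letters would have to be filled in from the full alphabet shown in Figure 23.3.

# Partial letter-to-touch mapping; only 'E' and 'D' are from the text above.
LORM = {
    "E": "point: touch the top of the receiver's ring finger",
    "D": "line: brush along the receiver's middle finger from top towards palm",
}

def lorm_spell(word):
    # Convert a word into a sequence of touch instructions.
    return [LORM.get(ch, "<letter not in this partial sketch>") for ch in word.upper()]

print(lorm_spell("De"))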

4.2. Characteristics of tactile communication

Generally, tactile sign languages are based on existing natural sign languages which,
however, have to be adapted in certain ways to meet the specific needs of deafblind
people. To date, characteristics of tactile communication have been explored for tactile
ASL (Reed et al. 1995; Collins/Petronio 1998; Quinto-Pozos 2002), tactile SSL (Mesch
2001), tactile NGT (Balder et al. 2000), tactile French Sign Language (Schwartz 2009),
and tactile Italian Sign Language (Cecchetto et al. 2010).
Conversations between deafblind people are limited to two participants. Four-
handed interactions have to be distinguished from two-handed interactions. In the
former, the conversational partners are located opposite each other and the receiver’s
hands are either both on top of the signer’s hands (monologue position; see Figure 23.4)
or are in different positions, one under and one on top of the signer’s hands (dialogue
position; Mesch 2001). In two-handed interactions, the signer and the receiver are
located next to each other. In this setting, the receiver is usually more passive (e.g.
when receiving information from an interpreter). In both settings, the physical proximity
of signer and receiver usually results in a reduced signing space.

Fig. 23.4: Positioning of hands in tactile sign language; the person on the right is the receiver (source: http://www.flickr.com)
In the following subsections, we will consider additional accommodations at various
linguistic levels that tactile communication requires.

4.2.1. Phonology

As far as the phonology of signs is concerned, Collins and Petronio (1998) observe
that handshapes were not altered in tactile ASL, despite the fact that some handshapes
are difficult to perceive (e.g. ASL number handshapes in which the thumb makes
contact with one of the other fingers). Due to the use of a smaller signing space, the
movement paths of signs were generally shorter than in visual ASL. Moreover, the
reduced signing space was also found to affect the location parameter; in particular,
signs without body contact tend to be displaced towards the center of the signing space.
Balder et al. (2000) describe how in NGT, signs that are usually articulated in the
signing space (e.g. walk) are sometimes articulated on the receiver’s hand. In signs
with body contact, Collins and Petronio (1998) observe an interesting adaptation: in
order to make the interaction more comfortable, the signer would often move the
respective body part towards the signing hand, instead of just moving the hand towards
the body part to make contact. Finally, adaptations in orientation may result from the
fact that the receiver’s hand rests on top of the signer’s hand. Occasionally, maintaining
the correct orientation would require the receiver’s wrist to twist awkwardly. Collins
and Petronio do not consider non-manual components such as mouthings and mouth
gestures. Clearly, such components are not accessible to the deafblind receiver. Balder
et al. (2000) find that in minimal pairs that are only distinguished by mouthing (such
as the NGT signs brother and sister), one of the two would undergo a handshape
change: brother is signed with a u-hand instead of a W-hand.

4.2.2. Morphology

Non-manuals also play a crucial role in morphology because adjectival and adverbial
modifications are commonly expressed by non-manual configurations of the lower face
(Liddell 1980; Wilbur 2000). The data collected by Collins and Petronio (1998) suggest
that non-manual morphemes are compensated for by subtle differences in the sign’s
manual articulation. For instance, instead of using the non-manual adverbial “mm”,
which expresses relaxed manner, a verbal sign (e.g. drive) can be signed more slowly
and with less muscle tension (also see Collins 2004). For NGT, Balder et al. (2000) also
observe that manual signs may replace non-manual modifiers; for example, the manual
sign very-much may take over the function of an intensifying facial expression accom-
panying the sign angry to express the meaning ‘very angry’.

4.2.3. Syntax

Interesting adaptations are also attested in the domain of syntax, and again, for the
most part, these adaptations are required to compensate for non-manual markers.
Mesch (2001) presents a detailed analysis of interrogative marking in tactile SSL. Obvi-
ously, yes/no-questions pose a bigger challenge in tactile conversation since wh-ques-
tions usually contain a wh-sign which is sufficient to signal the interrogative status of
the utterance. Almost half of the yes/no-questions from Mesch’s corpus are marked by
an extended duration of the final sign. Mesch points out, however, that such a sentence-
final hold also functions more generally as a turn change signal; it can thus not be
considered an unambiguous question marker. In addition, she reports an increased use
of pointing to the addressee (indexadr) in the data; for the most part, this index occurs
sentence-finally, but it may also appear initially, in second position, and it may be
doubled, as in (1a). In this example, the final index is additionally marked by an
extended duration of 0.5 seconds (Mesch 2001, 148).

(1) a. indexadr interested fish reel-in indexadr-dur(0.5) [Tactile SSL]
‘Are you interested in going fishing?’
b. indexadr what plane what [Tactile ASL]
‘What kind of a plane was it?’

Other potential manual markers such as an interrogative (palm up) gesture or drawing
of a question mark after the utterance were uncommon in Mesch’s data. In contrast,
yes/no-questions are commonly ended with a general question sign in tactile NGT and
tactile ASL (Balder et al. 2000; Collins/Petronio 1998). Moreover, Collins and Petronio
report that in their data, many wh-questions also involve an initial index towards the
receiver. Note that in the tactile ASL example in (1b), the index is neither subject nor
object of the question (adapted from Collins/Petronio (1998, 30)); rather, it appears to
alert the receiver that a question is directed to him.
None of the above-mentioned studies considers negation in detail. While the nega-
tive polarity of an utterance is commonly signaled by a negative headshake alone in the
sign languages under investigation, it seems likely that in their tactile counterparts, the
use of manual negative signs is required (see Frankel (2002) for the use of tactually
accessible negation strategies in deafblind interpreting).
In a study on the use of pointing signs in re-told narratives of two users of tactile
ASL, Quinto-Pozos (2002) observes a striking lack of deictic pointing signs used for
referencing purposes, i.e. for establishing or indicating a pre-established arbitrary loca-
tion in signing space, which is linked to a non-present human, object, or locative refer-
ent. Both deafblind subjects only used pointing signs towards the recipient of the narra-
tive (2nd person singular). In order to indicate other animate or inanimate referents,
one subject made frequent use of fingerspelling while the other used nominal signs
(e.g. girl, mother) or a sign (glossed as she) which likely originated from Signed
English. Quinto-Pozos hypothesizes that the lack of pointing signs might be due to the
non-availability of eye gaze, which is known to function as an important referencing
device in visual ASL. The absence of eye gaze in tactile ASL “presumably influences
the forms that referencing strategies take in that modality” (Quinto-Pozos 2002, 460).
Also, at least in the narratives, deictic points towards third person characters have the
potential to be ambiguous. Quinto-Pozos points out that deafblind subjects probably
use pointing signs more frequently when referring to the location of people or objects
in the immediate environment.

4.2.4. Discourse

As far as discourse organization is concerned, most of the available studies report that
tactile sign languages employ manual markers for back-channeling and turn-taking
instead of non-manual signals such as head nods and eye gaze (Baker 1977). Without
going into much detail, manual feedback markers include signs like oh-i-see (nodding
d-hand), different types of finger taps that convey meanings such as “I understand”
or “I agree”, squeezes of the signer’s hand, and repetition of signs by the receiver
(Collins/Petronio 1998; Mesch 2001). Turn-taking signals on the side of the signer in-
clude a decrease in signing speed and lowering of the hands (see Mesch (2001, 82 ff.)
for a distinction of different conversation levels in tactile SSL). Conversely, if the re-
ceiver wants to take over the turn, he may raise his hands, lean forward, and/or pull
the passive hand of the signer slightly (Balder et al. 2000; Schwartz 2009).
In addition, deafblind people who interact on a regular basis may agree on certain
“code signs” which facilitate communication. A code sign may signal, for instance,
that someone is temporarily leaving the room or it may indicate an emergency. For
tactile NGT, Balder et al. (2000) mention the possibility of introducing a sentence by
the signs tease or haha to inform the receiver that the following statement is not
meant seriously, that is, to mark the pragmatic status of the utterance.

4.2.5. Summary

Taken together, the accommodations sketched above allow experienced deafblind sign-
ers to converse fluently in a tactile sign language. Thanks to the establishment of na-
tional associations for the deafblind, contact between deafblind people is increasing,
possibly leading to the emergence of a Deafblind culture, distinct from, but embedded
within, Deaf culture (MacDonald 1994). It is to be expected that an increase in commu-
nicative interaction will lead to further adaptations and refinements of the source sign
language to meet the specific needs of deafblind users.

5. Secondary sign languages


In contrast to the sign languages discussed in the previous sections, secondary sign
languages (sometimes also referred to as ‘alternate sign languages’) do not result from
the specific communicative needs of deaf or deafblind people. Rather, they are devel-
oped in hearing societies in which they are used as a substitute for spoken language in
certain situations. Amongst the motivations for the development of a secondary sign
language are religious customs and the need for a mode of communication in contact
situations. Generally, secondary sign languages are not full-fledged natural sign lan-
guages but rather gestural communication systems, or ‘kinesic codes’ (Kendon 2004),
with restricted uses and varying degrees of elaboration. This crucial difference notwith-
standing, the term ‘sign language’ will be used throughout this section. Four types of
secondary sign languages will be considered in the following subsections: Sawmill Sign
Language, monastic sign languages, Aboriginal sign languages of Australia, and Plains
Indian Sign Language. In all subsections, an attempt will be made to provide informa-
tion about the origin and use of the respective sign language, its users, and selected
aspects of its structure.
It should be pointed out at the outset, however, that the four sign languages ad-
dressed in this section are highly diverse from a linguistic and sociolinguistic point of
view ⫺ possibly too diverse to justify subsuming them under a single label. I will get
back to this issue in sections 5.4 and 5.5.

5.1. Sawmill Sign Language

In section 3.1, I pointed out that simple gestural communication systems are sometimes
used in settings that preclude oral communication (e.g. hunting, diving). Occasionally,
such gestural codes may develop into more complex systems (see Figure 23.1). In this
section, I will discuss a sign language which emerged in a sawmill, that is, in an ex-
tremely noisy working environment in which a smooth coordination of work tasks
is required.

5.1.1. On the origin and use of Sawmill Sign Language

According to Johnson (1977), a sawmill sign language ⫺ he also uses the term ‘indus-
trial sign-language argot’ ⫺ has been used widely in the northwestern United States
and western Canada. The best-documented case is a language of manual gestures spon-
taneously created by sawmill workers in British Columbia (Canada) (Meissner/Philpott
1975a,b). For one of the British Columbia mills, Meissner and Philpott describe a typi-
cal situation in which the sign language is used: the communicative interaction between
three workers at the head saw (see Figure 23.5, which is part of a figure provided by
Meissner/Philpott (1975a, 294)).

Fig. 23.5: Layout of a section of a British Columbia sawmill: the head saw, where slabs of wood are cut off the log (re-drawn from a sketch provided by Meissner/Philpott (1975a, 294)). Copyright for original sketch © 1975 by Gallaudet University Press. Reprinted with permission.

The head sawyer (➀ in Figure 23.5) controls the plac-
ing of the log onto the carriage while the tail sawyer (➂) guides the cants cut from the
log as they drop on the conveyor belt. Both men face the setter (➁), who sits in a
moving carriage above their heads, but they cannot see each other. The setter, who
has an unobstructed view of the mill, controls the position of the log and co-operates
with the head sawyer in placing the log. While the mill is running, verbal communica-
tion among the workers is virtually impossible due to the immense noise. Instead, a
system of manual signs is used. Meissner and Philpott (1975a, 292) report that they
“were struck by its ingenuity and elegance, and the opportunity for expression and
innovation which the language offered under these most unlikely circumstances”.
For the most part, signs are used for technical purposes, in particular, to make the
rapid coordination of tasks possible. In one case, the head sawyer signed to the setter
index1 push-button wrong tell lever-man (‘I pushed the wrong button. Tell the
leverman!’) and, within seconds, the setter passed the message on to the leverman. In
another case, one of the workers signed time change saw-blade (‘It’s time to change
the blade’). Interestingly, however, it turned out that use of signs was not confined to
the transmission of technical information. Rather, the workers also regularly engaged
in personal conversations. The tail sawyer, for instance, would start with a gestural
remark to the setter, which the setter, after moving his carriage, would pass on to the
head sawyer, who in turn would make a contribution. Most of the observed personal
exchanges involved terse joking (2a) ⫺ “all made with the friendliest of intentions”
(Meissner/Philpott 1975a, 298) ⫺ or centered on topics such as cars, women (2b), and
sports events (2c).

(2) a. index2 crazy old farmer [Sawmill SL]
‘You crazy old farmer.’
b. index1 hear index2 woman knock^up
‘I hear your wife is pregnant.’
c. how football go
‘How’s the football game going?’

When comparing sign use in five mills, Meissner and Philpott (1975a) observe that a
reduction of workers due to increased automation leads to a decline in the rate of
manual communication. They speculate that further automation will probably result in
the death of the sign language. It thus seems likely that at present (i.e. 37 years later),
Sawmill Sign Language is no longer in use.
Johnson (1977) reports a single case of a millworker ⫺ in Oregon, not in British
Columbia ⫺ who, after becoming deaf, used sign language to communicate with his
wife and son. Johnson claims that this particular family sign language is an extension
of the sawmill sign language used in southeast Oregon. Based on a lexical comparison,
he concludes that this sign language is closely related to the Sawmill Sign Language
described by Meissner and Philpott.

5.1.2. Lexicon and structure of Sawmill Sign Language

Based on direct observation and consultation with informants, Meissner and Philpott
(1975b) compiled a dictionary of 133 signs, 16 of which are number signs and eight of
which are specialized technical signs (e.g. log-not-tight-against-blocks). Some number signs
may also refer to individuals; two, for instance, refers to the engineer and five to the
foreman, corresponding to the number of blows on the steam whistle used as call
signals. Not surprisingly, most of the signs are iconically motivated. The signs woman
and man, for example, are based on characteristic physical properties in that they refer
to breast and moustache, respectively. Other signs depict an action or movement, e.g.
turning a steering wheel for car and milking a cow for farmer (2a). Interestingly,
pointing to the teeth signifies saw-blade. Meissner and Philpott also describe “audio-
mimic” signs in which the form of the sign is motivated by phonological similarity of
the corresponding English words: grasping the biceps for week (week ⫺ weak), grasp-
ing the ear lobe for year (ear ⫺ year), and use of the sign two in the compound
two^day (‘Tuesday’; cf. the use of two (for ‘to’) and four (‘for’) described in the
next section).
The authors found various instances in which two signs are combined in a com-
pound, such as woman^brother (‘sister’), fish^day (‘Friday’), and knock^up (‘preg-
nant’, cf. (2b)). At least for the first of these, the authors explicitly mention that the
order of signs cannot be reversed. Also note that the first two examples are not loan
translations from English.
Pointing is used frequently for locations (e.g. over-there) and people; lip and face
movements (including mouthings) may help in clarifying meanings. In order to disam-
biguate a name sign that could refer to several people, thumb pointing can be used.
As for syntactic structure, the examples in (2) suggest that the word order of Sawmill
Sign Language mirrors that of English. However, just as in many other sign languages,
a copula does not exist. Depending on the distance between interlocutors, interroga-
tives are either introduced by a non-manual marker (raised eyebrows or backward jerk
of head) or by the manual marker question, which is identical to the sign how (2c)
and is articulated with a fist raised above shoulder height, back of the hand facing
outward. Meissner and Philpott do not mention the existence of grammatical non-
manual markers that accompany strings of signs, but they do point out that mouthing
of a word may make a general reference specific. In conclusion, it appears that gener-
ally, “the sawmill sign language is little constrained by rules and open to constant
innovation” (Meissner/Philpott 1975a, 300).

5.2. Monastic sign languages

While noise was the motivation for development of the sawmill sign language discussed
in the previous section, in this section, the relevant factor is silence. Silence plays a
significant and indispensable role in monastic life. It is seen as a prerequisite to a life
without sin. “The usefulness of silence is supremely necessary in every religious insti-
tute; in fact, unless it is properly observed, we cannot speak of the religious life at all,
for there can be none” (Wolter 1962; cited in Barakat 1975, 78). Hence, basically all
Christian monastic orders impose a law of silence on their members. However, only in
a few exceptional cases is this law of silence total. For the most part, it applies only
to certain locations in the cloister (e.g. the chapel and the dormitory) and to certain
times of the day (e.g. during reading hours and meals).

5.2.1. On the origin and use of monastic sign languages

According to van Rijnberk (1953), a prohibition against speaking was probably im-
posed for the first time in 328 by St. Pachomius in a convent in Egypt. In the sixth
century, St. Benedict of Nursia wrote “The Rule of Benedict”, an influential guide to
Western monasticism, in which he details spiritual and moral aspects of monastic life
as well as behavioral rules. Silence is a prominent feature in the Rule. In chapter VI
(“Of Silence”), for instance, we read: “Therefore, because of the importance of silence,
let permission to speak be seldom given to perfect disciples even for good and holy
and edifying discourse, for it is written: ‘In much talk thou shalt not escape sin’ (Prov
10:19)” (Benedict of Nursia 1949). St. Benedict also recommends the use of signs for
communication, if absolutely necessary (chapter XXXVIII: “If, however, anything
should be wanted, let it be asked for by means of a sign of any kind rather than a
sound”). Later, all of the religious orders that emerged from the order of St. Bene-
dict ⫺ the Cistercians, Trappists, and Cluniacs ⫺ maintained the prescription of silence.
A fixed system of signs came into use with the foundation of Cluny in the
year 909 (Bruce 2007). In 1068, a monk named Bernard de Cluny compiled a list of
signs, the Notitia Signorum. This list contains 296 signs, “a sizeable number which
seems to indicate that many were in use before they were written down” (Barakat
1975, 89). Given an increasing influence of the Cluniacs from the eleventh century on,
signs were adopted by other monasteries throughout Western Europe (e.g. Great Bri-
tain, Spain, and Portugal).
It is important to point out that monastic sign languages were by no means intended
to increase communication between monks in periods of silence. Rather, the limited
inventory of signs results from the desire to restrict communication. “The administra-
tion of the Order has rarely seen fit to increase the sign inventory for fear of intrusion
upon the traditional silence and meditative atmosphere in the monasteries” (Barakat
1975, 108) ⫺ one may therefore wonder why Barakat’s dictionary includes compound
signs like wild+time (‘party’). Signs may vary from one convent to another but gener-
ally, as remarked by Buyssens (1956, 30 f.), the variation is limited “so that a Trappist
from Belgium can make himself perfectly understood by a Trappist from China”.

5.2.2. Lexicon and structure of Cistercian Sign Language

The most thorough studies on monastic sign language to date are the ones by Barakat
(1975) and Bruce (2007). Barakat studied the sign language as used by the monks of
St. Joseph’s Abbey in Spencer, Massachusetts. His essay on the history, use, and struc-
ture of Cistercian Sign Language (CisSL) is supplemented by a 160-page dictionary,
which includes photographs of 518 basic signs and the manual alphabet as well as lists
describing derived (compound) signs, the number system, and signs for important
saints and members of St. Joseph’s Abbey. In contrast, Bruce (2007) explores the ra-
tionales for religious silence and the development and transmission of manual forms
of communication. His study contains some information on the Cluniac sign lexicon
and the visual motivation of signs, but no further linguistic description of the language.
In the following, I will therefore focus for the most part on the information provided
by Barakat (but also see Stokoe (1978)).
Many of the signs that are used reflect in some way the religious and occupational
aspects of the daily lives of the brothers. Barakat distinguishes five different types of
signs. First, there are the pantomimic signs. These are concrete signs which are easily
understood because they either manually describe an object or reproduce actual body
movements that are associated with the action the sign refers to. Signs like book and
cross belong to the former group while signs like eat and sleep are of the latter type.
Not surprisingly, these signs are very similar or identical to signs described for natural
sign languages. Second, the group of pure signs contains signs that bear no relation
to pantomimic action or speech. These signs are arbitrary and are therefore considered
“true substitutes for speech, […] an attempt to develop a sign language on a more
abstract and efficient level” (Barakat 1975, 103). Examples are god (two A-hands
contact each other to form a triangle), day (@-hand contacts cheek), and yellow (R-
hand draws a line from between eyebrows to tip of nose). Group three comprises what
Barakat refers to as qualitative signs. Here, the relation between a sign and its meaning
is associative, “roughly comparable to metaphor or connotation in spoken language”
(p. 104). Most of the signs in this group are compounds. Geographical notions, for
instance, generally include the sign courtyard plus modifier(s), as illustrated by the
examples in (3).

(3) a. drink + T + courtyard (‘England’) [CisSL]
b. red + courtyard (‘Russia’)
c. secular + courtyard + shoot + president + K (‘Dallas, TX’)

The examples also illustrate that use is made of handshapes from the manual alphabet:
the ‘T’ in (3a) representing ‘tea’, the ‘K’ in (3c) as a stand-in for ‘Kennedy’ (note that
this manual alphabet is different from the one used in ASL). Other illustrative exam-
ples of qualitative signs are mass + table (‘altar’), red + metal (‘copper’), and black +
water (‘coffee’).
The last two groups of signs are interesting because they include complex signs that
are partially or completely dependent upon speech by exploiting homonymy (e.g.
knee ⫺ ney, see below) as well as fingerspelling. Most of these signs are invented to
fill gaps in the vocabulary. Barakat distinguishes between signs partially dependent on
speech and speech signs, but the line between the two groups appears to be somewhat
blurry. Clear examples of the former type are combinations that reflect derivational
processes such as sing + R (‘singer’) and shine + knee (‘shiney’) ⫺ this is reminiscent
of the phenomenon that Meissner and Philpott refer to as ‘audiomimic’ signs. In the
latter group, we find combinations such as sin + sin + A + T (‘Cincinnati, Ohio’) and
day + V (‘David’).
Stokoe (1978) compares the lexicons of CisSL and ASL and finds that only one out
of seven CisSL signs (14 %) resembles the corresponding ASL sign. It seems likely
that most of these signs are iconic, that is, belong to the group of pantomimic signs.
Stokoe himself points out that in many cases of resemblance, the signs may be related
to ‘emblems’ commonly used in American culture (e.g. drive, telephone). Based on
this lexical comparison, he concludes that CisSL and ASL are unrelated and have not
influenced each other.
Turning to morphology, there seems to be no evidence for morphological structure
beyond the process referred to as compounding above and the derivational processes
that are based on spoken language. But even in CisSL compounds, signs are merely
strung together and there is no evidence for the phonological reduction or assimilation
processes that are characteristic of ASL compounds (Klima/Bellugi 1979; see chapter 5,
Word Classes and Word Formation, for discussion). Thus, the CisSL combination hard
+ water can be interpreted as ‘ice’ but also as ‘hard water’. In contrast, a genuine ASL
compound like soft^bed can only mean ‘pillow’ but not ‘soft bed’. Barakat distin-
guishes between simple derived signs, which consist of a maximum of three signs, and
compound signs, which combine more than three signs. Compound signs may be of
considerable complexity, as shown by the examples in (4). Clearly, expressing that
Christ met Judas in Gethsemane would be a cumbersome task.

(4) a. vegetable + courtyard + cross + god + pray + all + time [CisSL]
‘Gethsemane’ (= ‘the garden (vegetable + courtyard) where
Christ (cross + god) prayed for a long time’)
b. secular + take + three + O + white + money + kill + cross + god
‘Judas’ (= ‘man who took thirty pieces of silver (white + money)
that killed Christ (cross + god)’)

While simple signs generally have a fixed constituent order, the realization of com-
pound signs may “vary considerably from brother to brother because of what they
associate with the places or events” (Barakat 1975, 114 f.).
With respect to syntactic structure, Barakat (1975, 119) points out that, for the most
part, the way signs are combined into meaningful utterances “is dependent upon the
spoken language of the monks and the monastery in which they live”. Hence, in CisSL
of St. Joseph’s Abbey, subject-verb-complement appears to be the most basic pattern.
Index finger pointing may serve the function of demonstrative and personal pronouns,
but only when the person or object referred to is in close proximity. Occasionally,
fingerspelled ‘B’ and ‘R’ are used as the singular and plural copula, respectively.
Negation is expressed by the manual sign no, which occupies a pre-verbal position,
just as it does in English (e.g. brother no eat).
CisSL does not have a dedicated interrogative form. Barakat observes that yes/no-
questions are preceded or followed by either a questioning facial expression or a ques-
tion mark drawn in the air with the index finger. Also, there is only one question word
that finds use in wh-questions; Barakat glosses this element as what and notes that it
may combine with other elements to express specific meanings, e.g. what + time
(‘when’) or what + religious (‘who’; literally ‘what monk’). Such simple or complex
question signs always appear sentence-initially. From his description, we may infer that
a questioning look is not observed in wh-questions.
Finally, the expression of complex sentences including dependent clauses appears
to be difficult in CisSL. “The addition of such clauses is one source of garbling in the
language and most, if not all, the monks interviewed had some trouble with them”
(Barakat 1975, 133). The complex sentence in (5a) is interesting in a couple of respects:
first, the sign all is used as a plural marker; second, rule expresses the meaning of
how, while the connective but is realized by the combination all + same; third, the
sign two is used as infinitival to; and finally, the plural pronoun we is a combination
of two indexical signs (Barakat 1975, 134).

(5) a. all monk know rule two give vegetable seed [CisSL]
all same ix2 ix1 not know rule
‘The monks know how to plant vegetables but we don’t.’
b. wood ix2 give ix1 indulgence two go two work
‘Can I go to work?’

Modal verbs are generally expressed by circumlocutions. Example (5b) shows that can
is paraphrased as ‘Would you give me permission’, the sign wood being used for the
English homonym would. As a sort of summary, I present part of The Lord’s Prayer in
(6). Note again the use of the sign courtyard, of fingerspelled letters, of concatenated
pronouns, and of the combination of four + give to express forgive.

(6) a. ix2 ix1 father stay god courtyard blessed B ix2 name [CisSL]
‘Our Father, who art in Heaven, hallowed be thy name;’
b. ix2 king courtyard come ix2 W B arrange
‘thy kingdom come, thy will be done,’
c. this dirt courtyard same god courtyard
‘on earth as it is in Heaven.’
d. give ix2 ix1 this day ix2 ix1 day bread
‘Give us this day our daily bread,’
e. four give ix2 ix1 sin same ix2 ix1 four give sin arrange fault
‘and forgive us our trespasses as we forgive those who trespass against us.’

Some of the solutions the Cistercians came up with appear rather ingenious. Still, it is
clear that the structure is comparatively simple and that there is a strong influence from
the surrounding spoken language. Barakat stresses the fact that CisSL has traditionally
been intended only for the exchange of brief, silent messages, and that, due to its
“many defects”, it can never be an effective means for communicating complex messa-
ges. He concludes that “[a]lthough this sign language, as others, is lacking in many of
the grammatical elements necessary for expressing the nuances of thought, it does
function very effectively within the context of the monastic life” (Barakat 1975, 144).

5.3. Aboriginal sign languages

The use of complex gestural or sign systems by Aborigines has been reported for many
different parts of Australia since the late 19th century. Kendon (1989, 32) provides a
map indicating areas where sign language has been or still is used; for the sign lan-
guages still in use, symbols on the map also provide information on the frequency of
use and the complexity of the system. The symbols suggest that the most complex
systems are found in the North Central Desert area and on Cape York. Kendon himself
conducted his research in the former area, with particular attention to the sign lan-
guages of the Warlpiri, Warumungu, and Warlmanpa (Kendon 1984, 1988, 1989). In
his earlier studies, Kendon speaks about Warlpiri Sign Language (WSL ⫺ since all of
the data were collected in the Warlpiri community of Yuendumu), but in his 1989
book, he sometimes uses the cover term North Central Desert Sign Languages
(NCDSLs). Another study (Cooke/Adone 1994) focuses on a sign language used at
Galiwin’ku and other communities in Northeast Arnhemland, which is referred to as
Yolngu Sign Language (YSL). According to the authors, YSL bears no relation to the
sign languages used in Central Australia (beyond some shared signs for flora, fauna,
and weapons; Dany Adone, personal communication).

5.3.1. On the origin and use of Aboriginal sign languages

Kendon (1984) acknowledges that NCDSLs may, in the first instance, have arisen for
use during hunting, as is also suggested by Divale and Zipin (1977, 186), who point
out that the coordination of activities during hunting “requires some system of commu-
nication, especially if the plans of the hunters are to be flexible enough to allow them
to adapt to changing conditions of the chase” ⫺ clearly a context that would favor the
development of a silent communication system that can be used over larger distances.
Hunting, however, is certainly not the most important motivation for sign language
use. Rather, NCDSLs are used most extensively in circumstances in which speech is
avoided for reasons of social ritual (also see Meggitt 1954). As for the North Central
Desert area, Kendon (1989) identifies two important ritual contexts for sign language
use: (i) male initiation and, more importantly, (ii) mourning.
At about 13 years of age, a boy is taken into seclusion by his sister’s husband and
an older brother. After some initial ceremonies, he is taken on a journey, which may
last two or three months, during which he learns about the topography of the region
and acquires hunting skills. Following the journey, the boy undergoes circumcision and
after he has been circumcised, he goes into seclusion again for another two to three
months. During the first period of seclusion until after the circumcision, the boy is
enjoined to remain silent. As pointed out by Meggitt (1975, 4), “novices during initia-
tion ceremonies are ritually dead” and since dead people cannot speak, they should
communicate only in signs. The extent to which a boy makes use of signs during that
period, however, appears to vary. Finally, after circumcision, the boy is released from
all communicative restriction in a special ceremony (Kendon 1989, 85 f.).
More important as a motivation for sign use, however, are the ceremonies connected
with death and burial. In all communities studied by Kendon, speech taboos are ob-
served during periods of mourning following the death of a group member. The taboo,
however, applies only to women ⫺ in some communities only to the widow, in others
also to other female relatives of the deceased. Duration of the speech taboo varies
depending on factors such as “closeness of the relative to the deceased […] and the
extent to which the death was expected” (Kendon 1989, 88) and may last up to one
year (for widows). As in the case of male initiation, the taboo is lifted during a special
‘mouth opening’ ceremony.
Findings reported in Kendon (1984) suggest that WSL is not common knowledge
for all members of the Yuendumu community. Rather, use of WSL was mostly confined
to middle-aged and older women. This is probably due to the fact that the most impor-
tant context for sign use, mourning, is restricted to women, as is also supported by the
observation that women who experienced bereavement showed better knowledge of
WSL. Meggitt (1954) also notes that women generally know and use more signs than
men do, but he adds as an additional factor that the use of signs allows women to
gossip about topics (such as actual and probable love affairs) that are not meant for
the husband’s ears.
As for YSL, Cooke and Adone (1994, 3) point out that the language is used during
hunting and in ceremonial contexts “where proximity to highly sacred objects demands
quietness as a form of respect”; however, they do not mention mourning as a motiva-
tion for sign language use. Interestingly, they further suggest that in the past, YSL
may have served as a lingua franca in extended family groups in which, due to compul-
sory exogamy, several spoken Yolngu languages were used (also see Warner 1937).
Moreover, they point out that YSL is also used as a primary language by five deaf
people (three of them children at the time) ⫺ a communicative function not mentioned
by Kendon. Actually, the data reported in Cooke and Adone (1994) come from a
conversation between a hearing and a deaf man (also see Kwek (1991) for use of sign
language by and in communication with a deaf girl in Punmu, an Aboriginal settlement
in the Western Desert region in Western Australia).

5.3.2. Lexicon and structure of Aboriginal sign languages

According to Kendon (1984), WSL has a large vocabulary. He recorded 1,200 signs
and points out that the form of the majority of signs is derived from depictions of some
aspect of their meaning, that is, they are iconic (see chapter 18 for discussion). Often,
however, the original iconicity is weakened or lost (as also observed by Frishberg
(1975) for ASL). Kendon (1989, 161) provides the example of the sign for ‘mother’, in
which the fingertips of a spread hand tap the center of the chest twice. It may be
tempting to analyze this form as making reference to the mother’s breasts, but clearly
this form “is not in any sense an adequate depiction of a mother’s breast”. Kendon
describes various strategies for sign creation, such as presenting (e.g. rdaka ‘hand’:
one hand moves toward the signer while in contact with the other hand), pointing (e.g.
langa ‘ear’: tip of @-hand touches ear), and characterizing (e.g. ngaya ‘cat’: ?-hand
represents the arrangement of a cat’s paw pads). Often, characterizing signs cannot be
understood without knowledge about certain customs. In the sign for ‘fully initiated
man’, for instance, the u-hand is moved rapidly across the upper chest representing
the horizontal raised scars that are typical for fully initiated men (Kendon 1989, 164).
Interestingly, there are also signs that are motivated by phonetic characteristics of
the spoken language. For instance, the Warlpiri word jija may mean ‘shoulder’ and
‘medical sister’ (the latter resulting from an assimilation of the English word sister).
Given this homophony, the WSL sign for ‘shoulder’ (tapping the ipsilateral shoulder
with the middle finger) is also used for ‘medical sister’ (Kendon 1989, 195). Similarly,
in YSL, the same sign (i.e. touching the bent elbow) is used for ‘elbow’ and ‘bay’
because in Djambarrpuyngu, one of the dominant Yolngu languages, the term likan
has both these meanings (Cooke/Adone 1994).
Compounds are frequent in NCDSLs, but according to Kendon, almost all of them
are loans from spoken languages. That is, in almost all cases where a meaning is ex-
pressed by a compound sign, the same compound structure is also found in the sur-
rounding spoken language. Fusion (i.e. reduction and/or assimilation) of the parts is
only occasionally observed; usually the parts retain their phonological identity. For
instance, in Anmatyerre, we find the compound kwatyepwerre (‘lightning’), which is
composed of kwatye (‘water’) and pwerre (‘tail’); in the sign language, the same mean-
ing is also expressed by the signs for ‘water’ and ‘tail’ (Kendon 1989, 207). An example
of a native compound is the sign for ‘heron’, which combines the signs for ‘neck’ (a
pointing sign) and ‘tall’ (upward movement of @), yielding a descriptive compound
that can be glossed as ‘neck long’ (the corresponding Warlpiri word kalwa is mono-
morphemic). See Kendon (1989, 212⫺217) for discussion of Warlpiri preverb construc-
tions, such as jaala ya-ni (‘go back and forth’) that are rendered as two-part signs
in WSL.
Reduplication is a common morphological process in Australian languages, and it
is also attested in the sign languages examined. As for nominal plurals, Kendon (1989,
202 f.) finds that nouns that are pluralized by means of reduplication in Warlpiri (espe-
cially nouns referring to humans, such as kurdu ‘child’) are also reduplicated in WSL
(e.g. kurdu++), while nouns that are pluralized by the addition of a suffix (-panu/
-patu) are pluralized in WSL by the addition of a quantity sign (<-hand, palm toward
signer, fingers moving back and forth). Cooke and Adone (1994) further report that
reduplication is also used to mark iterativity on process verbs in YSL ⫺ again similar
to what is observed in the surrounding spoken language (for pluralization, see also
Pfau/Steinbach (2006) and chapter 6). It has to be pointed out, however, that the use
of reduplication is not necessarily proof of borrowing, because reduplication appears
to be a common feature in all sign languages studied to date ⫺ irrespective of the
presence of this feature in the surrounding spoken language.
For WSL, Kendon (1989, 243 f.) notes a distinction between directional and plain
verbs (where the category ‘directional’ seems to include agreeing and spatial verbs;
Padden 1988). Among the verbs that can agree with spatial locations by means of
movement are wapami (‘move’), yani (‘go’), yinyi (‘give’), and kijirni (‘throw’); other
verbs, such as wangkami (‘talk’) and ngarni (‘ingest’), only change their orientation.
Interestingly, in the context of plain verbs, a dedicated ‘direction marker’, a directional
pointing sign, can be added. This sign is reminiscent of indexical agreement auxiliaries
described for various sign languages (see chapter 10 for discussion).
Kendon (1989) also discusses a number of ‘suffix markers’, that is, signs which he
takes to be the sign equivalent of nominal suffixes ⫺ case markers and derivational
suffixes ⫺ of the surrounding spoken languages. Amongst these WSL markers are:
(i) ‘u-hand pronation’, which may function as a possessive or associative marker; the
latter is glossed as kurlu (based on Warlpiri) in (7a) (Kendon 1989, 230); (ii) ‘<-hand
pronation’, which expresses different types of negative meanings; and (iii) a two-
handed sign expressing a meaning which in Warlpiri would be realized by the (causal)
ablative (abl) suffix -ngurlu: the base hand has a u-handshape, palm oriented upwards,
the dominant hand forms a fist, which is in contact with the palm of the base hand and
is moved toward the signer by flexion of the wrist. In (7b) this sign is used in order to
express that the woman is the cause of the man’s dissatisfaction (Kendon 1989, 233).

(7) a. wati maliki kurlu [WSL]
man dog associative
‘the man with the dog’
b. wurna karnta ngurlu rdinyirlpa jinta parnkamirra
travel woman abl dissatisfied the.one runs.thither
‘He travelled thither (to the creek) dissatisfied because of the woman.’

Still, most of the numerous Warlpiri suffixes have no sign equivalent. Kendon (1989,
237) stresses that the WSL inventory of suffix markers includes only suffixes “that
make differences that a recipient could not be left to settle on the basis of context”.
In many Australian spoken languages, including Warlpiri (Hale 1983), a rich case
marking system goes hand in hand with a highly flexible word order. Given that most
of these case-markers are absent in NCDSLs, one might expect that word order in the
sign languages would be more constrained. However, based on a comparison of signed
and spoken renditions of the same stories, Kendon (1988) concludes that there are no
significant differences between the spoken and the signed versions. Rather, he finds a
tendency for OV order in both the spoken and signed renditions, as illustrated in
both clauses in (8a). Subjects are often omitted, but when they occur, they tend to be
placed first.
The examples in (8) also illustrate parallels in terms of a match of signs to words
(from Kendon (1988, 245); (8a) slightly adapted). The sign mani (‘get’) in the first
clause in (8a) is a plain verb. In the second clause, the same sign appears in a construc-
tion with a preverb, which roughly contributes the meaning ‘around neck’, thus yielding
the meaning ‘carry on neck’. This is similar to the Warlpiri expression (preverb+root)
nyurdi ma-ni meaning ‘to carry meat round the neck’ (8b). Kendon stresses that mani
in the second clause in (8a) does not contribute to the semantics; the meaning might
as well have been expressed by nyurdi alone. Still, the signer employs the complex
expression in order to match the signed expression to the structure of the spoken
language. Kendon further notes that both examples contain a ‘directional clitic’: both
the clitic -rra in (8b), which attaches to the preverb, and the pointing sign in (8a),
which combines with the verb, indicate the direction in which the man moved.

(8) a. kuna mani / kuyu nyurdi mani ^point3 [WSL]
intestines get meat carry on neck direction
‘He gutted (the kangaroo). He carried the meat round his neck thither.’
b. Kuna-rra-lpa-rla ma-nu [Warlpiri]
intestines-thither-impf-3p.dat get-past
‘He gutted (the kangaroo) for her over there.’
Nyurdi-rra-lpa ma-nu
preverb-thither-impf-3p.dat cause-past
‘He carried it thither around his neck.’

In their corpus of YSL data, Cooke and Adone (1994) also find frequent occurrence
of SV and OV order (only two cases of SOV) and conclude that the language tends to
be verb-final.
Kendon does not discuss properties of interrogatives but notes that NCDSLs are
“overwhelmingly systems of manual expression” in that “facial action does not appear
to be formalized as a grammatical device” (Kendon 1989, 155). Thus, NCDSLs appear
to be markedly different from ‘primary’ sign languages in this respect. He attributes
this to the fact that these secondary sign languages have a close association with the
spoken languages of their users. Cooke and Adone observe that YSL has only one
general question sign (a twist of the hand), which can be disambiguated by a mouthing;
this sign tends to appear in clause-final position (e.g. you yesterday where ‘Where
were you yesterday?’), as is also commonly observed in other sign languages (see chap-
ter 14). Crucially, wh-words do not usually appear in sentence-final position in the
ambient spoken languages. Dany Adone (personal communication) confirms that non-
manuals are not systematically used to mark interrogatives (but see Adone (2002) for
the use of non-manuals, e.g. lip pointing, in other contexts).
As for NCDSL, Kendon (1988, 240) concludes that “these systems are to a large
extent structured by the spoken languages of their users ⫺ so much, indeed, that we
may be justified in regarding them as in some degree analogous to writing systems”.
Despite the existence of loan compounds and reduplication, Cooke and Adone (1994)
reach a different conclusion for YSL. They argue that with respect to morphology
and syntax, YSL displays little relationship with the Yolngu languages spoken in
the region.

5.4. Plains Indian Sign Language

Mallery (2001 [1881]) was probably the first to provide a linguistic description of a sign
language used among North American Indians. As pointed out by Davis (2007, 85),
“[t]he North American continent was once an area of extreme linguistic and cultural
diversity, with hundreds of distinct and mutually unintelligible languages spoken by the
native populations”. It is likely that the Plains Indian Sign Language (PISL) emerged in
order to facilitate communication between members of different tribes. In fact, it ap-
pears that during the 19th and the early part of the 20th century, use of PISL was so
common that it can be considered a lingua franca (Taylor 1975; Davis 2005, 2006, 2010).

5.4.1. On the origin and use of Plains Indian Sign Language

PISL (also referred to as North American Indian Sign Language or just “hand talk”)
was used throughout the Great Plains, an area centrally located on the North American
continent and covering approximately one million square miles, as well as in the neigh-
boring Northern Plateau area. A map provided by Taylor (1975, 227) indicates that
the sign language was used from Saskatchewan (Canada) in the North (e.g. Plains Cree
tribe) to Texas in the South (e.g. Comanche) and from Montana in the West (e.g. Nez
Perce) to Missouri in the East (e.g. Osage) (also see Davis 2010, 10). In both east-
west and north-south directions, the most widely separated points are at a distance of
1,000 miles from each other.
The earliest descriptions of Indians signing were written by Álvar Núñez Cabeza
de Vaca in 1542 (Bonvillian/Ingram/McCleary 2009). The origins of PISL remain uncer-
tain but it seems likely “that signed communication was already used among indige-
nous peoples across the North American continent prior to European contact” (Davis
2010, 19; also see Wurtzburg/Campbell 1995). Various scholars suggest that the sign
language originated in the Gulf Coast region of Western Louisiana and Texas (God-
dard 1979; Wurtzburg/Campbell 1995), from where it spread northward, trade being
“the principal agent for the diffusion of the sign language throughout the Plains during
the 19th century” (Taylor 1975, 225).
However, not all Plains Indian tribes used the sign language. Referring to earlier
studies, Taylor (1975) reports that the largest number of users were located in the
Central Plains and that the Crows (Montana), Cheyennes (Montana and South Da-
kota), and Blackfeet (Alberta and Saskatchewan) were regarded as the most proficient
sign users (see Davis (2010, 7 f.) for an overview of tribes who used sign language,
together with sources of historical and current documentation). Apparently, adult men
have been the primary (or at least most visible) users of the sign language, probably
due to their more prominent role in public life. This, however, does not imply that
women were forbidden knowledge and use of the sign language. Mallery (2001 [1881],
391) remarks that Cheyenne, Kiowa, and Comanche women knew and practiced the
sign language and that, in fact, the Comanche women were “the peers of any sign
talkers”. He even reports the assertion that the signs used by males and females were
different, though mutually intelligible. It is important to point out that sign use was
not restricted to situations in which the interlocutors had no language in common but
was also observed between members of the same tribe. According to Taylor (1975,
229), “the sign language was, and is, regarded as an additional communications channel,
in no way subordinated to the vocal-auditory”. Within tribes, purposes of sign use include
public entertainment (e.g. storytelling, the Cheyenne “sign dance”), oratory, ritual prac-
tices, and activities, such as hunting, which require silence (Taylor 1975; Davis 2005).
While it seems clear that the primary function of PISL was that of an alternative to
spoken language, the signed language was also acquired natively and signed fluently
by both deaf and hearing members of native communities. Deaf tribal members played
a vital role in the development and transmission of the language ⫺ a fact that clearly
distinguishes PISL from other secondary sign languages. In fact, based on the acquisi-
tion pattern and the observation that PISL fulfills a wide variety of discourse functions,
Davis (2010, 180⫺182) concludes that classifying PISL as a secondary language is not
justified. This conclusion is further corroborated by the linguistic features of PISL
described in the next section.
According to some authors (Davis 2005, 2010; Farnell 1995), PISL is still being
learned today within some native groups and used in, for instance, traditional storytel-
ling and rituals. The number of (native) users, however, has decreased dramatically.
On the one hand, English has long taken over the role of a lingua franca amongst
hearing Indians; on the other hand, most of the deaf members of native groups now
attend schools for the deaf, where they learn ASL as a primary language (Davis 2010).
The exact number of remaining PISL users is unknown.

5.4.2. Lexicon and structure of Plains Indian Sign Language

Davis (2010, chapter 8) offers an extensive description of linguistic properties of PISL.
Here, I can only touch on some selected aspects of the lexicon and structure of the
language (see http://pislresearch.com/, developed and maintained by Jeffrey Davis, for
further information as well as film clips and illustrations documenting PISL). Mallery
(2001 [1881]) and Tomkins (1969 [1931]) provide dictionaries of PISL (see Davis (2007,
2010) for a lexical comparison of different varieties of PISL and of PISL and early
20th century ASL). What is remarkable about Mallery’s collection is that it also in-
cludes signs for different tribes (e.g. cheyenne) and proper names (e.g. spotted-tail,
a Dakota chief), a list of common phrases, and even stories and dialogues. For the sake
of illustration, consider the following excerpt of a conversation between Tendoy, chief
of the Shoshoni Indians (Idaho), and Huerito, an Apache chief of New Mexico, which
took place in Washington in April 1880 (adapted from Mallery (2001 [1881], 486⫺490)
to comply with the conventions used in this handbook). The two Indians did not speak
each other’s language and had never met before that occasion.

(9) H: q-form [PISL]
‘Who are you?’
T: shoshoni chief
‘I am the chief of the Shoshoni.’
H: winter how-many
‘How old are you?’
T: fifty six
‘Fifty-six (years).’
H: good(intensive), buffalo country index2
‘Very well. Are there any buffalo in your country?’
T: yes, buffalo black many […]
‘Yes, there are many black buffalos.’
T: night+ index2 go-to(south)3a your-country3a,
index1 go-to(north)3b my-country3b rain deep much
see-each-other no-more
‘In two days, you go to your country and I go to my country, where there is
a lot of snow, and we shall see each other no more.’

From this short excerpt, it is clear that, apart from content signs, the PISL lexicon
contains question words (including the general question sign q-form), pronouns, nu-
merals, quantifiers, and negation. Davis (2010, 144) adds that PISL makes use of (head-
initial) compounds such as white-man^soldier^chief (‘white officer’) and man^mar-
ry^no (‘bachelor’). Mallery’s extensive study includes a comparison of PISL signs to
gestures (e.g. Ancient Greek and Neapolitan gestures) and to the signs used by “deaf-
mutes” in the United States. Moreover, he examines the origin and iconic motivation
of signs based on pictographs and ethnologic facts (the sign for friend, for instance,
being derived from the meaning ‘we two smoke together’).
With respect to the grammatical structure of PISL, Mallery argues that “there is in
the gesture speech no organized sentence such as is integrated in the languages of
civilization” and that one “must not look for articles or particles or passive voice or
case […] or even what appears in those languages as a substantive or a verb. The sign
radicals, without being specifically any of our parts of speech, may be all of them in
turn” (Mallery 2001 [1881], 359). Also, he mentions the absence of a copula verb and
the lack of tense inflection. It is by now well-known, of course, that all of these charac-
teristics are also fairly common in typologically diverse spoken languages and natural
sign languages.
In this context, it is rather interesting that Mallery notes that signs tend to be ar-
ranged in a fixed order (e.g. the object preceding the verb) ⫺ an observation which
contradicts his above claim that there is “no organized sentence” (also see Kroeber
(1958) for sign order within compounds and utterances). He describes the example in
(10a) and points out that both the Indians and deaf signers would convey this utterance
in the same way (Mallery 2001 [1881], 361). Clearly, this is exactly how a similar struc-
ture would be realized in many sign languages, for which signs like done have been
analyzed as completive or perfective markers (see chapter 9 for discussion). Davis
(2010) concludes that PISL has basic SOV order, although other orders are attested
(including structures involving null arguments and topicalization); he provides the ex-
ample in (10b) (Davis 2010, 154; example slightly adapted). It is noteworthy that SOV
is also the most common word order in the ambient spoken languages (although it has
been claimed that some of these languages have flexible word order). Kroeber (1958)
thus concludes that PISL syntax is based directly on spoken language.

(10) a. sleep done, index1 river go-to [PISL]
‘When I have had a sleep, I will go to the river.’
b. index3(left) index3(right) weapons war-bonnet exchange
‘They (he and he) exchange weapons and war bonnets.’
c. sioux say [white-man^soldier^chief brave above-all
[sioux __ fight]]
‘The Sioux say (that) the officer is the bravest (that) the Sioux
have ever fought.’

As for syntactic structure, Davis (2010, 159) further argues that recursion is attested
in PISL and illustrates his claim with the example in (10c), which supposedly contains
a sentential complement and a relative clause (the underscore indicating the position
in which the head noun ‘officer’, which is the object of ‘fight’, is interpreted).
In addition, Mallery observes that “relations of ideas and objects are […] expressed
by placement. The sign talker is an artist, grouping persons and things so as to show
the relations between them, […] his scenes move and act, are localized and animated”
(p. 360). That is, signers make use of the signing space. From the examples he discusses,
it appears as if locations are not used for arbitrary reference, such as establishing loci
for non-present referents. Taylor (1975, 235) explicitly states that “when someone not
present is referred to, an appropriate noun must be used” (see Davis (2010, 151) for
discussion of PISL pronouns). Also, example (9) suggests that signers use an absolute
frame of reference. Note that the spatial verb go-to is signed once toward the north
(relative position of Idaho) and once toward the south (relative position of New Mex-
ico). Finally, in one of Mallery’s examples, we come across a possible instance of an
agreeing verb: apparently, the movement of talk (forward movement from the chin)
can be reversed to express the meaning ‘he talked to me’. Davis (2010, 148) provides
additional examples of spatial (locative) verbs (e.g. bring, come) and agreeing (indicat-
ing) verbs (e.g. see; also note the reciprocal form see-each-other in (9)).
Finally, from Mallery’s detailed description of dialogues, we can infer that non-
manual interrogative marking exists: in one example, the sequence hear index2 is
accompanied by “a look of inquiry” (p. 492) to express the meaning “Did you hear of
it?” In addition, a manual interrogative marker (viz. the sign q-form in (9)), which
precedes and follows the interrogative clause, is commonly used. Davis (2010, 164)
points out that, depending on the context, this marker can fulfill the function of a wh-
word, of a question particle (in yes/no-questions), or of a discourse opener.
Based on these characteristics, as well as the sociolinguistic properties described
above, it seems safe to conclude that PISL is more than just “gesture talk”. Rather,
it shows many of the properties characteristic of natural sign languages. It is multi-
generational, cross-cultural, non-emergent, highly conventionalized, and has a high sta-
tus (cf. the comparative chart provided by Davis (2010, 183), where PISL is compared
to Deaf community sign languages, Aboriginal sign languages, Nicaraguan Sign Lan-
guage, and homesign, among others). Davis thus concludes that the label ‘secondary
sign language’ is inappropriate because “a particular signed language (PISL in this
case) can potentially serve in both primary and alternate capacities” (Davis 2010, 186).
Consequently, PISL might be more similar to some village sign languages than to sec-
ondary types (Jeffrey Davis, personal communication).

5.5. Summary

In this section, I have provided sketches of the origin, use, and structure of four second-
ary sign languages. The discussion reveals that these sign languages developed for vari-
ous reasons (ritual/taboo, noise, as lingua franca); it further suggests that the four sign
languages are of varying grammatical complexity. Table 23.1 is an attempt to take stock
of some of the linguistic features of these secondary languages, based on the informa-
tion available in the respective sources. Three aspects of grammar are included in the
comparison ⫺ compounding, (spatial) agreement, and the realization of interrogatives.
In addition, the influence from the surrounding spoken language is evaluated.
We may conclude ⫺ albeit with some caution ⫺ that Sawmill Sign Language and
CisSL show the simplest grammatical structure as well as a strong influence from the
surrounding spoken language. CisSL, however, appears to have a richer lexicon and
more complex word formation strategies. Influence of the spoken language is also
strong in NCDSLs. Yet, this group of sign languages also displays some features that
are characteristic of natural sign languages. In addition, it is clear from Kendon’s de-
Tab. 23.1 Comparison of selected linguistic features of secondary sign languages

Sawmill SL
  compounding: ⫺ mostly E-b; ⫺ no PR/A
  (spatial) agreement: no
  interrogatives: ⫺ no sim. NMM a; ⫺ only one G-QS, sentence-initial
  influence from spoken language: strong (⫺ mouthing; ⫺ audiomimic signs;
  ⫺ compounds; ⫺ word order)

Cistercian SL
  compounding: ⫺ mostly E-b; ⫺ no PR/A; ⫺ may incl. MA
  (spatial) agreement: no
  interrogatives: ⫺ no sim. NMM a; ⫺ QM in air; ⫺ only one G-QS, sentence-initial
  influence from spoken language: strong (⫺ audiomimic signs; ⫺ MA in compounds;
  ⫺ MA for copula; ⫺ word order)

North Central Desert SLs b
  compounding: ⫺ mostly loans; ⫺ no PR/A
  (spatial) agreement: yes
  interrogatives: ⫺ no sim. NMM; ⫺ no information on question sign
  influence from spoken language: strong (⫺ mouthing; ⫺ compounds;
  ⫺ reduplication c; ⫺ suffix markers; ⫺ word order)

Plains Indian SL
  compounding: ⫺ few loans from spoken lang.; ⫺ no PR/A
  (spatial) agreement: yes
  interrogatives: ⫺ sim. NMM; ⫺ one G-QS, sentence-initial and/or -final;
  ⫺ additional QS?
  influence from spoken language: weak (⫺ few compounds; ⫺ word order (?))

a A non-manual marker may precede the interrogative clause.
b Only facts for NCDSLs are reported in the table. According to Cooke and Adone (1994), YSL
differs from NCDSLs in a number of respects (e.g. influence of surrounding spoken language).
c Reduplication is included here not because it is attested in both the spoken and the sign
languages, but because it only applies to nouns which are also pluralized by means of reduplica-
tion in the spoken language.
Abbreviations: E-b = English-based; G-QS = general question (wh-) sign; MA = manual alphabet;
NMM = non-manual markers; PR/A = phonological reduction and/or assimilation (characteristic
of sign language compounds); QM = question mark (preceding the question); QS = question sign;
sim. = simultaneous; SL = sign language.

In addition, it is clear from Kendon’s descriptions that NCDSLs allow for complex
communicative interaction (including storytelling). Finally, PISL exhibits most of the
linguistic features considered and shows little influence
from surrounding spoken languages ⫺ which is not surprising given that it was origi-
nally used as a lingua franca between speakers of different languages. Once again, I
want to stress that it is therefore highly problematic to classify PISL as a secondary
sign language.
As for the present-day use of these communicative systems, it seems likely that
Sawmill Sign Language is now extinct. Whether CisSL (or other monastic sign lan-
guages) is still used is unclear; however, given that monastic orders in which a law of
silence is imposed still exist, it is not improbable that the sign language is still in use.
At least PISL and different Aboriginal sign languages are still in use, but PISL has lost
its function as a lingua franca.

6. Conclusion
Manual communication systems exist in many different forms of varying complexity.
At one end of the continuum, we find gestural codes that are only used in highly
specific contexts such as certain professions (e.g. crane driving, aviation, auctions) or
situations (e.g. diving, hunting); these codes typically have a very limited lexicon and
lack syntactic structure. At the other end, natural sign languages are situated, which
are adequate for all communicative purposes and are characterized by rich lexicons
and complex grammatical structure on all levels of linguistic description.
Interestingly, more complex systems may evolve from simpler ones. It has been
suggested that such a development may have played a key role in the phylogeny of
language, when gestural protolanguage ⫺ presumably a conglomerate of iconic ges-
tures ⫺ evolved into ‘proto-sign’, which in turn may have been the basis for the evolu-
tion of spoken language (remember, however, that alternative evolutionary scenarios
exist). In any case, such a development would probably have taken centuries if not
millennia. However, other developments along a continuum of complexity, which took
much less time, are attested, such as, for instance, the development of Sawmill Sign
Language from an originally purely technical manual code and the emergence of Nica-
raguan Sign Language from homesign.
Finally, the discussion also revealed that considerable variation exists even within
the most complex group of manual communication systems, the natural sign languages.
On the one hand, this variation may result from differences in sociolinguistic setting
(e.g. village sign languages) and context of use (e.g. tactile sign languages). On the
other hand, the attested grammatical variation reflects typological patterns well known
from the study of spoken languages. Surely, as more (types of) sign languages
enter the stage of sign language linguistics, we will learn more about the potentials and
limits of human languages, as well as about their evolution.

Acknowledgements: I am particularly grateful to Dany Adone and Jeffrey Davis for
helpful comments on parts of this chapter. Moreover, I am indebted to Dany Adone,
Anastasia Bauer, and Dan Slobin for providing invaluable information. Work on this
chapter was supported by the German Science Foundation (DFG) in the framework
of the Lichtenberg-Kolleg at the Georg-August-University, Göttingen.

7. Literature
Adone, Dany
2002 On the Use of Non-manuals in an Alternate Sign Language. Manuscript, University
of Düsseldorf.
Aitken, Stuart
2000 Understanding Deafblindness. In: Aitken, Stuart/Buultjens, Marianna/Clark, Catherine/
Eyre, Jane T./Pease, Laura (eds.), Teaching Children Who Are Deafblind: Contact, Com-
munication and Learning. London: Fulton Publishers, 1⫺34.
Arbib, Michael A.
2005 From Monkey-like Action Recognition to Human Language: An Evolutionary Frame-
work for Neurolinguistics. In: Behavioral and Brain Sciences 28, 105⫺167.
Armstrong, David F./Wilcox, Sherman
2003 Origins of Sign Languages. In: Marschark, Marc/Spencer, Patricia E. (eds.), Oxford
Handbook of Deaf Studies, Language, and Education. Oxford: Oxford University Press,
305⫺318.
Armstrong, David F./Wilcox, Sherman
2007 The Gestural Origin of Language. Oxford: Oxford University Press.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81(2), 301⫺344.
Baker, Charlotte
1977 Regulators and Turn-taking in American Sign Language. In: Friedman, Lynn A. (ed.),
On the Other Hand: New Perspectives on American Sign Language. New York: Aca-
demic Press, 215⫺236.
Balder, Anneke/Bosman, Irma/Roets, Lieve/Schermer, Trude/Stiekema, Ton
2000 Over Doofblindheid. Communicatie en Omgang. Utrecht: HvU, Seminarium voor Or-
thopedagogiek.
Barakat, Robert A.
1975 Cistercian Sign Language. In: Umiker-Sebeok, Jean/Sebeok, Thomas A. (eds.) (1987),
Monastic Sign Languages. Berlin: Mouton de Gruyter, 67⫺322.
Benedict of Nursia
1949 The Holy Rule of St. Benedict (1949 Edition; Translated by Rev. Boniface Verheyen).
[Available from: http://www.e-benedictine.com/go/index.php?page=3; Accessed on 3
November 2011].
Bickerton, Derek
2003 Symbol and Structure: A Comprehensive Framework for Language Evolution. In:
Christiansen, Morten/Kirby, Simon (eds.), Language Evolution. Oxford: Oxford Uni-
versity Press, 77⫺94.
Bonvillian, John D./Ingram, Vicky L./McCleary, Brendan M.
2009 Observations on the Use of Manual Signs and Gestures in the Communicative Interac-
tions Between Native Americans and Spanish Explorers of North America: The Ac-
counts of Bernal Díaz del Castillo and Álvar Núñez Cabeza de Vaca. In: Sign Language
Studies 9(2), 132⫺165.
Bouchard, Denis
1997 Sign Languages and Language Universals: The Status of Order and Position in Gram-
mar. In: Sign Language Studies 91, 101⫺160.
Bruce, Scott G.
2007 Silence and Sign Language in Medieval Monasticism. The Cluniac Tradition c. 900⫺
1200. Cambridge: Cambridge University Press.
Buyssens, Eric
1956 Le Langage par Gestes chez les Moines. In: Umiker-Sebeok, Jean/Sebeok, Thomas A.
(eds.) (1987), Monastic Sign Languages. Berlin: Mouton de Gruyter, 29⫺37.
Cecchetto, Carlo/Checchetto, Alessandra/Geraci, Carlo/Guasti, Maria Teresa
2010 Making up for the Disappearance of Non-manual Marking in Tactile Sign Languages.
Poster Presented at 10th Conference on Theoretical Issues in Sign Language Research
(TISLR 10), West Lafayette, IN, October 2010.
Collins, Steven
2004 Adverbial Morphemes in Tactile American Sign Language. PhD Dissertation, Union
Institute and University, Cincinnati.
Collins, Steven/Petronio, Karen
1998 What Happens in Tactile ASL? In: Lucas, Ceil (ed.), Pinky Extension and Eye Gaze:
Language Use in Deaf Communities. Washington, DC: Gallaudet University Press, 18⫺37.
Comrie, Bernard
1989 Language Universals and Linguistic Typology (2nd Edition). Chicago: University of
Chicago Press.
Cooke, Michael/Adone, Dany
1994 Yolngu Signing ⫺ Gestures or Language? CALL Working Papers, Centre for Aborigi-
nal Languages and Linguistics, Batchelor College, Northern Territory, Australia.
Corballis, Michael C.
2003 From Hand to Mouth: The Gestural Origins of Language. In: Christiansen, Morten/
Kirby, Simon (eds.), Language Evolution. Oxford: Oxford University Press, 201⫺218.
Corballis, Michael C.
2010 The Gestural Origins of Language. In: WIREs Cognitive Science 1(1)
[http://wires.wiley.com/cogsci].
Corbett, Greville G.
2006 Agreement. Cambridge: Cambridge University Press.
Crystal, David/Craig, Elma
1987 Contrived Sign Language. In: Schlesinger, Izchak M./Namir, Lila (eds.), Sign Language
of the Deaf: Psychological, Linguistic, and Sociological Perspectives. New York: Aca-
demic Press, 141⫺168.
Davis, Jeffrey
2005 Evidence of a Historical Signed Lingua Franca Among North American Indians. In:
Deaf Worlds 21(3), 47⫺72.
Davis, Jeffrey
2006 A Historical Linguistic Account of Sign Language Among North American Indians. In:
Lucas, Ceil (ed.), Multilingualism and Sign Languages. From the Great Plains to Aus-
tralia. Washington, DC: Gallaudet University Press, 3⫺35.
Davis, Jeffrey
2007 North American Indian Signed Language Varieties: A Comparative Historical Linguis-
tic Assessment. In: Quinto-Pozos, David (ed.), Sign Languages in Contact. Washington,
DC: Gallaudet University Press, 85⫺122.
Davis, Jeffrey
2010 Hand Talk: Sign Language Among American Indian Nations. Cambridge: Cambridge
University Press.
Divale, William T./Zipin, Clifford
1977 Hunting and the Development of Sign Language: A Cross-cultural Test. In: Journal of
Anthropological Research 33, 185⫺201.
Doupe, Allison J./Kuhl, Patricia K.
1999 Birdsong and Human Speech: Common Themes and Mechanisms. In: Annual Review
of Neuroscience 22, 567⫺631.
Dryer, Matthew S.
2007 Word Order. In: Shopen, Timothy (ed.), Language Typology and Syntactic Description.
Vol. I: Clause Structure (2nd Edition). Cambridge: Cambridge University Press, 61⫺131.
Dryer, Matthew S.
2011 Order of Subject, Object and Verb. In: Dryer, Matthew S./Haspelmath, Martin (eds.),
The World Atlas of Language Structures Online. Munich: Max Planck Digital Library,
chapter 81 [Available at: http://wals.info/chapter/81; Accessed on 18 November 2011].
Farnell, Brenda M.
1995 Do You See What I Mean? Plains Indian Sign Talk and the Embodiment of Action.
Austin, TX: University of Texas Press.
Fitch, W. Tecumseh
2002 Comparative Vocal Production and the Evolution of Speech: Reinterpreting the De-
scent of the Larynx. In: Wray, Alison (ed.), The Transition to Language. Studies in the
Evolution of Language. Oxford: Oxford University Press, 21⫺45.
Fitch, W. Tecumseh
2005 The Evolution of Language: A Comparative Review. In: Biology and Philosophy 20,
193⫺230.
Fitch, W. Tecumseh
2006 The Biology and Evolution of Music: A Comparative Perspective. In: Cognition 100(1),
173⫺215.
Fitch, W. Tecumseh
2010 The Evolution of Language. Cambridge: Cambridge University Press.
Frankel, Mindy A.
2002 Deaf-blind Interpreting: Interpreters’ Use of Negation in Tactile American Sign Lan-
guage. In: Sign Language Studies 2(2), 169⫺181.
Frishberg, Nancy
1975 Arbitrariness and Iconicity: Historical Change in American Sign Language. In: Lan-
guage 51, 696⫺719.
Gardner, R. Allen/Gardner, Beatrix T./van Cantfort, Thomas E. (eds.)
1989 Teaching Sign Language to Chimpanzees. Albany, NY: State University of New York
Press.
Goddard, Ives
1979 The Languages of South Texas and the Lower Rio Grande. In: Campbell, Lyle/Mithun,
Marianne (eds.), The Languages of Native America: Historical and Comparative Assess-
ment. Austin, TX: University of Texas Press, 355⫺389.
Goldin-Meadow, Susan
2003 The Resilience of Language. What Gesture Creation in Deaf Children Can Tell Us About
How All Children Learn Language. New York: Psychology Press.
Hale, Ken
1983 Warlpiri and the Grammar of Non-configurational Languages. In: Natural Language
and Linguistic Theory 1, 5⫺47.
Hewes, Gordon W.
1973 Primate Communication and the Gestural Origins of Language. In: Current Anthropol-
ogy 14, 5⫺24.
Hewes, Gordon W.
1978 The Phylogeny of Sign Language. In: Schlesinger, Izchak M./Namir, Lila (eds.), Sign
Language of the Deaf: Psychological, Linguistic, and Sociological Perspectives. New
York: Academic Press, 11⫺56.
Jackendoff, Ray
1999 Possible Stages in the Evolution of the Language Capacity. In: Trends in Cognitive
Science 3(7), 272⫺279.
Johnson, Robert E.
1977 An Extension of Oregon Sawmill Sign Language. In: Current Anthropology 18(2),
353⫺354.
Kegl, Judy/Senghas, Ann/Coppola, Marie
1999 Creation through Contact: Sign Language Emergence and Sign Language Change in
Nicaragua. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creoli-
zation, Diachrony, and Development. Cambridge, MA: MIT Press, 179⫺237.
Kendon, Adam
1984 Knowledge of Sign Language in an Australian Aboriginal Community. In: Journal of
Anthropological Research 40, 556⫺576.
Kendon, Adam
1988 Parallels and Divergences between Warlpiri Sign Language and Spoken Warlpiri: Anal-
yses of Signed and Spoken Discourse. In: Oceania 58(4), 239⫺254.
Kendon, Adam
1989 Sign Languages of Aboriginal Australia: Cultural, Semiotic and Communicative Perspec-
tives. Cambridge: Cambridge University Press.
Kendon, Adam
2004 Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kimmelman, Vadim
2012 Word Order in Russian Sign Language: An Extended Report. In: Linguistics in Amster-
dam 5, 1⫺56 [http://www.linguisticsinamsterdam.nl/].
Klima, Edward/Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kroeber, Alfred L.
1958 Sign Language Inquiry. In: International Journal of American Linguistics 24(1), 1⫺19.
Kusters, Wouter
2003 Linguistic Complexity: The Influence of Social Change on Verbal Inflection. PhD Disser-
tation, University of Leiden. Utrecht: LOT.
Kwek, Joan
1991 Occasions for Sign Use in an Australian Aboriginal Community. In: Sign Language
Studies 71, 143⫺160.
Lewis, Jerome
2009 As Well as Words: Congo Pygmy Hunting, Mimicry, and Play. In: Botha, Rudolf/Knight,
Chris (eds.), The Cradle of Language. Oxford: Oxford University Press, 236⫺256.
Lewontin, Richard C.
1998 The Evolution of Cognition: Questions We Will Never Answer. In: Scarborough, Dan/
Sternberg, Saul (eds.), An Invitation to Cognitive Science (2nd Edition). Vol. 4: Methods,
Models, and Conceptual Issues. Cambridge, MA: MIT Press, 107⫺132.
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Lieberman, Philip
1984 The Biology and Evolution of Language. Cambridge, MA: Harvard University Press.
Lillo-Martin, Diane/Meier, Richard P.
2011 On the Linguistic Status of ‘Agreement’ in Sign Languages. In: Theoretical Linguistics
37, 95⫺141.
Lupyan, Gary/Dale, Rick
2010 Language Structure is Partly Determined by Social Structure. In: Public Library of
Science ONE 5(1), 1⫺10.
MacDonald, Roderick J.
1994 Deaf-blindness: An Emerging Culture? In: Erting, Carol J./Johnson, Robert, C./Smith,
Dorothy L./Snider, Bruce C. (eds.), The Deaf Way: Perspectives from the International
Conference on Deaf Culture. Washington, DC: Gallaudet University Press, 496⫺503.
MacNeilage, Peter F.
2008 The Origin of Speech. Oxford: Oxford University Press.
Mallery, Garrick
2001 [1881] Sign Language Among North American Indians. Mineola, NY: Dover Publications.
Marler, Peter
1997 Three Models of Song Learning: Evidence from Behavior. In: Journal of Neurobiology
33, 501⫺516.
Marsaja, I Gede
2008 Desa Kolok ⫺ A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen:
Ishara Press.
Meggitt, Mervyn J.
1954 Sign Language Among the Walbiri of Central Australia. In: Oceania 25, 2⫺16.
Meir, Irit/Padden, Carol/Aronoff, Mark/Sandler, Wendy
2007 Body as Subject. In: Journal of Linguistics 43, 531⫺563.
Meissner, Martin/Philpott, Stuart B.
1975a The Sign Language of Sawmill Workers in British Columbia. In: Sign Language Studies
9, 291⫺308.
Meissner, Martin/Philpott, Stuart B.
1975b A Dictionary of Sawmill Workers’ Signs. In: Sign Language Studies 9, 309⫺347.
Mesch, Johanna
2001 Tactile Sign Language: Turn Taking and Questions in Signed Conversations of Deaf-
blind People. Hamburg: Signum.
Mithun, Marianne
1992 Is Basic Word Order Universal? In: Payne, Doris L. (ed.), Pragmatics of Word Order
Flexibility. Amsterdam: Benjamins, 15⫺61.
Okanoya, Kazuo
2002 Sexual Display as a Syntactical Vehicle: The Evolution of Syntax in Birdsong and Hu-
man Language through Sexual Selection. In: Wray, Alison (ed.), The Transition to Lan-
guage. Studies in the Evolution of Language. Oxford: Oxford University Press, 46⫺63.
Padden, Carol
1988 The Interaction of Morphology and Syntax in American Sign Language. New York:
Garland Publishing.
Payne, John R.
1985 Negation. In: Shopen, Timothy (ed.), Language Typology and Syntactic Description.
Vol. 1: Clause Structure. Cambridge: Cambridge University Press, 197⫺242.
Perniss, Pamela/Pfau, Roland/Steinbach, Markus
2007 Can’t You See the Difference? Sources of Variation in Sign Language Structure. In:
Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative
Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 1⫺34.
Pfau, Roland
2008 The Grammar of Headshake: A Typological Perspective on German Sign Language Ne-
gation. In: Linguistics in Amsterdam 1, 37⫺74 [http://www.linguisticsinamsterdam.nl/].
Pfau, Roland/Steinbach, Markus
2006 Pluralization in Sign and in Speech: A Cross-modal Typological Study. In: Linguistic
Typology 10, 135⫺182.
Quinto-Pozos, David
2002 Deictic Points in the Visual-gestural and Tactile-gestural Modalities. In: Meier, Richard
P./Cormier, Kearsy A./Quinto-Pozos, David G. (eds.), Modality and Structure in Signed
and Spoken Languages. Cambridge: Cambridge University Press, 442⫺467.
Reed, Charlotte M./Delhorne, Lorraine A./Durlach, Nathaniel I./Fischer, Susan D.
1995 A Study of the Tactual Reception of Sign Language. In: Journal of Speech and Hearing
Research 38, 477⫺489.
Rijnberk, Gérard van
1953 Le Langage par Signes chez les Moines. In: Umiker-Sebeok, Jean/Sebeok, Thomas A.
(eds.) (1987), Monastic Sign Languages. Berlin: Mouton de Gruyter, 13⫺25.
Rizzolatti, Giacomo/Arbib, Michael A.
1998 Language Within Our Grasp. In: Trends in Neuroscience 21, 188⫺194.
Schick, Brenda
2003 The Development of ASL and Manually-Coded English Systems. In: Marschark, Marc/
Spencer, Patricia (eds.), Oxford Handbook of Deaf Studies, Language, and Education.
New York: Oxford University Press, 219⫺231.
Schuit, Joke
2007 The Typological Classification of Sign Language Morphology. MA Thesis, University
of Amsterdam.
Schuit, Joke/Baker, Anne/Pfau, Roland
2011 Inuit Sign Language: A Contribution to Sign Language Typology. In: Linguistics in
Amsterdam 4, 1⫺31 [http://www.linguisticsinamsterdam.nl/].
Schwartz, Sandrine
2009 Stratégies de Synchronisation Interactionnelle ⫺ Alternance Conversationnelle et Rétro-
action en Cours de Discours ⫺ Chez des Locuteurs Sourdaveugles Pratiquant la Langue
des Signes Française Tactile. PhD Dissertation, Université Paris 8.
Slobin, Dan I.
accepted Typology and Channel of Communication: Where Do Signed Languages Fit in? In:
Bickel, Balthasar/Grenoble, Lenore/Peterson, David A./Timberlake, Alan (eds.),
What’s Where Why? Language Typology and Historical Contingency: A Festschrift to
Honor Johanna Nichols.
Stokoe, William C.
1978 Sign Language and the Monastic Use of Lexical Gestures. In: Semiotica 24, 181⫺194.
Taylor, Allan R.
1975 Nonverbal Communication in Aboriginal North America: The Plains Sign Language.
In: Umiker-Sebeok, Jean/Sebeok, Thomas A. (eds.) (1978), Aboriginal Sign Languages
of the Americas and Australia. Volume 2: The Americas and Australia. New York: Ple-
num, 223⫺244.
Tomkins, William
1969 [1931] Indian Sign Language. New York: Dover Publications.
Vargha-Khadem, Faraneh/Watkins, Kate/Alcock, Katherine/Fletcher, Paul/Passingham, Richard
1995 Praxic and Nonverbal Cognitive Deficits in a Large Family with a Genetically-transmit-
ted Speech and Language Disorder. In: Proceedings of the National Academy of Sciences
92, 930⫺933.
Warner, W. Lloyd
1937 Murngin Sign Language. In: Umiker-Sebeok, Jean/Sebeok, Thomas A. (eds.) (1978),
Aboriginal Sign Languages of the Americas and Australia. Volume 2: The Americas and
Australia. New York: Plenum, 389⫺392.
Wilbur, Ronnie B.
2000 Phonological and Prosodic Layering of Nonmanuals in American Sign Language. In:
Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited. An Anthology
to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 213⫺
244.
Wurtzburg, Susan/Campbell, Lyle
1995 North American Indian Sign Language: Evidence of Its Existence Before European
Contact. In: International Journal of American Linguistics 61, 153⫺167.
Zeshan, Ulrike
2004 Hand, Head, and Face: Negative Constructions in Sign Languages. In: Linguistic Typol-
ogy 8, 1⫺58.
Zeshan, Ulrike
2008 Roots, Leaves and Branches ⫺ The Typology of Sign Languages. In: Quadros, Ronice
M. de (ed.), Sign Languages: Spinning and Unraveling the Past, Present and Future. 45
Papers and 3 Posters from the 9th Theoretical Issues in Sign Language Research Confer-
ence. Petrópolis: Editora Arara Azul, 671⫺695.

Roland Pfau, Amsterdam (The Netherlands)


24. Shared sign languages


1. Introduction
2. Shared signing communities
3. Common features of shared signing communities
4. The linguistic structure of shared sign languages
5. Discussion
6. Summary and conclusion
7. Literature

Abstract
In communities with an unusually high incidence of deafness, sign languages shared by
both hearing and deaf community members are found to spontaneously develop. The
sociolinguistic setting of these shared sign languages, also known as “village sign lan-
guages”, differs considerably from the settings of the “macro-community sign languages”
studied so far. This chapter provides an overview of communities with a high incidence
of deafness around the globe, followed by an overview of the sociological and sociolin-
guistic features that characterize them. A description is then given of the structural fea-
tures in which shared sign languages appear to differ from the sign languages of large
Deaf communities. A discussion of the role of language age and language ecology in
shaping shared sign languages concludes this chapter.

1. Introduction
Scattered around the globe, a number of small communities with a high incidence of
hereditary deafness exist. The island of Martha’s Vineyard (Massachusetts, USA) is a
well-known example of such a community. Communities with a similarly high incidence
of deafness are found in Asia, Africa, and the Americas.
In all of the reported communities, a local sign language has spontaneously emerged
as a result of an incidence of deafness that is considerably higher than the 0.1 % inci-
dence estimated for developed countries. The sign languages that emerged in these
communities are used extensively by both deaf and hearing community members. As
such, they provide a unique opportunity to evaluate several key phenomena in (sign)
linguistics, including the evolution of conventionalization and structural complexity in
sign languages. In addition, they are likely to offer new insights into the correlation
between sign language structure and sociolinguistic setting.
Several labels and classifications have been proposed to refer to communities with
a high incidence of hereditary deafness. Labels that have been coined for these commu-
nities are isolated deaf communities (Washabaugh 1979), assimilating communities (Ba-
han/Poole-Nash 1995), assimilative societies (Lane et al. 2000), and integrated communities
(Woll/Ladd 2003). In her work on the Bedouin community of Al-Sayyid, Kisch (2008)
argues against classifying communities with a high incidence of deafness and wide-
spread sign language use as a type of Deaf community. Instead, she argues for a classifi-
cation of signing communities, rather than of d/Deaf communities, and proposes the
term shared signing community.
The sign languages of shared signing communities have also been classified. Wood-
ward (2003), for instance, proposes the term indigenous sign language. In this chapter,
I adopt Kisch’s term shared signing community to refer to the communities listed in
section 2 and, by analogy, use shared sign language to refer to their sign languages.
The chapter starts with an overview of the communities with a high incidence of
deafness reported in the literature to date in section 2. In section 3, I discuss a number
of sociological and sociolinguistic features these communities have in common. In sec-
tion 4, I address a number of linguistic features which are similar across different
shared sign languages. The last section contains a discussion of the phenomenon of
shared signing communities and their sign languages.

2. Shared signing communities

In this section, I present a compact overview of a number of shared signing communi-
ties from around the globe, as reported in the literature. All of them have an incidence
of deafness that is several times higher than 0.1 %, which is the estimated incidence of
deafness in developed countries (Martin et al. 1981). The overview includes informa-
tion on the percentage of deaf people in the respective community and references
to the linguistic and sociological studies concerning these communities and their sign
languages. The map in Figure 24.1 illustrates the geographical distribution of the com-
munities presented in this section.

Fig. 24.1: Communities around the world with a high incidence of deafness

2.1. North America

For North America, only two communities with a high incidence of deafness have been
described in some detail: the well-known case of Martha’s Vineyard in Massachusetts
(USA) and a Keresan village in Central New Mexico (USA). In addition to these two
cases, a high incidence of deafness is reportedly also found in the Amish and Mennon-
ite communities in Lancaster County, Pennsylvania (Mengel et al. 1967) as well as in
Inuit communities in Nunavut, Canada (Joke Schuit, personal communication).

2.1.1. Martha’s Vineyard

Martha’s Vineyard is an island off the coast of Massachusetts in the northeast of the
United States. Tracing the history of deafness on the island in the period between the
18th and the mid 20th century, Groce (1985) reconstructs attitudes towards deaf people
and sign language usage. The incidence of deafness on the island as a whole was 0.65 %.
In some communities on the island, however, the incidence was as high as 2 % (in
Tisbury) and 4 % (in Chilmark). Since the mid 20th century, the incidence of deafness
on the island has decreased to an average incidence, resulting in the disappearance of
the local sign language. Poole-Nash (1976, 1983) tries to reconstruct what the sign
language looked like. The case of Martha’s Vineyard has fascinated deaf people and
scientists because of the high level of integration of deaf and hearing islanders de-
scribed by Groce (1985).

2.1.2. Keresan Pueblo in central New Mexico

In a small Keresan-speaking community in central New Mexico (USA), 15 out of 650
inhabitants (2.3 %) have severe to profound hearing loss. The local sign language is
believed to have developed in one family to communicate with deaf family members
(Kelley 2001). Nowadays, the sign language is used in two different ways: first, as a
primary language by deaf community members, and second, as an alternative to spoken
language by hearing community members. Interestingly, Keresan Pueblo Sign Lan-
guage is claimed to have an origin that sets it apart from Plains Indian Sign Language,
which primarily evolved for inter-tribal communication (see chapter 23 for details).
Keresan Pueblo Sign Language is endangered by a shift to American Sign Language
(ASL) or Signed English, due to Deaf education and acculturation (Kelley/McGregor
2003).

2.1.3. Nunavut (Canada)

In at least some Inuit communities in the large territory of Nunavut (Canada), an
incidence of deafness of 0.5 % is attested. An indigenous sign language, Inuit Sign
Language, is used by some deaf Inuit, but most deaf Inuit now use ASL (Schuit/Baker/
Pfau 2011).

2.2. Central America and the Caribbean


Central America and the Caribbean show a relatively high number of communities
with a high incidence of deafness. Most of these communities have arisen in the context
of the transatlantic slave trade.

2.2.1. Providence Island (Colombia)

Providence Island is an English Creole-speaking Colombian island in the Caribbean
Sea. Around 1985, 19 people out of a total island population of 2500⫺3000 were deaf
(0.67 %). The deafness is caused by Waardenburg syndrome, in which pigmentation
disorders of the skin co-occur with hearing loss.
Providence Island Sign Language (PISL), is used in communicating with deaf people.
Providence Island was one of the first communities with a high incidence of deafness
to be studied by linguists, in particular by William Washabaugh and ⫺ to a lesser
extent ⫺ by James Woodward. These linguists published an initial article together with
Susan DeSantis (Washabaugh/Woodward/DeSantis 1978), followed by three articles by
Woodward (1979, 1982, 1987) and a series of articles and a book by Washabaugh (1979,
1980a,b, 1981a, 1985, 1986, 1990).

2.2.2. Cayman Islands (United Kingdom)

A high incidence of deafness seems to have existed on Grand and Little Cayman (Cay-
man Islands) as well, together with a locally developed sign language (see Doran (1952)
for Little Cayman; Washabaugh (1981b) for Grand Cayman). Washabaugh (1981b) re-
constructs the history of the Deaf community and the sign language that evolved on
Grand Cayman. However, at the time of Washabaugh’s research, the use of Grand Cay-
man Sign Language was already in decline as most Deaf inhabitants of the island
were educated in the ASL-based Jamaican Sign Language, used in Deaf education
in Jamaica.

2.2.3. Saint Elisabeth’s (Jamaica)

In Jamaica, “country sign language is used by perhaps 200 deaf people living within a
few miles of each other on an isolated part of the island” (Dolman 1986, 235). That
part of the island is Saint Elisabeth’s parish, the total number of inhabitants of which
is not specified. Features of the sign language are described by Dolman (1986), who
also remarks that the sign language is in danger of extinction due to the increasing use
of the ASL-based Jamaican Sign Language. A recent presentation by Cumberbatch
(2006) confirms the increasingly endangered status of the language.

2.2.4. Surinam

In Surinam, several communities with a relatively high incidence of deafness seem to
exist. Groce (1985, 69) lists a number of such communities, each with a resulting sign
language. She quotes Tervoort (1978), who
mentioned a group of Deaf Indian villagers in Surinam.
Van den Bogaerde (2006) encountered a slightly higher than average incidence of
deafness in Kosindo, a Saramaccan-speaking community of African descent in the Suri-
namese jungle. She reports that of the 2000 inhabitants of the community, about 0.5 %
were deaf, and that the deaf people could communicate efficiently with hearing people
in a local sign language.

2.2.5. Yucatan (Mexico)

Two scholars report a high incidence of deafness for a Yucatec Maya village in Yucatan
(Mexico). Shuman (1980) describes the village of Nohya in Central Yucatan as having
about 300 inhabitants, 12 of whom are deaf, i.e. 4 %. Shuman finds that part of the
lexicon of the sign language in Nohya is based on the conventional gestures of the
larger Mayan culture. Shuman and Cherry-Shuman (1981) published an annotated list
of signs.
The second scholar, Johnson (1991, 1994), describes a village in north Central Yuca-
tan with 400 inhabitants, 13 of whom are deaf, i.e. 3.25 %. He also notes that: “We
found small populations of deaf people in most villages and we were told of at least
one village with an equally large proportion of deaf inhabitants” (Johnson 1991, 468).
Johnson does not mention the name of the community he is describing, but it is very
likely to be the same village that Shuman describes.
Both Johnson and Shuman find that hearing community members sign well. In fact,
the sign language in Nohya may be related to an indigenous sign language that is
shared by all Maya groups in Mexico and Guatemala (Fox Tree 2009). Despite the
extensive command of signing by hearing people, deaf people appear not to be fully
integrated. They have a lower marriage rate and do not have access to most of the
discourse, which is conducted in spoken Maya.
Currently, several scholars are conducting research on the sign language of Nohya,
including Gabriel Arellano at the Deafness, Cognition, and Language Research Centre
in London (UK), Olivier LeGuen at the Centro de Investigaciones y Estudios Superi-
ores en Antropología Social in Mexico City, and Ernesto Escobedo at the International
Centre for Sign Languages and Deaf Studies in Preston (UK).

2.2.6. Jicaque community (Honduras)

Lastly, a high incidence of deafness has been reported for a clan of Jicaque Indians in
Honduras who fled to the mountains in 1870 and established a community there (Chap-
man/Jacquard 1971). It is not known what the situation is like at present.

2.3. South America

2.3.1. Urubú-Kaapor (Brazil)

The Urubú or Kaapor, a people spread across several villages in the Brazilian Amazon,
use a sign language in communicating with deaf people (Kakumasu 1968; Ferreira
Brito 1984). In 1965, Kakumasu counted seven deaf people in a total of 500 Urubú-
Kaapor, i.e. 1.4 %. Almost 20 years later, Ferreira Brito (1984) counted five deaf people
in a total population of less than 500 people. No contemporary information is available.
Kakumasu (1968) describes selected linguistic features of the sign language, whereas
Ferreira Brito (1984) compares the language with Brazilian Sign Language.

2.4. Africa

Whereas quite a number of communities with a high incidence of deafness have been
described for the Americas, to date only one clear case of such a community has been
identified in Africa, i.e. the village of Adamorobe in Ghana.

2.4.1. Adamorobe (Ghana)

Adamorobe is an Akan-speaking village in Ghana with an incidence of deafness of
2 %: around 35 people out of a total population of 1400 are deaf (Nyst 2007a). The
incidence may have been higher in the past: David et al. (1971) mention a deaf popula-
tion of 40 in a total population of 400 (i.e. 10 %), while Frishberg (1987) reports an
incidence of 15 %. Amedofu et al. (1999), on the other hand, mention an incidence of
1.6 %. Deafness in the village has existed as long as anybody can remember. The differ-
ences in ascribed frequencies are likely to be the result of miscalculation rather than
reflecting actual variation in the incidence.
The village has been studied by audiologists and geneticists (Amedofu et al. 1999;
Meyer et al. 2002) and sign linguists (Frishberg 1987; Nyst 2007a,b). Recently, an an-
thropological study of ‘deaf space’ in Adamorobe was completed by Kusters (2012).

2.5. Asia

2.5.1. Ban Khor (Thailand)

The village of Ban Khor in Thailand has 2741 inhabitants; it also has a high incidence
of deafness (Nonaka 2007). With a total number of 16 deaf people, the incidence is
0.6 %. A local sign language emerged about 70 years ago. In her study of the signing
community, Nonaka investigates patterns of sign language acquisition and baby talk.
Ban Khor Sign Language is endangered, as a result of increased contact with Thai Sign
Language, among other reasons (Nonaka 2004).

2.5.2. Desa Kolok (Indonesia)

The northern part of Bali, Indonesia, has an increased incidence of deafness. In one
village, Desa Kolok, 47 people were found to be deaf in a total population of 2186, i.e.
2 %. Deafness has existed in the community for several generations (Branson et al.
1996; Marsaja 2008). In his monograph on the village and its sign language, Kata Kolok,
Marsaja describes the socio-cultural adaptations to deafness in the village and the eth-
nography of communication of Kata Kolok. De Vos (forthcoming) presents an exten-
sive study of the use of space in Kata Kolok.

2.5.3. India

Another Asian community with a high incidence of deafness is found in Andhra
Pradesh in India (Majumdar 1972). Sibaji Panda of the University of Central Lan-
cashire (UK) is currently investigating the shared signing community of Alipur, in
southern India, which is a Shia Muslim enclave in a dominantly Hindu area. The com-
munity has an estimated 250 deaf people in a total population of several thousand.
The local sign language is endangered through increasing contact with Indian Sign Lan-
guage.

2.6. Middle East

2.6.1. Israel

The Bedouin community of Al-Sayyid in the Negev in Israel has an incidence of deaf-
ness that is as high as 3.2 %: 120 people out of a total population of 3700 are deaf
(Kisch 2008). Deafness emerged only 4⫺5 generations, i.e. around 80 years, ago (Kisch
2006, 2008). A similar high incidence of deafness is found in at least two other Bedouin
communities in the Negev. All three communities have developed their own sign lan-
guage (Kisch 2007).
The Al-Sayyid community is one of the few communities with a high incidence of
deafness that has been studied extensively from both an anthropological and a linguis-
tic perspective. Research on the linguistic structure of Al-Sayyid Bedouin Sign Lan-
guage (ABSL) is done by a team of four linguists (Sandler et al. 2005; Aronoff 2007;
Meir et al. 2007; Aronoff et al. 2008).
A high incidence of deafness has also been reported for ethnic enclaves in northern
Israel (Costeff/Dar 1980).

2.7. Europe

To the best of my knowledge, there are no contemporary reports on communities with
a high incidence of hereditary deafness in Europe. Historically, the village of Katwijk
in the Netherlands (Aulbers 1959), the commune of Ayent in Switzerland (Secretan
1954; Hanhart 1962), and a Scottish clan and a Jewish community in Britain (Fraser
1976) had a relatively high incidence of deafness.

3. Common features of shared signing communities

Shared signing communities provide a rare opportunity to investigate a relatively un-
known type of primary sign language and signing community. In quite a number of
cases, medical researchers have been the first to identify a high incidence of deafness
in a particular community. In a considerable number of communities, genetic research
has been undertaken as well. However, discussing the results of these studies falls
outside the scope of this chapter.
In addition to medical researchers, linguists and sociologists started doing research
on shared signing communities and their sign languages several decades ago. However,
most of the descriptions published to date are preliminary. In fact, for most communi-
ties, only one or two publications are available which, in most cases, do not go beyond
giving a first impression of the sign language and its community. Some studies have
been conducted by linguists, others by sociologists. Despite the different perspectives
taken and the variation in detail of description, a number of interesting commonalities
come to the surface. A comparison reveals that the shared signing communities differ
from large Deaf communities in a range of features:

Proportion of hearing signers
In all of the shared signing communities, a large part of the population signs. This
includes hearing signers, who actually form the majority of regular signers. Marsaja
(2008) extensively studies the spread of sign language competence across the inhabit-
ants of Desa Kolok. He finds that 68 % of the community are regular sign language
users. Nonaka (2007) observes that 15⫺25 % of the hearing inhabitants of Ban Khor
sign regularly. The lower number of hearing signers in the latter village is in line with
the generally lower incidence of deafness in Ban Khor as compared to Desa Kolok.

Number and percentage of deaf signers
In the communities listed in section 2, the incidence of deafness ⫺ where indicated ⫺
ranges between 0.5 % and 4 %, with the total number of deaf signers ranging from 12
to 250. In some cases, a (much) higher incidence of deafness has been reported, but
these percentages seem to overstate the actual situation, as far as can be deduced from
the reported numbers. Thus, Frishberg (1987) mentions an incidence of 15 % for
Adamorobe, but it is very likely that this should be 1.5 %, in view of the incidence of
2 % in 2004 (Nyst 2007a). Similarly, Washabaugh states that 19 deaf people in a total
population of 2500⫺3000 equals an incidence of 6.7 %, when the figures in fact yield
0.67 %. It seems likely that these miscalculations result from the specific communication
patterns observed in a community with an increased incidence of deafness and the
impression these patterns leave on the outside observer.
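Spelled out (a minimal check of the Providence Island figures; the population value of 2750 is an assumption made here for illustration, simply the midpoint of the reported range of 2500⫺3000):

\[
\text{incidence} = \frac{\text{number of deaf inhabitants}}{\text{total population}} \times 100\,\%, \qquad \frac{19}{2750} \times 100\,\% \approx 0.69\,\%,
\]

an order of magnitude below the reported 6.7 % and in line with the 0.67 % cited in section 2.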
An incidence of deafness of 0.5 % is clearly high compared to the incidence of
childhood deafness in Western Europe, estimated to be around 0.1 % (Martin et al.
1981). However, UNICEF (1985) estimates an incidence of 0.5 % for moderate-severe
hearing loss in children in developing countries. Although this estimate also includes
hard-of-hearing children (i.e. those with moderate hearing loss), the UNICEF report
suggests that some of the communities discussed in this chapter may have an incidence
of deafness that is only slightly higher than that of the wider area.

Language endangerment
In view of the restricted number of users of shared sign languages, the ecology that
triggered the spontaneous development of a sign language in these communities is
extremely fragile. Demographic changes may easily lead to a reduction of the incidence
of deafness. Thus, in Martha’s Vineyard, increased contact with the mainland, as a
result of Deaf education among other factors, resulted in a reduction of the incidence
of deafness to an average rate and consequently led to the extinction of the local sign
language. Just as in Martha’s Vineyard, most sign languages described in this chapter
differ significantly from the respective sign language used in deaf education at a na-
tional level. As such, when deaf children from “deaf villages” start attending school,
they typically become bilingual signers. Especially when such schools are boarding
schools, the national sign language may become the dominant language of these bilin-
gual signers. Once children are no longer fluent users of a language, the vitality of that
language is at stake. Dolman (1986, 241) states that “one feels a certain wistfulness
realising that with the school’s continued success a language and even a way of life are
likely to be lost forever.” Whereas Deaf education may be a factor increasing the
endangerment of shared sign languages, hearing signers may be a positive factor when
it comes to the vitality of shared sign languages. Nonaka (2007) points out that hearing
signers may be the key “keepers” of shared sign languages, as they have little if any
incentive to learn the national sign language that endangers the local sign language.
An endangered status has been explicitly claimed for Country Sign Language (Dol-
man 1986; Cumberbatch 2006), Ban Khor Sign Language (Nonaka 2004), Keresan
Pueblo Sign Language (Kelley/McGregor 2003), and Adamorobe Sign Language
(AdaSL, Nyst 2007a).

No Deaf community
There is usually no clearly distinct Deaf community. Most studies report that there are
virtually no activities that would single out the deaf inhabitants. Deaf people identify
themselves along social structures existing in the wider community/culture (Shuman
1980; Johnson 1991). A partial exception is Desa Kolok, where deaf community mem-
bers take on particular responsibilities at the village level, such as decorating and clean-
ing particular temples (Marsaja 2008). In Adamorobe, Desa Kolok, and the community
of Al-Sayyid, increased contact with educational and medical services as well as Deaf
people from outside the village triggers an emerging sense of Deafhood.

Transmission
The transmission of sign languages of Deaf communities is usually characterized by
peer-to-peer transmission. In contrast, the transmission of shared sign languages resem-
bles more closely the pattern common for spoken language transmission in that deaf
children acquire sign language in the presence of adult language models.

Attitudes towards deafness and sign language
Deaf and hearing people in shared signing communities tend to have a neutral to
positive attitude towards deaf people. In some communities, religious explanations are
given for the high incidence of deafness in the community. Thus, both Adamorobe and
Desa Kolok recognize the influence of a deaf god. In Ban Khor, a Buddhist community,
the deafness is considered to be the result of a karmic sin.
In all communities, sign language is straightforwardly accepted as the normal means
of communicating with deaf people. Still, the social position of deaf people is not always
fully equal to that of hearing people, which is most clearly reflected in a lower marriage
rate among deaf people in some villages (cf. Shuman (1980) for the Yucatan Maya
community; Washabaugh (1986) for Providence Island; Nyst (2007a) for Adamorobe).

The study of spoken languages has shown that the social setting of a language may
influence its linguistic structure significantly. This is most obvious in the case of creoles
and pidgins, whose structures reflect language contact and specific acquisition patterns.
Similarly, comparative research on the sign languages of large Deaf communities and
the signing of isolated home signers (see chapter 26) has demonstrated a pervasive
influence of social setting on sign language structure. Shared sign languages, evolving in
yet other circumstances, allow further investigation of the direct or indirect correlation
between signing community and sign language structure.

4. The linguistic structure of shared sign languages

Systematic descriptions of the linguistic structure of shared sign languages are ex-
tremely scarce. The task of distilling the linguistic features shared by some or all shared
sign languages is further complicated by the variety of methodologies and perspectives
taken by different researchers. In a number of cases, this leads to studies contradicting
each other. Creating accessible corpora of these languages seems imperative at this
stage. Such corpora are much needed, as most shared sign languages are intrinsically
fragile and often endangered. Shared sign language corpora could also provide a relia-
ble basis for comparative studies on this type of sign language.

4.1. Phonology

Almost all studies on shared sign languages address at least some basic aspects of the
sub-lexical or articulatory level of these languages.

4.1.1. Handshape

A few studies address the issue of handshapes in a given shared sign language. Washa-
baugh (1986), for instance, claims that Providence Island Sign Language has relatively
few handshapes, which are also unmarked. Comparing PISL with ASL, he finds that
the former has 10 distinctive handshapes and the latter 17. Nyst (2007a) finds 29 pho-
netic handshapes in AdaSL. Using the approach developed for Sign Language of the
Netherlands (NGT) by van der Kooij (2002), Nyst distills a total of only seven pho-
nemic handshapes from the 29 phonetic handshapes, as opposed to the 31 phonemic
handshapes described for NGT by van der Kooij (2002). A relatively small set of un-
marked handshapes has been claimed to be characteristic of home sign languages as
well (see e.g. Kendon (1980) for Enga Sign Language in Papua New Guinea). For Kata
Kolok, Marsaja (2008) lists 28 different phonemic handshapes, but it is not clear which
criteria were used to distinguish phonemic from non-phonemic handshapes.

4.1.2. Multi-channeledness

Multi-channeled signs are signs that are not only articulated by the hands, but also
involve non-manual articulators, such as the face, the mouth, the leg, or the body as a
whole. Some shared sign languages make relatively extensive use of non-manual el-
ements. PISL, for instance, has “a significant non-manual component” in 36.5 % of its
lexical signs, as compared to 1.9 % for ASL (Washabaugh (1986, 56); also see Dolman
(1986) for Country Sign Language). In AdaSL, a considerable number of signs are
made with body parts other than the hands, either alone or in unison with the hands.
These articulators may be the head (in lizard), the face, the mouth, the leg (insult),
and the arm/elbow (refuse, chase) (Nyst 2007a, 55).
Shared sign languages appear to differ from each other in their use of mouthings
(articulations based on spoken words). AdaSL, on the one hand, makes extensive use
of mouthings, for example, in distinguishing the manually identical signs for the colour
terms black, white, and red, as illustrated in Figure 24.2 (Nyst 2007a, 93). Kata Kolok,
on the other hand, does not use mouthings at all (Marsaja 2008, 157).

Fig. 24.2: The signs for black (a), white (b), and red (c) in Adamorobe Sign Language

4.1.3. Iconicity

The languages of shared signing communities are often described as being more iconic
than sign languages of large Deaf communities (e.g. Dolman (1986) for Country Sign
Language; Ferreira Brito (1984) for Urubú-Kaapor Sign Language; Washabaugh
(1986) for PISL). However, iconicity is notoriously difficult to assess and claims con-
cerning high levels of iconicity are consequently difficult to evaluate. AdaSL has an
unusual iconic feature, as it rarely depicts the outline of entities. Where possible, the
articulator stands for the entity as a whole, rather than tracing its outline (Nyst 2007a).
See, for example, the sign for bottle illustrated in Figure 24.3 (Nyst 2007a, 124).

Fig. 24.3: bottle in Adamorobe Sign Language

4.1.4. Location

Both in-depth and preliminary studies of shared sign languages never fail to comment
on the use of space and locations in the shared sign language under investigation. A
large signing space and a proliferation of locations, including locations not commonly
used in sign languages of large Deaf communities (e.g. below the waist or behind the
body), seem to be common to most shared sign languages. Thus, Kata Kolok has signs
that are located on the buttocks (injection, see Figure 24.4), on the tongue (salt, see
Figure 24.5), and on the crotch (offspring).

Fig. 24.4: injection in Kata Kolok (Marsaja 2008, 143). Copyright © 2008 by Ishara Press. Reprinted with permission.
Fig. 24.5: salt in Kata Kolok (Marsaja 2008, 143). Copyright © 2008 by Ishara Press. Reprinted with permission.

Similarly, AdaSL has signs that are articulated on the knee (a personal name sign),
the foot (insult), the crotch (krobo, an ethnic name sign), the buttocks (injection),
the thigh (summon, trousers), and the back (younger-sibling). Interestingly, a large
signing space and a proliferation of locations have also been described for earlier varie-
ties of sign languages of large Deaf communities (cf. Kegl et al. (1999, 183, 196) for
Nicaraguan Sign Language; Frishberg (1975) for Old French Sign Language).

4.2. Spatial morphology and syntax

A striking feature of some shared sign languages is the absence of spatial inflection on
verbs to mark agreement (see chapter 7). In these sign languages, transfer verbs typi-
cally move away from the body of the signer, allowing no directional modification (cf.
Washabaugh (1986) for PISL; Aronoff et al. (2004) for ABSL; Marsaja (2008) for Kata
Kolok). In contrast, AdaSL does allow spatial modification to mark agreement, e.g. in
the verbs marry and insult (see Schuit/Baker/Pfau (2011) for Inuit Sign Language).
Another striking feature of some shared sign languages is the virtual absence of
classifiers (see chapter 8) in intransitive verbs of motion and location (Washabaugh
(1986) for PISL; Nyst (2007a) for AdaSL). In Nyst (2007a), I argue that the absence
of such classifier verbs is directly related to the absence of a reduced signing space in
AdaSL. Instead of classifiers, AdaSL makes use of two types of serial verb construc-
tions, parallel to structures attested in the surrounding spoken language, Akan. Again,
the absence of classifier handshapes in intransitive verbs of motion is not a general
characteristic of shared sign languages, as Kata Kolok does make extensive use of such
classifiers (Marsaja 2008).
A third feature that has been described for several shared sign languages concerns
the use of pointing. In PISL, Kata Kolok, and Inuit Sign Language, pointing is directed
towards real world locations (absolute pointing), rather than towards metaphorical
locations in a reduced signing space (de Vos 2009; Schuit/Baker/Pfau 2011). It seems
that AdaSL observes a similar restriction on the use of pointing signs, but further
research is needed to verify this claim.
A final interesting feature concerns the use of simultaneous constructions or buoys
(Liddell 2003), in which the two hands sign semantic content more or less independ-
ently from each other. AdaSL hardly uses such structures (Nyst 2007b), which is quite
striking in view of the fact that they are abundantly used in sign languages of large
Deaf communities (Vermeerbergen et al. 2007).
All of the features described in this section as being absent in shared sign languages
are very common in sign languages of large Deaf communities. In fact, they are so
common that they have often been considered to be universal, modality-specific fea-
tures of sign languages. The studies on shared sign languages prove that stable sign
languages can do without these features, optionally developing alternative structures to
fulfil the communicative task otherwise fulfilled by such “modality-specific” features.

5. Discussion
A summary of the few linguistic descriptions available suggests a significant difference
between shared sign languages and sign languages of large Deaf communities. In par-
ticular, at the articulatory level the shared sign languages seem to be special in their
use of relatively few, unmarked handshapes, a large signing space with a proliferation
of locations, and a high degree of multi-channeledness. At the morphosyntactic level,
striking features described for a few shared sign languages include the absence or
infrequent use of spatially modifiable agreement verbs, classifier verbs of motion and
location, pointing towards abstract locations for person reference, and simultaneous
constructions. In this section, I wish to address the question of the extent to which the specific
social setting of shared signing communities may affect sign language structure.

5.1. Social setting and sign language structure

In his works on PISL, Washabaugh (1986) describes the language as highly context-
dependent and “immature”. He ascribes the difference between PISL and sign lan-
guages of large Deaf communities to the absence of a distinct Deaf community. He
further argues that in the absence of a Deaf identity and a Deaf community, there is no
alternative community in which exclusive or predominant use is made of sign language.
Instead, deaf people tend to focus more on their hearing environment for their commu-
nicative needs. Hearing-deaf interaction, however, appears to be limited in both extent
and depth as compared to hearing-hearing interactions.
In Adamorobe, a distinct Deaf community seems to have started to develop only
recently and AdaSL is used extensively by deaf and hearing signers. Apparently, in
this case, it is not so much the absence of a Deaf community, but rather the large
proportion of hearing signers in the village which has significantly affected the local
sign language. Thus, several features of Akan, the dominant spoken language of hear-
ing signers, are visible in AdaSL, among others at the lexical and the syntactic level.
Sandler et al. (2005) ascribe several aspects in which ABSL is found to differ from
sign languages of large Deaf communities to the young age of the sign language, which
is only about 70 years old. Relating these features to age implies a developmental
perspective on the structure of ABSL. That is, the researchers assume that such fea-
tures will emerge if enough time passes. One has to keep in mind, however, that ABSL
shares most of these features with sign languages of other shared signing communities.
For example, it shares the lack of spatially modified agreement verbs with Kata Kolok,
which has a long history. Similarly, the very limited use of classifiers in ABSL is not
necessarily a sign of immaturity, since AdaSL, an old sign language, makes no system-
atic use of classifier handshapes in intransitive verbs of motion either.
The next section evaluates the explanatory force of a unidirectional developmental
perspective on the structure of shared sign languages.

5.2. Shared sign languages and the evolution of sign languages

Washabaugh (1986, 10) qualifies PISL as “immature”, specifying that it is not a “com-
plete and mature language”. Goldin-Meadow (2005, 2271) considers ABSL a unique
system between home sign and “fully formed sign languages” and states: “Homesign
tells us where ABSL may have started; fully formed sign languages tell us where it
is going.”
Conceiving diversity in sign language types along developmental lines has a number
of consequences. It implies, among other things, that:

1) There is an ultimate stage of sign language development, a sort of ‘super sign lan-
guage’. Which sign language (type) represents or comes closest to such a ‘super
sign language’? Perhaps one might expect to find such an ultimately developed sign
language in a monolingual signing community, without hearing/speaking signers (a
“sublimation” of the Deaf community), and, due to the ample availability of adult
language models, without continuous recreolization.
2) All sign languages in the world will eventually move towards the ultimate stage of
development if given the opportunity.
3) There is a hierarchy among sign language types as to which sign language has ad-
vanced more on the developmental cline.

Although it would be interesting to hypothesize about what structures would be em-
ployed in a sign language used in a monolingual signing community with a “normal”
acquisition pattern, to date such communities have not been found to exist. All we
have are “sub-ideal” environments for ultimate sign language development, including
(i) large Deaf communities with a majority of dominant sign language users, but pre-
dominantly peer-to-peer acquisition in the absence of adult language models (except
for a minority of deaf and hearing children who grow up in families with Deaf adults);
(ii) small signing communities with a considerable number of deaf signers, a ‘normal’
acquisition pattern, but a majority of non-dominant sign language users; and (iii) isolated deaf
signers in hearing environments.
To what extent is it coincidental that the sign languages of large Deaf communi-
ties ⫺ the sign languages with which most sign linguists are most familiar ⫺ would represent
the ultimate stage of sign language development? And is there historical evidence that
sign languages of large Deaf communities were at one point in their history shared
sign languages used in shared signing communities?
It seems likely that both shared sign languages and sign languages of Deaf commu-
nities developed out of home sign languages. In the case of a shared sign language, the
first deaf persons in a community with hereditary deafness create a home sign language
to communicate with their environment. Consequently, deaf children born into that
community start acquiring that home sign language, which expands into a shared sign
language. In the case of sign languages of Deaf communities, the scenario is likely to
be like that of Nicaraguan Sign Language (Kegl et al. 1999; also see chapter 36 on
creolization). That is, home signers form a community, e.g. in the context of a Deaf
school, and from the collection of different home sign languages, a new sign language
evolves. However, there is no straightforward scenario for the transformation of a
shared signing community into a large Deaf community. In other words, it is not likely
that a sign language of a ‘deaf village’ will become a national sign language.
Situations that may resemble more closely the historical social setting of some large
sign languages are the social contexts of sign languages like Hausa Sign Language
(Schmaling 2000) or Malian Sign Language (Nyst 2008), which evolved in the absence
of Deaf education in urban centres in the Hausa-speaking areas in Nigeria and in Mali.
These areas have an average incidence of deafness and the two sign languages evolved
and are used outside the context of Deaf education.
A similar developmental perspective used to be taken on spoken language develop-
ment. Generally, pidgins have been assumed to turn into creoles, i.e. languages of
greater complexity, as soon as they become the first language of a group of speakers.
Nowadays, creolists have replaced the concept of a developmental cline with what
Mufwene (2001) calls an ecological perspective on language. In that view, particular
features of a language may appear or disappear in concert with the dynamics of the
sociolinguistic setting of that language. On this view, creole formation involves patterns
of language acquisition and creation different from those of pidgins, and these affect
the linguistic structure of the creole. In contrast with a unidirectional developmental path, a lan-
guage ecology perspective recognizes that the ensemble of features of a given language
is influenced by the full set of diverse sociolinguistic features (contact, status, acquisi-
tion patterns, etc.) that make up its social setting.
Applying such a multidimensional and multidirectional perspective to sign lan-
guages provides a more adequate explanation for the structural differences found in
different sign language types, including home sign languages, shared sign languages,
and sign languages of large Deaf communities. Such an account may further explain
why stable, primary sign languages have developed in different ways. Indeed, further
study of shared sign languages may reveal the influence of the diverse dimensions that
figure in shared sign communities, such as the proportion of deaf and hearing signers,
the sheer number of deaf signers, the social position of deaf people, and language age.
Despite similarities between shared sign languages and both home sign languages
and sign languages of large Deaf communities, shared sign languages should not be
considered ‘half-way’ sign languages, trapped somewhere in the middle of a develop-
mental cline. The same holds for International Sign and alternate (secondary) sign
languages used by hearing signers, which are not developmental stages of an ultimate
visuo-gestural language, but rather examples of the different forms a visuo-gestural
language may take (see chapter 35 for International Sign and chapter 23 for secondary
sign languages). Shared sign languages constitute a type of sign language of their own
on a par with the sign languages used by large Deaf communities. By not having devel-
oped agreement verbs (as in the case of e.g. Kata Kolok) or classifier handshapes in
intransitive verbs of motion (as in AdaSL), shared sign languages prove that old, stable
primary sign languages are not obliged to develop the structures typically found in sign
languages of large Deaf communities. That is, the path towards developing agreement
verbs or classifier handshapes is an option, not a unidirectional developmental cline
that all mature sign languages inevitably go through. Shared sign languages are full-
fledged sign languages that are shaped by their sociolinguistic “ecology”. Rather than
being immature sign languages, they are maximally adjusted to the small size of the
community that uses them, to the various levels of language proficiency existing in
these communities, and to the multilingual setting in which they typically flourish (cf.
Jepson 1991).

5.3. Avenues for further research

Generally, the incidence of deafness in rural areas with little medical care tends to be
relatively high. In some areas, it is estimated to be as high as 0.4 %, which is close to the
incidence on Providence Island (0.67 %). What is the threshold in terms of incidence of
deafness for a community to become a shared signing community? Is there a threshold
at which there is a shifting balance or is it rather a matter of a sliding scale? Do other
factors play a role beyond the mere incidence of deafness?
Despite the similarities found between shared signing communities at the linguistic
and social level, the differences still outweigh them. To evaluate more closely the effect
of a high incidence of deafness on a community and its linguistic mosaic, it is necessary
to compare communities that resemble each other to a great extent and differ only in
one or two respects. For example, it might be revealing to study the differences be-
tween Desa Kolok, with 47 deaf people, and another village in northern Bali with only
around 16 deaf people, as mentioned by Branson et al. (1996).
Several studies point out that deafness in the area surrounding a shared signing
community may be increased as well (Kisch 2007; Marsaja 2008). For Al-Sayyid, Kisch
(2007) mentions that a similarly high incidence of deafness is found in at least two
neighbouring communities. However, the social adaptations that have been made in
these villages are quite different from those found in Al-Sayyid. As such, a comparison
of the sociolinguistic setting and the structure of the signed communication in these
three Bedouin communities can provide us with “cleaner” information about the rela-
tion between sociolinguistic setting and sign language structure, as the villages are
located in the same cultural, geographical, and socio-economic environment.
A related question is how shared sign languages compare to other sign languages
that have evolved and are still used outside the context of Deaf education, e.g. sign
languages used by Deaf communities in urban centres where no Deaf education is
available, or extensive family sign languages. Are the features in which shared sign
languages have been found to differ from the well-studied sign languages with a long
history of Deaf education really confined to shared sign languages or are they rather
found more generally in sign languages that have evolved outside the context of
Deaf schools?
Most of these types of sign languages have deliberately been left unstudied, under
the tacit assumption that it is difficult to tell whether or not these ‘sign systems’ are
true languages. The increasing number of in-depth studies into shared sign languages
is an important first step towards studying sign languages in the “grey area” between
home sign languages and sign languages of large Deaf communities. It is of utmost
importance to engage in this endeavour with an open mind, free of preconceived no-
tions like ‘sign systems’ and ‘full-fledged sign languages’. Only then can we begin to
understand which factors are relevant in the shaping of sign language structure.

6. Summary and conclusion

Scattered around the globe are communities with a high incidence of hereditary deaf-
ness, i.e. between 0.5 % and 4 % of the total population are deaf. In response to the
widespread occurrence of deafness, local sign languages have emerged that are shared
by both deaf and hearing community members. Following Kisch (2008), I refer to these
communities as shared signing communities and to their sign languages as shared sign
languages. In most of these communities, there is no distinct Deaf community and
hearing people have neutral to positive attitudes towards deaf people and sign lan-
guage. Shared sign languages differ from sign languages of large Deaf communities in
acquisition pattern, given that adult language models are available to more or less all
children who acquire a shared sign language. Also, the majority of users of shared
sign languages are hearing, as opposed to a majority of deaf signers in large Deaf
communities.
Structurally, shared sign languages differ from sign languages of large Deaf commu-
nities at the sub-lexical level, having relatively small sets of (mostly unmarked) hand-
shapes, a high degree of multi-channeledness, a large signing space, and a proliferation
of locations. At the morphosyntactic level, some shared sign languages are character-
ized by the absence of modality-specific structures such as spatially modified agree-
ment verbs, classifier handshapes in intransitive verbs of motion, and simultaneous
constructions ⫺ structures that seem to be more or less universally present in sign
languages of large Deaf communities. The ensemble of linguistic structures found in
several shared sign languages is evaluated in relation to the social setting of these
languages. In the literature, three extralinguistic factors have been mentioned that may
influence the shape of a given shared sign language. These are (i) the absence of a
Deaf community on Providence Island, resulting in an ‘immature’ sign language (Wash-
abaugh 1986); (ii) a majority of hearing signers in Adamorobe, whose primary language
is a spoken language (Nyst 2007a); and (iii) the relatively young age of Al-Sayyid
Bedouin Sign Language (various studies by Padden, Sandler, Meir, and Aronoff). The
role of age in shaping the structure of a sign language implies a developmental perspec-
tive on shared sign languages. In the last section, I contrast a developmental perspec-
tive with an “ecology” perspective. I argue that shared sign languages show that differ-
ent sign language types identified to date should not be conceived of as taking different
points on a unidirectional developmental path. Rather, a multidimensional model is
needed that recognizes the relevance of the complete set of factors involved in shaping
a language in the visuo-gestural modality. From this viewpoint, shared sign languages
use the structures they do, not because they are not ‘full-fledged’ or ‘immature’, but
rather because this particular set of structures results from diverse factors at work in
the particular setting of a given shared sign language. I conclude that shared sign lan-
guages, like the sign languages of a large Deaf community, are maximally adjusted to
the sociolinguistic setting in which they are used.

7. Literature
Amedofu, K. Geoffrey/Brobby, George W./Ocansey, Grace
1999 Congenital Non-syndromal Deafness at Adamarobe, an Isolated Ghanaian Village:
Prevalence, Incidence and Audiometric Characteristics of Deafness in the Village
(Part I). In: Journal of the Ghana Science Association 1(2), 63⫺69.
Aronoff, Mark
2007 In the Beginning Was the Word. In: Language 83, 803⫺830.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy
2008 The Roots of Linguistic Organisation in a New Language. In: Interaction Studies 9,
133⫺153.
Aronoff, Mark/Padden, Carol/Meir, Irit/Sandler, Wendy
2004 Morphological Universals and the Sign Language Type. In: Yearbook of Morphology
2004, 19⫺40.
Aulbers, Bernard J. M.
1959 Erfelijke Aangeboren Doofheid in Zuid-Holland. Delft: Waltman.
Bahan, Ben/Poole-Nash, Joan C.
1995 The Formation of Signing Communities: Perspectives from Martha’s Vineyard. In: Deaf
Studies IV, 1⫺26.
Bogaerde, Beppie van den
2006 Kajana Sign Language. Paper Presented at the Workshop on Sign Languages in Village
Communities, Max Planck Institute for Psycholinguistics, Nijmegen, April 2006.
Branson, Jan/Miller, Don/Marsaja, I Gede
1996 Everyone Here Speaks Sign Language, too: A Deaf Village in Bali, Indonesia. In:
Lucas, Ceil (ed.), Multicultural Aspects of Sociolinguistics in Deaf Communities. Wash-
ington, DC: Gallaudet University Press, 39⫺57.
Chapman, Anne M./Jacquard, Albert
1971 Un Isolat d’Amérique Central: les Indiens Jicaques du Honduras. In: Génétique et pop-
ulations. Hommage à Jean Sutter (Institut National d’Etudes Démographiques; Cahier
N°60). Paris: Presses Universitaires de France, 163⫺185.
Costeff, H./Dar, H.
1980 Consanguinity Analysis of Congenital Deafness in Northern Israel. In: American Jour-
nal of Human Genetics 32, 64⫺68.
Cumberbatch, Karen
2006 Country Signs. Paper Presented at the Workshop on Sign Languages in Village Commu-
nities, Max Planck Institute for Psycholinguistics, Nijmegen, April 2006.
David, John B./Edoo, Ben B./Mustaffah, J.F./Hinchcliffe, Ronald
1971 Adamarobe, a ‘Deaf’ Village. In: Sound 5, 70⫺72.
Dolman, David
1986 Sign Languages in Jamaica. In: Sign Language Studies 52, 235⫺242.
Doran, Edwin
1952 Inbreeding in an Isolated Island Community. In: Journal of Heredity 43, 263⫺266.
Ferreira-Brito, Lucinda
1984 Similarities and Differences in Two Brazilian Sign Languages. In: Sign Language Studies
42, 45⫺56.
Fox Tree, Erich
2009 Meemul Tziij: An Indigenous Sign Language Complex of Mesoamerica. In: Sign Lan-
guage Studies 9, 324⫺366.
Fraser, George R.
1976 The Causes of Profound Deafness in Childhood. Baltimore, MD: Johns Hopkins Univer-
sity Press.
Frishberg, Nancy
1975 Arbitrariness and Iconicity: Historical Change in American Sign Language. In: Lan-
guage 51, 696⫺719.
Frishberg, Nancy
1987 Ghanaian Sign Language. In: Van Cleve, John V. (ed.), Gallaudet Encyclopedia of Deaf
People and Deafness, Vol. 3. New York: McGraw-Hill, 78⫺79.
Goldin-Meadow, Susan
2005 Watching Language Grow. In: Proceedings of the National Academy of Sciences of the
United States of America 102(7), 2271⫺2272.
Groce, Nora Ellen
1985 Everyone Here Spoke Sign Language ⫺ Hereditary Deafness on Martha’s Vineyard.
Cambridge, MA: Harvard University Press.
Hanhart, Ernst
1962 Die genealogische und otologische Erforschung des grossen Walliser Herdes von re-
zessiver Taubheit und Schwerhörigkeit im Laufe der letzten 30 Jahre (1933⫺1962).
In: Archiv der Julius Klaus-Stiftung für Vererbungsforschung, Sozialanthropologie und Ras-
senhygiene 37, 199⫺218.
Jepson, Jill
1991 Two Sign Languages in a Single Village in India. In: Sign Language Studies 70, 47⫺59.
Johnson, Robert E.
1991 Sign Language, Culture & Community in a Traditional Yucatec Maya Village. In: Sign
Language Studies 73, 461⫺474.
Johnson, Robert E.
1994 Sign Language and the Concept of Deafness in a Yucatec Mayan Village. In: Erting,
Carol/Johnson, Robert E./Smith, Dorothy/Snider, Bruce (eds.), The Deaf Way: Perspec-
tives from the International Conference on Deaf Culture. Washington, DC: Gallaudet
University Press, 102⫺109.
Kakumasu, Jim
1968 Urubú Sign Language. In: International Journal of American Linguistics 34, 275⫺281.
Kegl, Judy/Senghas, Ann/Coppola, Marie
1999 Creation Through Contact: Sign Language Emergence and Sign Language Change in
Nicaragua. In: DeGraff, Michael (ed.), Language Creation and Language Change: Cre-
olization, Diachrony, and Development. Cambridge, MA: MIT Press, 179⫺237.
Kelley, Walter P.
2001 Pueblo Individuals Who are D/deaf: Acceptance in the Home Community, the Dominant
Society, and the Deaf Community. PhD Dissertation, University of Texas at Austin.
Kelley, Walter P./McGregor, Tony L.
2003 Keresan Pueblo Indian Sign Language. In: Reyhner, Jon/Trujillo, Octaviana V./Car-
rasco, Roberto L./Lockard, Louise (eds.), Nurturing Native Languages. Flagstaff, AZ:
Northern Arizona University Press, 141⫺148.
Kendon, Adam
1980 A Description of a Deaf-mute Sign Language from the Enga Province of Papua New
Guinea with Some Comparative Discussion. Part I: The Formational Properties of Enga
Signs. In: Semiotica 32, 1⫺34.
Kisch, Shifra
2004 Negotiating (Genetic) Deafness in a Bedouin Community. In: Cleve, John V. van (ed.),
Genetics, Disability, and Deafness. Washington, DC: Gallaudet University Press, 148⫺
173.
Kisch, Shifra
2006 The Social Context of Sign Among the Al-Sayyid Bedouin. Paper Presented at the
Workshop on Sign Languages in Village Communities, Max Planck Institute for Psycho-
linguistics, Nijmegen, April 2006.
Kisch, Shifra
2007 Disablement, Gender and Deafhood Among the Negev Arab-Bedouin. In: Disability
Studies Quarterly 27(4).
Kisch, Shifra
2008 “Deaf Discourse”: The Social Construction of Deaf in a Bedouin Community. In: Medi-
cal Anthropology 27(3), 283⫺313.
Kooij, Els van der
2002 Phonological Categories in Sign Language of the Netherlands. The Role of Phonetic
Implementation and Iconicity. PhD Dissertation, University of Leiden. Utrecht: LOT.
Kusters, Annelies
2012 Since Time Immemorial Until the End of Days ⫺ An Ethnographic Study of the Produc-
tion of Deaf Space in Adamorobe. PhD Dissertation, University of Bristol.
Lane, Harlan/Pillard, Richard/French, Mary
2000 Origins of the American Deaf World: Assimilating and Differentiating Societies and
Their Relation to Genetic Patterning. In: Emmorey, Karen/Lane, Harlan (eds.), The
Signs of Language Revisited. Mahwah, NJ: Lawrence Erlbaum, 77⫺100.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Majumdar, M. K.
1972 Preliminary Study on Consanguinity and Deaf Mutes. In: Journal of the Indian Medical
Association 58, 78.
Marsaja, I Gede
2008 Desa Kolok ⫺ A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen:
Ishara Press.
Martin, J. A. M./Bentzen, O./Colley, J. R./Hennebert, D./Holm, C./Iurato, S./Jonge, G. A. de/
McCullen, O./Meyer, M. L./Moore, W. J./Morgon, A.
1981 Childhood Deafness in the European Community. In: Scandinavian Audiology 10,
165⫺174.
Meir, Irit/Padden, Carol/Aronoff, Mark/Sandler, Wendy
2007 Body as Subject. In: Journal of Linguistics 43, 531⫺563.
Mengel, M. C./Koningsmark, B. W./Berlin, C. I./McKusick, Victor A.
1967 Recessive Early-onset Neural Deafness. In: Acta Oto-laryngologica 64, 313⫺326.
Meyer, Christian G./Amedofu, Geoffrey K./Brandner, Johanna/Pohland, Dieter/Timmann, Chris-
tian/Horstmann, Rolf D.
2002 Selection for Deafness? In: Nature Medicine 8, 1332⫺1333.
Mufwene, Salikoko S.
2001 The Ecology of Language Evolution. Cambridge: Cambridge University Press.
Nonaka, Angela M.
2004 The Forgotten Endangered Languages: Lessons on the Importance of Remembering
from Thailand’s Ban Khor Sign Language. In: Language in Society 33(5), 737⫺767.
Nonaka, Angela M.
2007 Emergence of an Indigenous Sign Language and a Speech/Sign Community in Ban Khor,
Thailand. PhD Dissertation, University of California, Los Angeles.
Nyst, Victoria
2007a A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, Uni-
versity of Amsterdam. Utrecht: LOT.
Nyst, Victoria
2007b Simultaneous Constructions in Adamorobe Sign Language (Ghana). In: Vermeerber-
gen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed Lan-
guages: Form and Function. Amsterdam: Benjamins, 127⫺145.
Nyst, Victoria
2008 Creating the Corpus Langue des Signes Malienne (CLaSiMa). Poster Presented at the
3rd Workshop on the Representation and Processing of Sign Languages: Construction &
Exploitation of Sign Language Corpora at LREC 2008, Marrakech, Morocco, May 2008.
Padden, Carol/Meir, Irit/Sandler, Wendy/Aronoff, Mark
2010 Against All Expectations: Encoding Subjects and Objects in a New Language. In:
Gerdts, Donna/Moore, J./Polinsky, Maria (eds.), Hypothesis A/Hypothesis B: Linguistic
Explorations in Honor of David M. Perlmutter. Cambridge, MA: MIT Press, 383⫺400.
Poole-Nash, Joan C.
1976 A New Vineyard. Edgartown, MA: Dukes County Historical Society.
Poole-Nash, Joan C.
1983 A Preliminary Description of Martha’s Vineyard Sign Language. Paper Presented at
the 3rd International Symposium on Sign Language Research, Rome, Italy, June 1983.
Sandler, Wendy/Meir, Irit/Padden, Carol/Aronoff, Mark
2005 The Emergence of Grammar: Systematic Structure in a New Language. In: Proceedings
of the National Academy of Sciences of the United States of America 102(7), 2661⫺2665.
Schmaling, Constanze
2000 Magannar Hannu ⫺ a Descriptive Analysis of Hausa Sign Language. Hamburg: Signum.
Schuit, Joke/Baker, Anne/Pfau, Roland
2011 Inuit Sign Language: A Contribution to Sign Language Typology. In: Linguistics in
Amsterdam 4, 1⫺31.
Secretan, J. P.
1954 De la surdi-mutité récessive et de ses rapports avec les autres formes de la surdi-mutité.
In: Archiv der Julius Klaus-Stiftung für Vererbungsforschung, Sozialanthropologie und
Rassenhygiene 29(1), 107⫺121.
Shuman, Malcolm K.
1980 The Sound of Silence in Nohya: A Preliminary Account of Sign Language Use by the
Deaf in a Maya Community in Yucatan, Mexico. In: Language Sciences 2, 144⫺173.
Shuman, Malcolm K./Cherry-Shuman, Mary M.
1981 A Brief Annotated Sign List of Yucatec Maya Sign Language. In: Language Sciences 3,
124⫺185.
Tervoort, Ben
1978 Bilingual Interference. In: Schlesinger, Izchak/Namir, Lila (eds.), Sign Language of the
Deaf. Psychological, Linguistic, and Sociological Perspectives. New York: Academic
Press, 169⫺239.
UNICEF
1985 UNICEF Report on Prevention of Deafness: Hearing Aids. London: UNICEF.
Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.)
2007 Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins.
Vos, Connie de
2009 What is the Point? A Semiotic Analysis of Pointing in Kata Kolok. Paper Presented at
the Max Planck Institute for Psycholinguistics, Nijmegen, May 2009.
Vos, Connie de
forthcoming Sign-spatiality in Kata Kolok: How a Village Sign Language of Bali Inscribes Its
Signing Space. PhD Dissertation, Max Planck Institute for Psycholinguistics, Nijmegen.
Washabaugh, William
1979 Hearing and Deaf Signers on Providence Island. In: Sign Language Studies 24, 191⫺
214.
Washabaugh, William
1980a The Manufacturing of a Language. In: Sign Language Studies 29, 291⫺330.
Washabaugh, William
1980b The Organization and Use of Providence Island Sign Language. In: Sign Language
Studies 26, 65⫺92.
Washabaugh, William
1981a Sign Language in its Social Context. In: Annual Review of Anthropology 10, 237⫺252.
Washabaugh, William
1981b The Deaf of Grand Cayman, B.W.I. In: Sign Language Studies 31, 117⫺134.
Washabaugh, William
1985 Language and Self-consciousness Among the Deaf of Providence Island, Colombia. In:
Stokoe, William C./Volterra, Virginia (eds.), SLR ’83. Proceedings of the 3rd Interna-
tional Symposium on Sign Language Research. Rome/Silver Spring: CNR/Linstok Press,
324⫺333.
Washabaugh, William
1986 Five Fingers for Survival. Ann Arbor, MI: Karoma Publishers.
Washabaugh, William/Woodward, James/DeSantis, Susan
1978 Providence Island Sign: A Context-dependent Language. In: Anthropological Linguis-
tics 20, 95⫺109.
Woll, Bencie/Ladd, Paddy
2003 Deaf Communities. In: Marschark, Marc/Spencer, Patricia E. (eds.), Oxford Handbook
of Deaf Studies, Language, and Education. Oxford: Oxford University Press, 151⫺163.
Woodward, James
1979 The Selflessness of Providence Island Sign Language: Personal Pronoun Morphology.
In: Sign Language Studies 23, 167⫺174.
Woodward, James
1982 Beliefs About and Attitudes Towards Deaf People and Sign Language on Providence
Island. In: Woodward, James (ed.), How You Gonna Get to Heaven if You Can’t Talk
with Jesus ⫺ On Depathologizing Deafness. Silver Spring, MD: T.J. Publishers, 51⫺74.
Woodward, James
1987 Providence Island Sign Language. In: Van Cleve, John V. (ed.), Gallaudet Encyclopedia
of Deaf People and Deafness, Vol. 3. New York: McGraw-Hill, 103⫺104.
Woodward, James
2003 Sign Languages and Deaf Identities in Thailand and Viet Nam. In: Monaghan, Leila/
Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be
Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet Univer-
sity Press, 283⫺301.

Victoria Nyst, Leiden (The Netherlands)

25. Language and modality


1. Introduction
2. Cautionary notes
3. Modality factors that may affect the production of signs and words
4. Modality factors that may affect the perception of signs and words
5. Language modality and linguistic resources
6. Modality effects in child language development
7. Conclusions
8. Literature

Abstract
Human language can be expressed in two transmission channels, or modalities: the vis-
ual-gestural modality of sign languages and the oral-aural modality of spoken languages.
This chapter examines ways in which the visual-gestural and oral-aural modalities may
shape linguistic organization. Relevant properties of production and perception in the
two modalities are reviewed. Those properties may constrain linguistic organization,
such that spoken languages favor sequential word-internal structure (e.g., affixal mor-
phology), whereas sign languages favor simultaneous word-internal structure (e.g., non-
concatenative morphology). The two modalities also offer different resources to signed
and spoken languages; for sign languages, those resources include the transparent space
in which signs are produced and the visual-gestural channel’s capacity for iconic repre-
sentation. Lastly, possible modality effects in child language development are examined;
those effects may arise from the differing articulatory constraints in sign and speech, the
visual attentional demands of signing, and the iconic resources of the visual-gestural mo-
dality.

1. Introduction

How do we map the range of possibilities that are allowed by the human language
capacity? Spoken languages share certain universal properties: hierarchical phrase
structure, structure dependency, and duality of patterning, among others. Other proper-
ties vary considerably across spoken languages: phonological inventory; syllable struc-
ture (constraints on consonant clusters, allowable codas, etc.); morphological structure
(prefixation vs. suffixation vs. non-concatenative morphology); basic word order; and
so on. If our sample of human languages is limited to those that are spoken, we can’t
know whether we have the whole terrain of the human language capacity in view, or
whether key landmarks are out of sight. On this account, the oral-aural modality of
spoken languages is a kind of filter that may allow the expression of only a subset of
the languages that would be consistent with the human language capacity. To date, our
understanding of the human language capacity has been significantly confounded by
the fact that so much of our knowledge of linguistic universals and of linguistic varia-
tion has been derived from the analysis of spoken languages (Meier 2008a).
Before we examine the ways in which modality may shape linguistic organization,
note first that many crucial properties of linguistic organization are common to lan-
guages in the two major modalities. For example, signed and spoken languages exhibit
duality of patterning, such that morphemes in signed and spoken languages are built
of meaningless units of form. In sign languages, these meaningless units of form are
phonological specifications of handshape, movement, place of articulation, and orienta-
tion (Stokoe 1960, and many subsequent references; see also chapter 3 on sign lan-
guage phonology). Like spoken languages, sign languages differ in their inventories of
phonological forms, such that ⫺ for example ⫺ certain handshapes are possible in
some sign languages, but not in others. A frequently-cited example is this: a fisted
handshape with just an extended middle finger is a possible handshape in certain sign
languages (e.g., Australian Sign Language (Auslan): Johnston/Schembri 2007), but not
in others, such as American Sign Language (ASL). Slips of the hand provide evidence
that meaningless sublexical units are important in the planning of signed utterances
(ASL: Klima/Bellugi 1979; German Sign Language: Hohenberger et al. 2002; see also
chapter 30 on language production), just as phonemes are important in the planning
of spoken utterances. Signed and spoken languages have a variety of means by which
they can augment their lexicons; those means include borrowing (Padden 1998), deri-
vational morphology (Klima/Bellugi 1979; Supalla/Newport 1978), and compounding
(Klima/Bellugi 1979). Syntactic rules in sign languages must refer to notions such as
grammatical subjects, just as in spoken languages (Padden 1983). Distributional evi-
dence shows that subordinate clauses must be distinguished from coordinate clauses
(Padden 1983). These properties, and many others, are hallmarks of linguistic organiza-
tion in sign and speech. A very important conclusion follows: these linguistic properties
are not unique products of the constraints of the speech modality or of the resources
which that modality affords.
In the balance of this chapter, I examine the properties of the modalities (or trans-
mission channels) in which languages are produced, perceived, and learned; I discuss
the ways in which the properties of the transmission channels ⫺ and the resources
offered by the transmission channels ⫺ may shape linguistic structure and first lan-
guage acquisition. Sign languages offer a distinct perspective on the human language
capacity. We have known now for over 40 years that the oral-aural modality is not the
only possible channel for the expression and perception of language. In recent years,
we have gained a better understanding of sign languages as a class; no longer is our
understanding of them largely restricted to just one such language (i.e., ASL). There
may even be a third language modality: the tactile-gestural modality of deaf-blind
signing (although to date we know of no independently-evolved tactile-gestural lan-
guages). I focus here on the two major language modalities: the oral-aural modality of
spoken languages and the visual-gestural modality of the sign languages of the Deaf
(see chapter 23, Manual Communication Systems: Evolution and Variation, for sign
languages in the tactile modality). I argue that these language modalities may place
different constraints upon the grammars of human languages and may offer different
resources to those languages.
Spoken and sign languages are produced by very different articulators and are per-
ceived by very different sensory organs (see sections 3 and 4). Yet despite obvious
differences between the two language modalities, the two channels are not wholly
different: the speech and sign signals are both broadcast signals: sender and addressee
need not be in physical contact (unlike the signer and addressee in a tactile-gestural
conversation). Speech and sign articulation each demand the coordination of oscilla-
tory biphasic movements (e.g., MacNeilage/Davis 1993). Moreover, skilled perform-
ance in any domain may invoke common solutions to the problem of serializing behav-
ior (Lashley 1951), such that rhythmic and hierarchic structure may be likely outcomes,
in linguistic and non-linguistic action, in sign or in speech.
Nonetheless the differences between the oral-aural and visual-gestural modalities
are impressive. Some of those differences (e.g., apparent differences in rate of produc-
tion) may have consequences for how phonological and morphological structures are
organized (e.g., the relative balance of simultaneous vs. sequential structure). The con-
straints of the oral-aural modality may force spoken languages towards highly sequen-
tial structures; the constraints of the visual-gestural modality may favor relatively si-
multaneous structures. The differences between the two language modalities may also
(as discussed in section 5) offer different resources to individual languages in these
respective modalities, such as the differing potentials for iconic representation in sign
versus speech and the unique availability of spatial resources to sign languages.
Lastly, in section 6, I will consider the developmental constraints on children acquir-
ing signed and spoken languages. Certain motoric tendencies may be shared across the
modalities (e.g., an infant tendency toward repeated movement patterns). Other mo-
toric tendencies in development may be unique to a particular modality (e.g., a tend-
ency toward proximalized movement in early sign development). There may also be
interesting differences between the two modalities in the attentional constraints upon
the child. In the oral-aural modality, the speaking child can look at a toy and can also
listen while the mother labels it; the child need not shift his gaze. In the visual-gestural
modality, the child often must shift his attention from that toy to the mother’s visually-
presented label. The differing attentional demands of the two modalities may have
consequences for how signing mothers interact with their children and for how vocabu-
lary learning proceeds in the child.
Other potential effects of modality will not be discussed here: for example, I will
not examine the literature on whether the spatial organization of sign languages leads
to greater right hemisphere use than in spoken languages (for a review see Emmorey
2002; see also chapter 31, Neurolinguistics).

2. Cautionary notes

Before we begin, two cautionary notes are in order. Any generalization about the
properties of sign languages in general must be qualified by the fact that our sample
size is small; there are many fewer sign languages than spoken ones and to date only
a small number of sign languages have been well-described in the linguistics literature.
Although this problem is less severe than in the 1970s and 1980s when the research
literature on sign languages largely focused on just one language (ASL), it remains the
case that some infrequent properties of spoken languages may be unattested in sign
languages because of the small sample of sign languages with which we are working.
A second, more interesting issue pertains to the relative youth of sign languages.
Sign languages in general are young languages; languages such as ASL and French
Sign Language can be traced back to the 18th century. An extreme case is Nicaraguan
Sign Language, whose history can only be pushed back to the late 1970s (Kegl et al.
1999; Polich 2005). Young languages, whether spoken or signed, may be relatively uni-
form typologically (Newport/Supalla 2000), although young spoken languages (e.g.,
Creole languages) may evince different properties than young sign languages (Aronoff
et al. 2005; see also chapter 36 on creolization). Moreover, the demographics of signing
communities may serve to keep those languages young. Because the vast majority of
deaf children are born to hearing, non-signing parents, sign languages may more closely
reflect the linguistic biases of children than do spoken languages (Meier 1984). For
most deaf children, early language or communicative development may proceed in the
absence of effective adult signing models (Goldin-Meadow/Mylander 1990; Singleton/
Newport 2004; Senghas/Coppola 2001; see also chapter 26, Homesign).

3. Modality factors that may affect the production of signs and words

The oral and manual articulators differ in impressive ways. Speech articulation is tightly
coupled to respiration; the build-up of subglottal air pressure drives the periodic open-
ing and closing of the vocal folds. This is the internal sound source that energizes the
column of air above the vocal folds, creating the acoustic signal in which spoken lan-
guage is encoded. In contrast, sign languages are encoded in reflected light emanating
from an external light source. Compare now the manual and oral articulators: the
manual articulators are paired, unlike the oral articulators. The manual articulators
comprise a series of jointed segments of the arms and hands, whereas the tongue has
no skeletal structure. The manual articulators are relatively massive and must execute
relatively long movement excursions, whereas the oral articulators are comparatively
small and execute short movements. I’ll discuss a few of these points in greater detail
below.
Nonetheless there are commonalities between the articulatory systems underlying
sign and speech. The motor control systems for sign and speech confront the problems
that Lashley (1951) observed in serializing behavior. In both language modalities,
rhythmic and hierarchic structure may allow solutions to the problems he observed;
one element of rhythmic structure in both modalities may be a syllable-like unit. More-
over, in speech and in sign, the articulators can only move so far, so fast. In fast rates
of signing and speaking, target places of articulation may not be achieved, leading to
the phenomenon of undershoot (Mauk et al. 2008; Tyrone/Mauk 2010). Undershoot
may be one source of phonetic assimilation.

3.1. The sign articulators are relatively massive and often execute
long movement excursions

Bellugi and Fischer (1972, published in revised form in Klima/Bellugi 1979) asked
hearing native signers (so-called CODAs or children of deaf adults) to tell a story in
ASL and in English. What they found was that the rate of signing, as measured by the
number of signs per second, was significantly lower than the rate at which these same
subjects produced English words per second. Yet, paradoxically, the rate at which prop-
ositions were transmitted was equivalent in ASL and English. Interestingly, Bellugi
and Fischer reported that an artificial sign system (Signing Exact English) that more
closely matches the typological properties of English shows a somewhat faster rate
of signing than ASL, but a significantly slower rate of propositions per second than
in ASL.
Why is the rate of signing apparently slower than the rate of speaking? One expla-
nation might lie in the fact that the sign articulators are much more massive than the
oral articulators; compare the size of the arms and hands to that of the jaw and tongue.
Large muscle groups in the shoulder and arm are required to overcome inertia and
move the sign articulators in space. The movement excursions that the sign articulators
execute can be long; for example, the ASL sign man moves from contact on the signer’s
forehead to contact at the center of the signer’s chest. In contrast, the movement
excursions executed by the tongue tip are much shorter. Transition movements be-
tween signs can also entail long movement excursions. In a sign sentence with the
successive ASL signs king and sick, the dominant hand executes a long transition
between the final location of king at the waist and the location of sick at the forehead.
Comparisons of the rate at which lexical items are produced in ASL versus English
do not reveal whether there is a difference in articulatory rate between signed and
spoken languages generally. Languages may differ in the length of words, as measured
by the number of syllables that typically comprise a word in a given language. Wilbur
and Nolen (1986) argued that, given their methods of identifying sign syllables, the
rate of production of sign syllables in various samples drawn from ASL was roughly
comparable to the rate at which spoken syllables are produced. On their criteria, a
change in movement direction in reduplicated signs and in signs with bidirectional
movement (but not in signs with circular movement) indicated the end of a syllable;
they report a mean sign syllable length of 294 ms (SD = 32 ms). They suggest that
mean syllable duration in English is roughly 250 ms. More recently, Dolata et al. (2008)
estimate a rate of about 5 syllables per second in adult speech; they argue that spoken
syllables are shorter in duration than signed ones. On their view, a spoken syllable is
strongly associated with a single open-close cycle of the mandible (as in the monosyl-
labic English word bad). One issue in comparing syllable production in sign and speech
is this: it’s unclear whether the sign and speech studies discussed here are probing
comparable articulatory units.
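To see why Dolata et al.’s estimate implies shorter spoken syllables, it helps to convert their rate into a per-syllable duration; the following back-of-the-envelope conversion is mine, not a calculation reported in the studies themselves:

\[
\text{mean syllable duration} \approx \frac{1}{\text{rate}} = \frac{1\ \text{s}}{5\ \text{syllables}} = 200\ \text{ms},
\]

which is indeed shorter than the mean sign syllable duration of 294 ms reported by Wilbur and Nolen.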
How did Bellugi and Fischer explain their paradoxical finding that sign rate was
slower than word rate in English but that proposition rates were comparable across
the languages? They suggest that relatively simultaneous organization may be favored
in the phonology, morphology, and syntax of ASL and other sign languages. So, sequen-
tial affixation may be disfavored, whereas the contrastive use of the sign space may be
advantaged in sign morphology. Likewise, the use of non-manual facial expressions
that overlay manual signs may be favored over the use of the separate function words
typical of English. On this view, the articulatory constraints and resources of the two
language modalities push signed and spoken languages in different typological direc-
tions, so that sequential, affixal morphology is favored in spoken languages (although
non-concatenative morphology is certainly attested), whereas non-concatenative, lay-
ered morphology is favored in sign (although affixation appears on occasion; see also
chapter 5 on word formation).

3.2. The sign articulators are paired

The manual articulators, unlike their oral counterparts, are paired: we have two arms
and hands. This poses an interesting and unique issue for the phonological analysis of
sign languages, inasmuch as the non-dominant hand can ⫺ as Stokoe (1960) observed
early in the history of work on sign languages ⫺ serve as both an articulator of signs
and as a place of articulation. For a very useful discussion of this issue, see Sandler
and Lillo-Martin (2006; see also chapter 3, Phonology). Lexical signs may be one-
handed (e.g., the ASL sign yellow), two-handed with both hands active (the ASL sign
play), or two-handed with the non-dominant hand being a static “base hand” that
serves as a place of articulation for the dominant hand (one version of the ASL sign
that). In the monomorphemic signs of ASL and other sign languages, there are signifi-
cant restrictions on the movement and handshape of the non-dominant hand, irrespec-
tive of whether the non-dominant hand is active (as in play) or passive (as in that);
see Battison (1978). Interestingly, the non-dominant hand has greater independence in
linguistic domains that extend beyond the phonological word, as demonstrated in a
number of sign languages by so-called “classifier” verbs. This argument is made partic-
ularly clearly by data on cliticization of points in Israeli Sign Language (Sandler/Lillo-
Martin 2006).
In development, the child must gain the ability to inhibit the non-dominant hand when
the other is moving (see section 6.2).

3.3. The manual articulators are jointed

Unlike the tongue which has no skeletal structure, the arms and hands comprise a set
of jointed segments. These segments can be arranged on a scale from joints that are
proximal to the torso ⫺ that is, close to the torso ⫺ to joints that are distal from the
torso, as is illustrated in (1).

(1) Proximal to Torso <-------------------------------------------------> Distal from Torso
    shoulder......elbow......radioulnar......wrist......1st-knuckles......2nd-knuckles

At each of these joints, signs can be identified whose movement is largely restricted to
action at that joint (for ASL: Brentari 1998; Meier et al. 2008; for Sign Language of
the Netherlands (NGT): Crasborn 2001). Whether action at a particular joint should
be specified in the phonological representation of a sign is an open question (Brentari
1998; Crasborn 2001). Proximalization or distalization of movement may be character-
istic of particular sign registers: thus, proximalization may be typical of the enlarged
signing that parents address to their children (Holzrichter/Meier 2000), whereas whis-
pered signing may tend to be distalized. In the acquisition of ASL and other sign
languages as first languages, deaf infants show a tendency to proximalize movement in
their production of signs (Meier et al. 2008).

3.4. Spoken syllables, but not signed syllables, may be organized around
a single oscillator

Human action is characterized by sequences of oscillatory movements, whether in
walking, chewing, speaking, or signing. Rhythmic units such as the syllable may, in both
speech and sign, be a way by which such movements are organized. For example, by
positing sublexical, syllabic units in sign, generalizations about handshape sequences
in ASL can be described (Brentari 1998).
However, the motor underpinnings of syllables appear quite different in sign and
speech. MacNeilage and Davis (1993; see also MacNeilage 2008) have argued that the
open-close cycle of the mandible provides the ‘frame’ around which spoken syllables
are organized. Therefore, on their view, there is a single predominant oscillator in the
syllables of spoken languages. In ASL, signs can be identified whose movement is
restricted to each of the six joints of the arm and hand specified in (1) (Meier et al.
2008): for example, committee (shoulder), day (shoulder ⫺ specifically longitudinal
rotation of the arm at the shoulder), thank-you (elbow), black (radioulnar joints of
forearm), yes (wrist), bird (first knuckles), and bug (second knuckles). For illustrations
of these signs, see Figure 25.1.

Fig. 25.1: Examples of ASL signs articulated at the joints of the arm and hand (panels, from
most proximal to most distal joint: Shoulder: committee; Shoulder Twist: day; Elbow:
thank-you; Forearm Twist: black; Wrist: yes; K1: bird; K2: bug). The abbreviations
K1 and K2 indicate the first and second knuckles respectively. Figure reproduced with
permission from Meier et al. (2008). Illustration copyright © Taylor & Francis Ltd.
(http://www.tandf.co.uk/journals).

No one oscillator underlies the different sign syllables
that constitute these signs. On this view, signed syllables might have a more varied
articulatory basis than do spoken syllables.
An interesting research question is this: the different oscillators mentioned above
have very different masses. What effects do differing articulator sizes have on articula-
tory rate? Are there differences in the duration of signs made at smaller, more distal
segments of the arm (e.g., bug at the second knuckles) versus signs made at more
massive, more proximal segments (e.g., thank-you or sick at the elbow)?

4. Modality factors that may affect the perception of signs and words

The oral articulators are largely hidden from view, unlike the manual articulators which
move in a transparent three-dimensional space. In speech perception, hearing individu-
als will ⫺ under appropriate conditions ⫺ use visual information about the configura-
tion of the oral articulators; a classic example is the McGurk effect (McGurk/MacDo-
nald 1976). However, the fact that vision provides the addressee with quite limited
information about the movement and position of the oral articulators means that
speech reading is not sufficient for understanding spoken conversations. Instead, the
object of perception in understanding speech is a highly-encoded acoustic signal.
In contrast, the light reflected from a signer’s arm, hands, and face obviously does
provide sufficient information for understanding signed conversations. In sign, unlike
in speech, it is the articulators themselves ⫺ that is, the movement and position of the
hands (and the other sign articulators) ⫺ that constitute the object of perception in
understanding sign. In speech, the addressee need not have the speaker in view, but
the addressee of a signed utterance must have the signer’s arms and hands in view.
Precise comparisons of vision and audition are difficult; it makes little sense to
compare the perception of color with the perception of pitch. As it turns out, however,
each sensory modality responds to spatial and temporal information (Welch/Warren
1986; reviewed in Meier 1993). The human visual system has much greater spatial
acuity than does the auditory system; locations that differ by only a minute of arc can
be discriminated visually, whereas auditory stimuli must be separated by a degree of
arc. In contrast, the auditory system shows better discrimination of the temporal dura-
tion of stimuli and in the perception of temporal rate. The visual channel also has
greater bandwidth ⫺ greater information-carrying capacity ⫺ than does the auditory
channel; witness the significant bandwidth required to carry video signals to
our homes, as opposed to the limited capacity of a standard voice-only telephone line.
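Put in comparable units, the spatial-acuity contrast just mentioned amounts to roughly a sixty-fold advantage for vision; this is a simple inference from the figures cited above, not a claim made in the cited sources:

\[
\frac{1\ \text{degree of arc}}{1\ \text{minute of arc}} = \frac{60'}{1'} = 60.
\]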
The differing properties of the human auditory and visual systems suggest that the
human sensorium is well equipped to process a temporally dynamic speech signal in
which information is arrayed sequentially (see, for example, Pinker/Bloom 1990). In
contrast, the human visual capacity may be consistent with the processing of a visual
signal that makes relatively fine-grained spatial distinctions and that arrays information
in a relatively simultaneous fashion. McBurney (2002) has made the interesting sugges-
tion that the visual-gestural modality allows sign language access to a transmission
channel (a ‘medium’ in her terminology) in which information can be arrayed in four
dimensions of space and time.
The sign and speech modalities may differ in subtle ways with respect to the feed-
back that the speaker/signer receives from his/her own productions. For Hockett
(1960), the availability of feedback is a fundamental design feature of human language;
the speaker can perceive ⫺ Hockett says ⫺ everything of linguistic relevance in the
speech he or she produces. Feedback from one’s speech may help the speaker to match
his/her output against a stored representation. Speakers receive auditory feedback
from the production of all words and, in non-noisy environments, from all phonemes
in the inventory of the language. Note, however, that auditory feedback through air
conduction is not the only source of feedback available to speakers. Speakers also
receive auditory feedback through bone conduction, which provides better information
on low frequency sounds than on high frequency ones. For speech, like sign, proprio-
ceptive feedback is available and may be used by speakers in monitoring their own
production (see Postma (2000) for a review of the sources of information that may
enable error correction). Some oral articulators, notably the velum which controls the
passage of air into the nasal cavities, provide little proprioceptive feedback (e.g., Stev-
ens et al. 1976). Consequently, providing therapy for hypernasality is difficult.
The situation is complicated in the visual-gestural modality (e.g., Emmorey et al.
2009). The availability of visual feedback varies by sign type. For those signs that con-
tact such places of articulation as the upper face (e.g., the ASL sign deer) or the top
of the head (e.g., hat), the signer cannot see his or her hands contact the target place
of articulation. Moreover, signers rarely look at their own hands, so peripheral vision
may be the primary source of visual feedback with respect to sign production. Crucially,
for certain aspects of the sign signal ⫺ notably grammaticized facial expressions and
oral movements ⫺ no visual feedback can be available. Limitations to visual feedback
may be overcome by the availability of proprioceptive feedback regarding the move-
ment and position of the sign articulators and tactile feedback regarding contact be-
tween the hands and with the body. The fact that visual feedback is conditioned by a
given sign’s articulatory properties raises the possibility that the relative availability of
such feedback from a signer’s own productions might affect how accurately young
children produce signs; see Orlansky and Bonvillian (1988) for a suggestion along
these lines.
Proprioceptive feedback ⫺ not visual feedback ⫺ may be the crucial source of
feedback to the signer about his/her sign production. Emmorey et al. (2009) observed
that blind signers can use proprioception to correct errors in their signing. In sum, it
appears that, in sign more than in speech, distinct sensory channels are used for the
transmission of language to the addressee as opposed to the monitoring of one’s own
language output. However, in both language modalities, it seems likely that the
speaker/signer integrates feedback from more than a single source. Each source will
provide different kinds of information.

5. Language modality and linguistic resources


The two language channels differ in the resources that they respectively make available
to signed and spoken languages. For example, various properties of the visual-gestural
modality mean that iconicity and spatial contrasts are rich resources for sign languages.
As we have noted, the hands move in a transparent three-dimensional space. The
hands are paired and can simultaneously sketch two sides of an object, or can indicate
the relative locations of a figure (e.g., a car) and its ground (e.g., a tree). Because the
hands are visible, they themselves are available to symbolize manual actions (e.g., Taub
2001). However, as we shall see, individual sign languages may not avail themselves of
all the resources that the visual-gestural modality would seem to offer.

5.1. Iconicity and arbitrariness

In certain signs and words, the pairing of form and meaning is non-arbitrary; that is,
the pairing of form and meaning is motivated. The iconic ASL sign cat looks like the
signer is sketching the whiskers of the animal; see Figure 25.2. In contrast, the form of
the English word cat is completely arbitrary in shape; nothing about the form of this
word resembles what it means. The apparently greater frequency of iconic signs than
of iconic words may yield more opportunities for iconic effects on the grammar, acqui-
sition, and processing of sign languages than is true in spoken languages. Signs may
also be indexically motivated; this type of motivation is characteristic of signed pro-
nouns and agreeing verbs that point to their referents. In such signs, the location of
the referent is reflected in the location and direction of movement of the sign.
Onomatopoetic words certainly exist in spoken languages (e.g., the English word
meow for a cat’s vocalization), but they seem sufficiently marginal that Saussure (1916)
could maintain that words are fundamentally arbitrary in shape and Hockett (1960)
could later claim arbitrariness as a design feature of human language. However, the
representational capacities of the visual-gestural modality and the incidence of moti-
vated forms in sign force us to reconsider the role of arbitrariness in language. A
significant fraction of the ASL vocabulary displays some iconicity; moreover, there is
little reason to think that the language systematically favors arbitrary forms over iconic
ones. What is instead important in the sign vocabulary, just as in spoken language
vocabularies, is that the pairings of forms and meaning are conventional. The result is
that, as demonstrated by Klima and Bellugi (1979), the sign for ‘tree’ is distinct in
three different sign languages, yet in all three languages the sign is iconic.

Fig. 25.2: An iconic sign in ASL: the noun cat.
Fig. 25.3: Two non-iconic signs in ASL: the signs mother and curious.

So, the ASL sign tree seems to represent the branches of a tree swaying in the wind, whereas the
Danish sign suggests the crown and trunk of a tree and the Chinese sign suggests the
columnar shape of some trees (or of tree trunks). Thus, there are three different forms
in three different sign languages, yet each form is motivated in its own fashion and
each is conventional within its language.
In sum, the pervasive arbitrariness of spoken language vocabularies is likely a con-
sequence of the limited representational capacities of the speech modality. What is
crucial to the design of spoken and sign vocabularies is that form-meaning pairings are
conventional. Moreover, all languages ⫺ signed or spoken ⫺ must allow arbitrary signs
and words in order to express abstract concepts; thus, for example, ASL signs such as
mother and curious (see Figure 25.3) have abstract meanings and are fundamentally
arbitrary in shape.

5.2. Crosslinguistic similarity in sign vocabularies

Unrelated sign languages may display relatively high rates of similar signs across their
lexicons (Kyle/Woll 1985; Guerra Currie et al. 2002; Padden 2011). In analyses of small
samples of signs from Mexican, Spanish, French, and Japanese Sign Languages, Guerra
Currie et al. found that 23 % of the Mexican and Japanese signs were similar on their
criteria (i.e., a pair of signs were judged to be similar if their meanings were similar
and if they shared the same phonological values for two of the three major parameters
of sign formation). Number signs, body part signs, and personal pronouns were ex-
cluded from their analysis; body part names and pronouns were excluded because they
are likely to be pointing signs. There is no known historical link between Mexican and
Japanese Sign Language, so the high rate of similarity in signs for concepts such as
‘book’, ‘balloon’, and ‘fire’ likely reflects shared iconicity. Similarities in the lexicons
of unrelated sign languages complicate analyses of the historical ties amongst sign
languages; analysts will likely have to exclude certain substantial classes of signs from
their analyses (see chapter 35, Language Contact and Borrowing). For recent discus-
sion of methodological issues in crosslinguistic comparisons of sign vocabularies, see
Woodward (2011) and Meier (in press).
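The two-of-three-parameters criterion described above lends itself to a simple decision rule. The following Python fragment is an illustrative sketch of such a rule, not Guerra Currie et al.’s actual coding procedure; the dictionary format and the example entries are hypothetical:

# Sketch of a Guerra Currie et al.-style similarity judgment: two signs count
# as similar if their meanings match and they agree on at least two of the
# three major formational parameters (handshape, location, movement).
# Hypothetical data format; not the authors' actual coding scheme.

PARAMETERS = ("handshape", "location", "movement")

def similar(sign_a, sign_b):
    if sign_a["meaning"] != sign_b["meaning"]:
        return False
    shared = sum(sign_a[p] == sign_b[p] for p in PARAMETERS)
    return shared >= 2

# Hypothetical entries for the concept 'book' in two unrelated sign languages.
book_lsm = {"meaning": "book", "handshape": "B", "location": "neutral", "movement": "open"}
book_nihon = {"meaning": "book", "handshape": "B", "location": "neutral", "movement": "open"}

print(similar(book_lsm, book_nihon))  # True: meanings match and all three parameters agree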

5.3. Iconicity and phonological structure

As already noted, sign languages display duality of patterning. William Stokoe’s nota-
tion system was the first systematic attempt to represent the phonological structure of
signs. However, his notation system could not fully describe the forms of ASL signs.
Although certain unspecified properties might be predictable (e.g., the occurrence of
A- (/) vs. S- (4) handshapes in non-initialized signs), other phonetic properties were
stipulated in Stokoe, Casterline, and Croneberg (1965). For example, Stokoe et al. (e.g.,
pp. 196 and 197) note parenthetically that various signs pertaining to the heart or to
emotions are articulated in the heart region. The heart region is not a phonologically
distinctive location in Stokoe’s system; on his analysis these signs are phonologically
specified as being articulated on the trunk. As van der Hulst and van der Kooij (2006)
note, one way to handle problems such as this would be to add additional place of
articulation values to the phonological model. However, they instead propose that
some place values are restricted to signs that share iconic and/or semantic properties.
Thus, in NGT, the only signs articulated in the lower part of the trunk refer to body
parts in that region of the body or to articles of clothing that cover that region. Van
der Hulst and van der Kooij conclude that, because of the limited distribution of this
place of articulation, the lower trunk is not a contrastive value within the phonology
of NGT, although signs with this place must be described in any phonetic analysis of
the language (also cf. van der Kooij 2002). Note that similar phenomena also occur in
spoken languages. For example, the cluster [vr] is restricted in English (but not French)
to onomatopoeia, as in vroom-vroom (‘the sound of a motorcycle’).

5.4. Iconicity and morphological structure

It has long been observed that morphological processes in sign languages sometimes
appear insensitive to the iconicity of the lexical sign being modulated. For example,
the intensive form of the ASL sign slow is produced with a short, fast movement, not
with a slow movement of long duration (Klima/Bellugi 1979). However, the fast, sharp
movement of the intensive morpheme itself seems motivated.
Aronoff et al. (2005) have argued that the resources for iconic representation avail-
able within the visual-gestural modality allow young sign languages to rapidly develop
rich morphological systems that would not be expected in young spoken languages
(that is, Creoles). The fact that sign languages ⫺ with the exception of some so-called
village sign languages (Sandler et al. 2005; see also chapter 24) ⫺ seem to be highly
uniform with respect to verb agreement, classifier constructions, and aspectual inflec-
tions and that this morphology is simultaneously-organized is attributed by Aronoff et
al. to the hypothesis that sign languages everywhere have drawn upon the same resour-
ces for iconicity. In contrast, they argue that the sporadic, sequential affixal morphol-
ogy that has been identified in sign languages is language-particular, takes time to
develop, and does not draw upon iconicity.
Certain properties of verb agreement in sign languages are typologically unusual;
see Lillo-Martin and Meier (2011) for recent discussion. For instance, sign languages
show a bias toward object agreement, such that object agreement is required for many
verbs, whereas subject agreement is either optional or impossible in those same verbs.
In contrast, spoken languages strongly favor subject agreement over object agreement.
Meir et al. (2007) argue that regularities in the lexical iconicity of agreement verbs
explain this unexpected property. Specifically, in these verbs the signer’s body repre-
sents the subject argument (and not any particular thematic role).

5.5. The use of space

It is in the use of the signing space that we find the most profound modality effects on
grammatical organization in sign languages. Within the monomorphemic sign vocabu-
lary of nouns, verbs, and adjectives (that is, within the so-called ‘frozen vocabulary’),
spatial distinctions are not contrastive, but in pronouns, in verb agreement, and in the
so-called classifier systems, spatial distinctions are crucial to the tracking of reference.
The spatial displacement of nouns made in neutral space can be used to establish
referential loci; for a recent discussion of this phenomenon in Quebec Sign Language
(LSQ), see Rinfret (2009). The signing space in sign languages is also used to describe
space. Emmorey (2002) has observed that ASL signers map the spatial arrangement
of the objects that they are describing onto the spatial arrangements of their hands,
rather than using a vocabulary of prepositions akin to those of English (e.g., prepositions
signifying spatial arrangement such as in, on, near, above, or below). However, signers
have choices in how they map the described space onto the signing space, inasmuch as
signers may adopt different frames of reference for describing space and different
perspectives on the described space (see chapter 19, Use of Sign Space, for details).
Deictic pronouns in sign languages are generally pointing signs to locations associ-
ated with the referents of those pronouns; such points are not iconic but are instead
indexically-motivated. Whereas spoken languages show enormous variation in the pho-
nological form of deictic pronouns and substantial crosslinguistic variation in the se-
mantic categories marked by those pronouns, there appears to be impressive typologi-
cal uniformity across sign languages in the form and meaning of deictic pronouns
(McBurney 2002; see also chapter 11). Note, however, that there is nonetheless some
crosslinguistic variation (Meier 1990; Meier/Lillo-Martin 2010). For example, whereas
the first person deictic pronoun in most sign languages is a point to the center of the
signer’s chest, Japanese Sign Language (NS; Japan Sign Language Research Institute
1997) and Taiwan Sign Language (Smith/Ting 1979) also allow a point to the signer’s
nose as the first person pronoun.
Spatial contrasts are also meaningful in the set of signed verbs that have been vari-
ously referred to as directional, pointing, indicating, or agreeing verbs. Verbs such as
the ASL verb ask are directional in that the hand moves between, or is oriented to-
wards, locations associated with the verb’s subject and object. This process has been
considered a kind of verb agreement, in that such verbs are seen as being marked for
features of the subject and object (Padden 1983). For other researchers, this process
has been viewed as a form of pointing or indicating, and therefore as being fundamen-
tally gestural (Liddell 2000). Whatever the appropriate analysis, it is clear that the
phenomenon of verbs moving between locations associated with their referents is lin-
guistically constrained and has important syntactic consequences (see also chapter 7,
Agreement). Agreement applies to specific verbs and does so in ways that are some-
times idiosyncratic (e.g., the ASL verb tell allows only object agreement, whereas the
similar verb inform allows both subject and object agreement). The set of agreement
verbs varies across languages (e.g., the NS verb like allows agreement in one major
dialect but not in another, as noted in Fischer 1996). All conventional sign languages
examined to date have a class of non-agreeing, plain verbs (Padden 1983), but those
languages vary in how subject and object are marked when the main verb is a plain
one. In ASL and British Sign Language (BSL), word order is used, but in some sign
languages (e.g., Taiwan Sign Language: Smith 1990; NGT: Bos 1994; Brazilian Sign
Language: Quadros 1999) an auxiliary verb is introduced to carry agreement (see chap-
ter 10 for discussion).
The fundamental point is this: the spatial resources available to sign languages yield
relative uniformity in the pronominal and agreement systems of sign languages, albeit
with interesting linguistic variation. However, the use of space is not inevitable in sign
languages: not only are there artificial sign systems that do not use space contrastively
(e.g., Signing Exact English ⫺ see Supalla 1991), but there are also village sign lan-
guages that do not mark spatial distinctions on verbs (e.g., Al-Sayyid Bedouin Sign
Language, as described by Sandler et al. 2005).

6. Modality effects in child language development


I will organize the discussion of modality effects in child language development around
four issues: 1) developmental milestones; 2) articulatory factors; 3) factors pertaining
to visual attention; and 4) iconicity.

6.1. Developmental milestones

There is good reason to think that the human vocal tract and human language have
co-evolved; the result is that humans, but not the great apes, have a vocal tract that
can articulate the range of speech sounds characteristic of spoken languages (Lieber-
man 1984). Moreover, the ubiquity of spoken languages amongst hearing communities
might lead one to hypothesize that the use of speech and hearing for language reflects
a developmental bias written into whatever innate linguistic component may guide
children’s language learning. Considerations such as these might lead one to expect
that the acquisition of sign languages would be generally delayed vis-à-vis the acquisi-
tion of spoken languages. Nonetheless, the most crucial finding of research on the
acquisition of sign languages as a first language is that signed and spoken languages
are acquired on much the same developmental schedule (Newport/Meier 1985; see also
chapter 28 on acquisition). There is no evidence that signing children suffer any delay
vis-à-vis their speaking counterparts. To the contrary, the only evidence of differences
between signing and speaking children in the ages at which developmental milestones
are achieved suggests the possibility that speaking children may be delayed in the
production of their first words, as compared to signing children’s production of their
first signs (Orlansky/Bonvillian 1985; Anderson/Reilly 2002). This controversial claim
of an early sign advantage has been extensively discussed elsewhere (Meier/Newport
1990; Petitto 1988; Volterra/Iverson 1995) and will not be further reviewed here.

6.2. Articulatory factors in development

As noted earlier, the oral articulators are largely hidden inside the mouth. In contrast,
the sign articulators are visible and, to some extent, manipulable. This has consequences
for the input that parents offer their children. On occasion, signing parents exhibit
teacher-like behaviors in which they mold their child’s hands in order to produce a
facsimile of some sign. For example, the deaf mother of one deaf child (Noel, 17 months)
observed by Pizer and Meier (2008; also Pizer/Meier/Shaw 2011) twisted her daughter’s
forearm to produce a facsimile of the ASL sign blue. There are no data as to whether
this phenomenon has specific effects on children’s acquisition of signs.
In the course of language development, children often make errors in how they
articulate words or signs. Many of those errors may be due to immature control over
the articulators. Some characteristic properties of infant motor control may affect both
sign and speech development in infancy ⫺ for example, infants show a tendency toward
repetitive, cyclic movement patterns in movement stereotypies of the arms and legs
(Thelen 1979), in manual babbles (Meier/Willerman 1995), in vocal babbling (MacNeil-
age/Davis 1993), and in their production of silent mandibular oscillations (‘jaw wags’)
that are sometimes intermixed with phonated babbling (Meier et al. 1997). A frequent
pattern for early production of spoken disyllables is reduplication; so a word such as
‘bottle’ may become [baba]. In early sign production, infants generally preserve re-
peated movement when such movement is characteristic of the target sign. However,
when the target is monocyclic, infants frequently add repetition. This result has been
reported for both ASL (Meier et al. 2008) and BSL (Morgan et al. 2007). For example,
one child reported by Meier et al. produced 7 multicyclic tokens of the single-cycle
sign black between the ages of 12 and 14 months. Note that one complexity in inter-
preting these results is that mothers sometimes over-repeat signs as well (Launer 1982;
Holzrichter/Meier 2000).
Other articulatory factors in development may be unique to a given modality. For
example, because many two-handed signs demand a static non-dominant hand, the
child must learn to inhibit the non-dominant hand when the other hand is moving. But
the young infant often fails; instead, the non-dominant hand may mirror the movement
of the dominant hand. For example, one child at age 16 months produced a version of
the ASL sign fall in which both hands executed identical downward path movements
(Cheek et al. 2001); in the adult target sign, the non-dominant hand remains static
while the dominant V-hand (W) rotates from initial finger-tip contact on the palm to
final contact on the back of the hand; see Figure 25.4. Infant failures to inhibit the
non-dominant hand when the other is active may be rooted in problems of motor
control that are not specific to language (Fagard 1994; Wiesendanger/Wicki/Rouiller
1994). However, the problem may be complicated by the cognitive load of producing
a lexical item.

Fig. 25.4: The ASL verb fall: From left to right, the photographs show the initial, medial, and
final positions of the sign.
Another pattern in early sign production that appears to be rooted in tendencies of
infant motor development is the proximalization of movement. In linguistic and non-
linguistic movement of the limbs, children tend to use a more proximal joint (i.e., a
joint closer to the torso) in situations in which an adult might use a more distal joint
(i.e., a joint farther from the torso). This tendency appears characteristic of the devel-
opment of walking, of writing, and of signing. Meier et al. (2008) examined the errors
in early sign production produced by four deaf children of Deaf parents, aged 8⫺17
months. An analysis of the children’s substitution errors showed that all four children
favored proximal substitutions over distal ones; for three of the four children this pat-
tern was highly reliable. For example, at almost 12 months, Susie produced the ASL
sign horse with a movement of the wrist, rather than the expected movement at the first
knuckles.
Meier et al. (2008) not only analyzed these children’s errors in early sign production;
they also examined children’s accuracy in producing sign movement at each of the
joints of the arm and hand. When children erred they showed robust tendencies toward
proximalization. But the analysis also revealed that children were relatively reliable on
two joints, the elbow and the first knuckles. The authors concluded that two or more
syllable types may be available early in development: path movement produced at
the elbow (and/or the shoulder) and hand-internal movements produced at the first
knuckles.
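
The proximal-to-distal logic of this analysis can be stated compactly. The sketch
below is only an illustration, not the coding scheme of Meier et al. (2008): it
assumes a hypothetical labelling of the four joints mentioned in the text, ordered
from the torso outward, and classifies a substitution error by whether the produced
joint is more proximal than the target joint.

# A minimal sketch (hypothetical labels, not the authors' coding scheme).
# Joints are ordered from most proximal (closest to the torso) to most distal.
JOINTS = ["shoulder", "elbow", "wrist", "first_knuckles"]

def substitution_type(target_joint, produced_joint):
    """Classify a child's substitution error relative to the adult target."""
    t = JOINTS.index(target_joint)
    p = JOINTS.index(produced_joint)
    if p < t:
        return "proximalized"
    if p > t:
        return "distalized"
    return "match"

# Susie's horse error from the text: movement at the wrist rather than at
# the first knuckles.
print(substitution_type("first_knuckles", "wrist"))  # -> 'proximalized'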

6.3. Visual attention and visual perspective-taking in language development

Descriptions of child-directed signing have proved quite consistent across a number of
different sign languages; see, among other references, Erting et al. (1990) for ASL;
Masataka (1992) for NS; Harris et al. (1989) for BSL; Mohay et al. (1998) for Auslan;
and van den Bogaerde (2000) for NGT. In order to perceive the sign input available
to them, signing children must look at their interlocutors; hearing children may listen
to their speaking parents without looking at them. How do deaf parents accommodate
the visual demands that signing places on their infants? One result is somewhat surpris-
ing: Spencer and Harris (2006) have suggested that deaf parents present less linguistic
input to their deaf children than hearing mothers do to their hearing children. Their
explanation is straightforward: deaf parents almost never sign to their children when
their children aren’t looking at them. What’s most important here is this: despite these
apparent differences in the quantity of the input they receive, signing children acquire
sign languages on much the same developmental schedule as do speaking children, as
discussed above. Spencer and Harris raise the possibility that one explanation for this
is that the child-directed signing of deaf mothers is carefully tuned to the attentional
capacities of their children. So, on this account, it’s quality of input ⫺ not quantity of
input ⫺ that matters.
The relative freedom of the hands to move within the three-dimensional space in
front of the signer means that the signer can displace a sign from its expected, citation-
form place of articulation. As already noted, the spatial displacement of nouns that
are made in neutral space can be used in adult sign languages to establish referential
loci. Another reason to displace signs in space is to accommodate the visual attention
of the addressee. Parents may move a sign into the child’s line of regard, so that it is
visible to the child; see Harris et al. (1989) and Spencer and Harris (2006) for discus-
sions of this phenomenon in child-directed signing.
Often signer and addressee are located opposite each other, but other arrangements
are possible, for example when signer and addressee share bonds of intimacy. Thus, a
child may be seated on the mother’s lap, facing away from her. In such instances,
mother and child occupy the same signing space. How does the mother make her signs
perceptible to her child in such situations? One strategy that parents adopt is to sign
on the child or in front of the child. Signing on the child may offer one potential
advantage to that child; the child receives tactile information about the place of articu-
lation of signs. Pizer and Meier (2008; also Meier 2008b) observed three deaf mothers
of young deaf children; all three mothers produced instances of signing on their child.
Meier (2008b) cites an example in which Noel (17 months) was seated on her mother’s
lap. The mother was labeling the colors of blocks located on the floor in front of Noel.
The ASL signs yellow, blue, and green are all produced in the neutral signing space
in front of the signer. Noel’s mother could easily sign these signs in front of her child.
But what about the sign orange? The citation-form ASL sign has a repeated hand-
internal closing movement of a fisted hand (an S-hand) that is executed at the
mouth; see Figure 25.5. If Noel’s mother produced this sign at her own mouth it would
not have been visible to her daughter; instead she produced three tokens of this sign
on her daughter’s mouth. Noel thereby received visual and tactile information about
this color sign.

Fig. 25.5: The ASL color terms yellow and orange.

An interesting problem presented by sign languages is that some signs appear quite
different to the addressee than they do to the signer. In ASL, the palm of the possessive
pronoun is oriented toward the possessor. Thus, when a signer produces the possessive
sign your, the signer sees the back of his/her own hand but the addressee sees the
signer’s open palm. A sign such as cat is produced on the ipsilateral side of the signer’s
face, where ‘ipsilateral’ means the same side as the signer’s active hand. For a right-
handed signer, the sign is produced on his/her right, but ⫺ from the perspective of an
addressee standing opposite the signer ⫺ this sign appears on the addressee’s left.
Learners must represent the place of articulation of a sign such as cat as being the
ipsilateral side of the signer’s face, not the right side of the signer’s face, and not the
left side of the signer’s face from the addressee’s perspective. Shield (2010) has recently
argued that representing the form of signs therefore requires visual perspective abilities
(specifically, ‘self-other mapping’) that are known to be impaired in autistic children,
leading to the hypothesis that in sign ⫺ but not in speech ⫺ autism may yield an
interesting class of phonological errors (see also chapter 32, Atypical Signing). For
example, one seven-and-a-half year-old autistic deaf boy crossed the midline to pro-
duce the ASL sign girl on the contralateral cheek (that is, on his left, mirroring what
he would see if he viewed a right-handed signer’s production of girl). Other native-
signing deaf children with autism made errors in which they reversed the palm orienta-
tion of ASL signs (changing inward palm orientation to outward, or vice versa).
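
Shield's 'self-other mapping' problem can be made concrete with a toy model. The
sketch below is an assumption-laden illustration, not Shield's analysis: it supposes
that a place of articulation is stored relative to the signer's body (ipsilateral or
contralateral to the active hand) and shows why a learner who stores the raw mirror
image of what he or she sees will cross the midline.

# A toy model with assumed labels. Places of articulation are stored
# signer-relative, never as absolute left/right.
def signer_side(place, dominant_hand):
    """Absolute side of the signer's own body for a stored place value."""
    other = {"right": "left", "left": "right"}
    return dominant_hand if place == "ipsilateral" else other[dominant_hand]

def as_seen_by_addressee(side):
    """Face to face, the signer's right appears on the addressee's left."""
    return {"right": "left", "left": "right"}[side]

# cat, stored as ipsilateral, produced by a right-handed signer:
side = signer_side("ipsilateral", "right")  # 'right' (signer's own right)
view = as_seen_by_addressee(side)           # 'left' (what the child sees)
print(side, view)
# A learner who stores the raw view ('left') instead of re-mapping it onto
# his or her own body will produce the contralateral error described above.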

6.4. Iconicity and language development

The frequency of motivated signs in sign languages allows tests of the role of iconicity
in child language development that would be difficult in spoken languages. We’ll look
here at children’s acquisition of indexically-motivated signs ⫺ that is, signs that point
to their referents ⫺ as well as their acquisition of iconically-motivated signs that mani-
fest an imagistic relationship between form and meaning.
Let’s look first at early sign development. Orlansky and Bonvillian (1984) analyzed
diary data on the representation of iconic signs in the vocabularies of young signers.
They argued that iconic signs were not over-represented in children’s early vocabular-
ies. Meier et al. (2008) wondered whether signing children might seek to enhance the
iconicity of form-meaning mappings and whether such a bias might account for chil-
dren’s errors in early sign production. Meier et al. judged the iconicity of 605 sign
tokens produced by four, third-generation deaf children, aged 8⫺17 months. Most to-
kens were judged to be as iconic as the adult target sign, or less iconic than the adult
target. Only 5 % were judged to show increased iconicity vis-à-vis the adult target form.
Launer (1982) examined the iconicity of the sign tokens produced by two deaf children
of deaf parents; during the study the children’s ages ranged from 12 to 24 months.
Launer found that approximately 15 % of the sign tokens displayed enhanced iconicity.
Meier et al. (2008) concluded that factors other than iconicity explain the preponder-
ance of children’s errors; in particular, they suggest that articulatory constraints account
for most of children’s errors in early sign production, just as articulatory factors account
for the bulk of speaking children’s errors in producing words.
The acquisition of pointing signs is a domain where we might expect to find pro-
found effects of motivated form-meaning relationships, inasmuch as personal pronouns
translated as ‘I’, ‘you’, and ‘he/she/it’ are index-finger points to the individual being
referred to. These pronouns would not seem to present the problem of shifting refer-
ence that is posed by deictic pronouns in spoken languages. A longitudinal study of
the development of personal pronouns in ASL in two deaf children of deaf parents
(Petitto 1987) reported the use of points (including points to people) from ages 10⫺12
months, followed by a six-month period in which points to persons were absent. During
this later period, the children would sometimes refer to themselves by a name or by a
common noun (e.g., girl). Points to people re-emerged at 21⫺23 months; during this
period, one child produced reversal errors in which she used the sign you to refer to
herself. Both children demonstrated correct usage of ASL personal pronouns at 25⫺
27 months. As in spoken languages, the acquisition of deictic pronouns may be subject
to considerable individual differences; for example, an extensive case study of how one
deaf child of deaf parents acquired Greek Sign Language (Hatzopoulou 2008) revealed
no errors in pronoun usage, no use of name signs in lieu of points, and limited evidence
of a period from 16 to 20 months in which personal pronouns were infrequent (but not
absent). The data from Petitto (1987) indicate that some signing children may be insen-
sitive to the motivated properties of pointing signs; one child’s reversal errors suggested
that she analyzed the pointing sign you as being an ordinary lexical sign, specifically a
name for herself.
Some of the most strikingly iconic signs in ASL have been analyzed as being mor-
phologically complex. The movement and orientation of agreeing verbs in ASL indi-
cate locations associated with arguments of those verbs. An agreeing verb such as
give is particularly transparent inasmuch as it resembles the action associated with
transferring an object from one individual to another. Meier (1982, 1987) examined
the acquisition of verb agreement by three deaf children of deaf parents; he reported
naturalistic and experimental data on children’s use of agreement with referents that
were present in the immediate environment. On his account, the children showed mas-
tery of agreement with these real-world locations between age 3 and 3½. The most
characteristic error type was the omission of agreement; these omissions often yielded
signs that were less iconic than the adult target. Casey (2003) re-examined the acquisi-
tion of verb agreement in ASL. Like Meier (1982), she reported errors of omission,
but she also argued that verb agreement is a developmental outgrowth of children’s
action gestures. These gestures, like agreeing verbs, are directional. Longitudinal data
on one native learner’s acquisition of BSL indicated that apparent omissions of agree-
ment were frequent through 2;9 (Morgan/Barrière/Woll 2006). More recently, however,
there has been an emerging controversy with respect to how verb agreement is ac-
quired: Quadros and Lillo-Martin (2007) have questioned whether children ungram-
matically omit verb agreement at any age in the acquisition of ASL and Brazilian Sign
Language. The absence of frequent errors on agreement would be consistent, in their
view, with evidence of the optionality of agreement in the adult languages and with
the need for better description of what linguistic contexts require agreement.
Lastly, so-called classifier verbs, which are complex predicates used to describe the
motion and position of objects, are a highly iconic domain within ASL and other sign
languages. Recently, Slobin et al. (2003) have argued that the iconicity of handling
classifiers, in particular, facilitates their early production, perhaps even by age 3. However,
analyses of children’s production of classifier signs have reported errors in which chil-
dren appear to separate out component morphemes (Newport 1981). In general, classi-
fier signs are ⫺ despite their impressive iconicity ⫺ not early acquisitions (Schick 1990
and, for a review, Emmorey 2002).
The clearest effects of iconicity on language development appear in the homesign-
ing of deaf children born into non-signing hearing families; see Goldin-Meadow (2003)
for a recent overview of the literature on homesign (also see chapter 26). Homesigners
innovate gestural systems with many language-like properties, including word order
regularities that distinguish the arguments of verb-like action gestures. The vocabular-
ies of homesigning children comprise two types of gestures: pointing gestures and
highly iconic gestures that Goldin-Meadow and her colleagues have called “character-
izing signs”. In homesigning systems, iconicity is crucial, inasmuch as the transparency
of homesigns allows children to be understood by their parents.

7. Conclusions

I have argued here that the two major language modalities offer different resources to
individual human languages and place differing constraints on those languages. The
visual-gestural modality offers sign languages greater opportunities for imagistic repre-
sentation than does the oral-aural modality of spoken languages. Consistent with this,
many signs are to some degree iconic, yet those signs are thoroughly conventional. The
frequency of iconic signs in languages such as ASL suggests that the arbitrariness of
spoken words is in significant measure an artifact of the limited imagistic resources of
the oral-aural modality. Thus, words and signs need not be arbitrary; nonetheless, hu-
man languages must allow arbitrary words and signs in order to express abstract, non-
picturable concepts. The iconic resources of the visual-gestural modality also allow the
rapid innovation of signed vocabularies, whether the innovators be adults or homesign-
ing children. Iconicity jumpstarts the emergence of new sign languages.
The signing space is a resource available to sign languages which has no obvious
counterpart in speech itself, but has clear counterparts in the gestures that accompany
speech (Liddell 2003). This spatial resource may yield considerable uniformity across
many, but perhaps not all, sign languages in certain key areas of their grammars ⫺
uniformity in their pronominal system, in verb agreement, and in the classifier systems
of sign languages.
Sign languages are articulated in a three-dimensional space that is unavailable to
spoken languages. But, in their articulation, sign languages are also constrained by
what appears to be a slower rate of signing than of speaking, perhaps because of the
large size of the manual articulators. The differing rates of production of signed and
spoken languages may push signed and spoken languages in different typological direc-
tions, resulting in simultaneous morphological structures being typical of sign lan-
guages, whereas sequential morphological structures are most typical of spoken lan-
guages.
Although signed and spoken languages are differently constrained and avail them-
selves of differing resources, they are in many fundamental respects much the same ⫺
much the same in the efficiency with which they express information, much the same
in the schedules on which they are acquired, and much the same in many fundamental
aspects of grammatical structure. In this sense, signed and spoken languages are each
eloquent expressions of a human language capacity that is remarkable in its plasticity.

Acknowledgements: The author thanks the following individuals for their assistance:
Claude Mauk was the photographer for Figure 1; Annie Marks was the photographer
for Figures 2⫺5. The models were Christopher Moreland and Jilly Kowalsky. Aslı
Özyürek, Roland Pfau, and Julie Rinfret made helpful comments on a draft of this
chapter. The author’s research was supported in part by NSF grant BCS-0447018.

8. Literature

Anderson, Diane/Reilly, Judy
2002 The MacArthur Communicative Development Inventory: Normative Data for Ameri-
can Sign Language. In: Journal of Deaf Studies and Deaf Education 7, 83⫺106.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81, 301⫺344.
Bellugi, Ursula/Fischer, Susan
1972 A Comparison of Sign Language and Spoken Language. In: Cognition 1, 173⫺200.
Bogaerde, Beppie van den
2000 Input and Interaction in Deaf Families. PhD Dissertation, University of Amsterdam.
Utrecht: LOT.
Bos, Heleen
1994 An Auxiliary Verb in Sign Language of the Netherlands. In: Ahlgren, Inger/Bergman,
Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure: Papers from the
Fifth International Symposium on Sign Language Research. Durham: International Sign
Linguistics Association, 37⫺53.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Casey, Shannon K.
2003 “Agreement” in Gestures and Sign Languages: The Use of Directionality to Indicate
Referents Involved in Actions. PhD Dissertation, University of California, San Diego.
Cheek, Adrianne/Cormier, Kearsy/Repp, Ann/Meier, Richard P.
2001 Prelinguistic Gesture Predicts Mastery and Error in the Production of First Signs. In:
Language 77, 292⫺323.
Crasborn, Onno
2001 Phonetic Implementation of Phonological Categories in Sign Language of the Nether-
lands. PhD Dissertation, University of Utrecht. Utrecht: LOT.
Dolata, Jill K./Davis, Barbara L./MacNeilage, Peter F.
2008 Characteristics of the Rhythmic Organization of Vocal Babbling: Implications for an
Amodal Linguistic Rhythm. In: Infant Behavior & Development 31, 422⫺431.
Emmorey, Karen
2002 Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum.
Emmorey, Karen/Korpics, Franco/Petronio, Karen
2009 The Use of Visual Feedback During Signing: Evidence from Signers with Impaired
Vision. In: Journal of Deaf Studies and Deaf Education 14, 99⫺104.
Erting, Carol J./Prezioso, Carlene/O’Grady Hynes, Maureen
1990 The Interactional Context of Mother-infant Communication. In: Volterra, Virginia/Ert-
ing, Carol J. (eds.), From Gesture to Language in Hearing and Deaf Children. Berlin:
Springer, 97⫺106.
Fagard, Jacqueline
1994 Manual Strategies and Interlimb Coordination During Reaching, Grasping, and Manip-
ulating Throughout the First Year of Life. In: Swinnen, Stephan P./Heuer, H. H./Mas-
sion, Jean/Casaer, P. (eds.), Interlimb Coordination: Neural, Dynamical, and Cognitive
Constraints. San Diego: Academic Press, 439⫺460.
Fischer, Susan D.
1996 The Role of Agreement and Auxiliaries in Sign Language. In: Lingua 98, 103⫺119.
Goldin-Meadow, Susan
2003 The Resilience of Language. New York: Psychology Press.
Goldin-Meadow, Susan/Mylander, Carolyn
1990 Beyond the Input Given: The Child’s Role in the Acquisition of Language. In: Lan-
guage 66, 323⫺355.
Guerra Currie, Anne-Marie P./Meier, Richard P./Walters, Keith
2002 A Crosslinguistic Examination of the Lexicons of Four Sign Languages. In: Meier, Rich-
ard P./Cormier, Kearsy A./Quinto-Pozos, David G. (eds.), Modality and Structure in
Signed and Spoken Languages. Cambridge: Cambridge University Press, 224⫺236.
Harris, Margaret/Clibbens, John/Chasin, Joan/Tibbitts, Ruth
1989 The Social Context of Early Sign Language Development. In: First Language 9, 81⫺97.
Hatzopoulou, Marianna
2008 Acquisition of Reference to Self and Others in Greek Sign Language. PhD Dissertation,
Department of Linguistics, Stockholm University.
Hohenberger, Annette/Happ, Daniela/Leuninger, Helen
2002 Modality-dependent Aspects of Sign Language Production: Evidence from Slips of the
Hands and Their Repairs in German Sign Language. In: Meier, Richard P./Cormier,
Kearsy A./Quinto-Pozos, David G. (eds.), Modality and Structure in Signed and Spoken
Languages. Cambridge: Cambridge University Press, 112⫺142.
Holzrichter, Amanda S./Meier, Richard P.
2000 Child-directed Signing in American Sign Language. In: Chamberlain, Charlene/Mor-
ford, Jill P./Mayberry, Rachel (eds.), Language Acquisition by Eye. Mahwah, NJ: Law-
rence Erlbaum, 25⫺40.
Hulst, Harry van der/Kooij, Els van der
2006 Phonetic Implementation and Phonetic Pre-specification in Sign Language Phonology.
In: Goldstein, Louis/Whalen, D./Best, Catherine (eds.), Laboratory Phonology 8. Ber-
lin: Mouton de Gruyter, 265⫺286.
Japan Sign Language Research Institute (Nihon Syuwa Kenkyuusho) (ed.)
1997 Japanese Sign Language Dictionary (Nihongo-syuwa Jiten). Tokyo: Japan Federation
of the Deaf.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge:
Cambridge University Press.
Kegl, Judy/Senghas, Ann/Coppola, Marie
1999 Creation through Contact: Sign Language Emergence and Sign Language Change in
Nicaragua. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creoli-
zation, Diachrony, and Development. Cambridge, MA: MIT Press, 179⫺237.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kooij, Els van der
2002 Phonological Categories in Sign Language of the Netherlands. The Role of Phonetic
Implementation and Iconicity. PhD Dissertation, University of Utrecht. Utrecht: LOT.
Kyle, Jim G./Woll, Bencie
1985 Sign Language: The Study of Deaf People and Their Language. Cambridge: Cambridge
University Press.
Lashley, Karl S.
1951 The Problem of Serial Order in Behavior. In: Jeffress, Lloyd A. (ed.), Cerebral Mecha-
nisms in Behavior. New York: Wiley, 112⫺136.
Launer, Patricia B.
1982 “A Plane” Is Not “to Fly”: Acquiring the Distinction Between Related Nouns and Verbs
in American Sign Language. PhD Dissertation, City University of New York.
Liddell, Scott K.
2000 Indicating Verbs and Pronouns: Pointing Away from Agreement. In: Emmorey, Karen/
Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula
Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 303⫺320.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Lieberman, Philip
1984 The Biology and Evolution of Language. Cambridge, MA: Harvard University Press.
Lillo-Martin, Diane/Meier, Richard P.
2011 On the Linguistic Status of ‘Agreement’ in Signed Languages. In: Theoretical Linguistics
37, 95⫺141.
MacNeilage, Peter F./Davis, Barbara L.
1993 Motor Explanations of Babbling and Early Speech Patterns. In: Boysson-Bardies, Béné-
dicte de/Schonen, Scania de/Jusczyk, Peter/MacNeilage, Peter/Morton, John (eds.), De-
velopmental Neurocognition: Speech and Face Processing in the First Year of Life. Dor-
drecht: Kluwer, 341⫺352.
MacNeilage, Peter F.
2008 The Origin of Speech. Oxford: Oxford University Press.
Masataka, Nobuo
1992 Motherese in a Sign Language. In: Infant Behavior and Development 15, 453⫺460.
Mauk, Claude/Lindblom, Björn/Meier, Richard P.
2008 Undershoot of ASL Locations in Fast Signing. In: Quer, Josep (ed.), Signs of the Time:
Selected Papers from TISLR 2004. Hamburg: Signum, 3⫺23.
McBurney, Susan
2002 Pronominal Reference in Signed and Spoken Language: Are Grammatical Categories
Modality-dependent? In: Meier, Richard P./Cormier, Kearsy A./Quinto-Pozos, David G.
(eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge
University Press, 329⫺369.
McGurk, Harry/MacDonald, John
1976 Hearing Lips and Seeing Voices. In: Nature 264, 746⫺748.
Meier, Richard P.
1982 Icons, Analogues, and Morphemes: The Acquisition of Verb Agreement in American
Sign Language. PhD Dissertation, University of California, San Diego.
Meier, Richard P.
1984 Sign as Creole. In: Behavioral and Brain Sciences 7, 201⫺202.
Meier, Richard P.
1987 Elicited Imitation of Verb Agreement in American Sign Language: Iconically or Mor-
phologically Determined? In: Journal of Memory and Language 26, 362⫺376.
Meier, Richard P.
1990 Person Deixis in American Sign Language. In: Fischer, Susan D./Siple, Patricia (eds.),
Theoretical Issues in Sign Language Research, Volume 1: Linguistics. Chicago: Univer-
sity of Chicago Press, 175⫺190.
Meier, Richard P.
1993 A Psycholinguistic Perspective on Phonological Segmentation in Sign and Speech. In:
Coulter, Geoffrey R. (ed.), Phonetics and Phonology. Vol. 3: Current Issues in American
Sign Language Phonology. San Diego: Academic Press, 169⫺188.
Meier, Richard P.
2008a Channeling Language. In: Natural Language & Linguistic Theory 26, 451⫺466.
Meier, Richard P.
2008b Modality and Language Acquisition: Resources & Constraints in Early Sign Learning.
In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past,
Present, and Future [Proceedings of the 9th International Conference on Theoretical Is-
sues in Sign Language Research, Florianopolis, Brazil]. Petrópolis, Brazil: Editora Arara
Azul, 325⫺346. [www.editora-arara-azul.com.br/EstudosSurdos.php]
Meier, Richard P.
in press Review of Deaf Around the World: The Impact of Language, ed. by Gaurav Mathur
and Donna Jo Napoli. To appear in: Language.
Meier, Richard P./Lillo-Martin, Diane
2010 Does Spatial Make It Special? On the Grammar of Pointing Signs in American Sign
Language. In: Gerdts, Donna B./Moore, John/Polinsky, Maria (eds.), Hypothesis A/Hy-
pothesis B: Linguistic Explorations in Honor of David M. Perlmutter. Cambridge, MA:
MIT Press, 345⫺360.
Meier, Richard P./Mauk, Claude/Cheek, Adrianne/Moreland, Christopher J.
2008 The Form of Children’s Early Signs: Iconic or Motoric Determinants? In: Language
Learning & Development 4, 63⫺98.
Meier, Richard P./McGarvin, Lynn/Zakia, Renée A. E./Willerman, Raquel
1997 Silent Mandibular Oscillations in Vocal Babbling. In: Phonetica 54, 153⫺171.
Meier, Richard P./Willerman, Raquel
1995 Prelinguistic Gesture in Deaf and Hearing Children. In: Emmorey, Karen/Reilly, Judy
(eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 391⫺409.
Meir, Irit/Padden, Carol A./Aronoff, Mark/Sandler, Wendy
2007 Body as Subject. In: Journal of Linguistics 43, 531⫺563.
Mohay, Heather/Milton, Leonie/Hindmarsh, Gabrielle/Ganley, Kay
1998 Deaf Mothers as Language Models for Hearing Families with Deaf Children. In: Weisel,
Amatzia (ed.), Issues Unresolved: New Perspectives on Language and Deafness. Wash-
ington, DC: Gallaudet University Press, 76⫺87.
Morgan, Gary/Barrière, Isabelle/Woll, Bencie
2006 The Influence of Typology and Modality on the Acquisition of Verb Agreement Mor-
phology in British Sign Language. In: First Language 26, 19⫺43.
Morgan, Gary/Barrett-Jones, Sarah/Stoneham, Helen
2007 The First Signs of Language: Phonological Development in British Sign Language. In:
Applied Psycholinguistics 28, 3⫺22.
Newport, Elissa L./Meier, Richard P.
1985 The Acquisition of American Sign Language. In: Slobin, Dan I. (ed.), The Crosslinguis-
tic Study of Language Acquisition. Volume 1: The Data. Hillsdale, NJ: Lawrence Erl-
baum, 881⫺938.
Newport, Elissa L./Supalla, Ted
2000 Sign Language Research at the Millennium. In: Emmorey, Karen/Lane, Harlan (eds.),
The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward
Klima. Mahwah, NJ: Lawrence Erlbaum, 103⫺114.
Orlansky, Michael D./Bonvillian, John D.
1984 The Role of Iconicity in Early Sign Language Acquisition. In: Journal of Speech and
Hearing Disorders 49, 287⫺292.
Orlansky, Michael D./Bonvillian, John D.
1985 Sign Language Acquisition: Language Development in Children of Deaf Parents and
Implications for Other Populations. In: Merrill-Palmer Quarterly 31, 127⫺143.
Orlansky, Michael D./Bonvillian, John D.
1988 Early Sign Language Acquisition. In: Smith, Michael D./Locke, John L. (eds.), The
Emergent Lexicon: The Child’s Development of a Linguistic Vocabulary. San Diego:
Academic Press, 263⫺292.
Padden, Carol A.
1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation,
University of California, San Diego.
Padden, Carol A.
1998 The ASL Lexicon. In: Sign Language & Linguistics 1, 39⫺60.
Padden, Carol
2011 Sign Language Geography. In: Mathur, Gaurav/Napoli, Donna Jo (eds.), Deaf Around
the World. The Impact of Language. New York: Oxford University Press, 19⫺37.
Petitto, Laura A.
1987 On the Autonomy of Language and Gesture: Evidence from the Acquisition of Per-
sonal Pronouns in American Sign Language. In: Cognition 27, 1⫺52.
Petitto, Laura A.
1988 “Language” in the Pre-linguistic Child. In: Kessel, Frank S. (ed.), The Development of
Language and Language Researchers. Hillsdale, NJ: Lawrence Erlbaum, 187⫺221.
Pinker, Steven/Bloom, Paul
1990 Natural Language and Natural Selection. In: Behavioral and Brain Sciences 13, 707⫺
784.
Pizer, Ginger/Meier, Richard P.
2008 Child-directed Signing in ASL and Children’s Development of Joint Attention. In:
Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past,
Present, and Future [Proceedings of the 9th International Conference on Theoretical
Issues in Sign Language Research, Florianopolis, Brazil]. Petrópolis, Brazil: Editora
Arara Azul, 459⫺474. [www.editora-arara-azul.com.br/EstudosSurdos.php]
Pizer, Ginger/Meier, Richard P./Shaw, Kathleen
2011 Child-directed Signing as a Linguistic Register. In: Channon, Rachel/Hulst, Harry van
der (eds.), Formational Units in Signed Languages. Nijmegen/Berlin: Ishara Press/Mou-
ton de Gruyter, 65⫺83.
Polich, Laura
2005 The Emergence of the Deaf Community in Nicaragua. Washington, DC: Gallaudet Uni-
versity Press.
Postma, Albert
2000 Detection of Errors During Speech Production: A Review of Speech Monitoring
Models. In: Cognition 77, 97⫺131.
Quadros, Ronice M. de
1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifícia Universidade
Católica do Rio Grande do Sul.
Quadros, Ronice M. de/Lillo-Martin, Diane
2007 Gesture and the Acquisition of Verb Agreement in Sign Languages. In: Caunt-Nul-
ton, Heather/Kulatilake, Samantha/Woo, I-hao (eds.), BUCLD 31: Proceedings of the
31st Annual Boston University Conference on Language Development, 520⫺531.
Rinfret, Julie
2009 L’Association Spatiale du Nom en Langue des Signes Québécoise: Formes, Fonctions et
Sens. PhD Dissertation, Université du Québec à Montréal.
Sandler, Wendy/Meir, Irit/Padden, Carol/Aronoff, Mark
2005 The Emergence of Grammar: Systematic Structure in a New Language. In: Proceedings
of the National Academy of Sciences 102, 2661⫺2665.
Saussure, Ferdinand de
1916/1959 Course in General Linguistics. New York: Philosophical Library.
Schick, Brenda
1990 The Effects of Morphosyntactic Structure on the Acquisition of Classifier Predicates in
ASL. In: Lucas, Ceil (ed.), Sign Language Research: Theoretical Issues. Washington,
DC: Gallaudet University Press, 358⫺371.
Senghas, Ann/Coppola, Marie
2001 Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial
Grammar. In: Psychological Science 12, 323⫺328.
Shield, Aaron
2010 The Signing of Deaf Children with Autism: Lexical Phonology and Perspective-taking
in the Visual-gestural Modality. PhD Dissertation, University of Texas at Austin.
Singleton, Jenny/Newport, Elissa L.
2004 When Learners Surpass Their Models: The Acquisition of American Sign Language
from Inconsistent Input. In: Cognitive Psychology 49, 370⫺407.
Slobin, Dan I./Hoiting, Nini/Kuntze, Marlon/Lindert, Reyna/Weinberg, Amy/Pyers, Jennie/An-
thony, Michelle/Biederman, Yael/Thumann, Helen
2003 A Cognitive/Functional Perspective on the Acquisition of “Classifiers”. In: Emmorey,
Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ:
Lawrence Erlbaum, 271⫺296.
Smith, Wayne
1990 Evidence for Auxiliaries in Taiwan Sign Language. In: Fischer, Susan D./Siple, Patricia
(eds.), Theoretical Issues in Sign Language Research. Volume 1: Linguistics. Chicago:
University of Chicago Press, 211⫺228.
Smith, Wayne/Ting, Li-Fen
1979 Shou Neng Sheng Chyau, Vol. 1 (Your Hands Can Become a Bridge). Taipei: Deaf Sign
Language Research Association of the Republic of China.
Spencer, Patricia E./Harris, Margaret
2006 Patterns and Effects of Language Input to Deaf Infants and Toddlers from Deaf and
Hearing Mothers. In: Schick, Brenda/Marschark, Marc/Spencer, Patricia E. (eds.), Ad-
vances in the Sign Language Development of Deaf Children. Oxford: Oxford University
Press, 71⫺101.
Stevens, K. N./Nickerson, R. S./Boothroyd, A./Rollins, A. M.
1976 Assessment of Nasalization in the Speech of Deaf Children. In: Journal of Speech and
Hearing Research 19, 393⫺416.
Stokoe, William C.
1960 Sign Language Structure. Studies in Linguistics Occasional Papers 8. Buffalo: University
of Buffalo Press.
Stokoe, William C./Casterline, Dorothy C./Croneberg, Carl G.
1965 A Dictionary of American Sign Language on Linguistic Principles. Washington, DC:
Gallaudet University Press.
Supalla, Samuel J.
1991 Manually Coded English: The Modality Question in Sign Language Development. In:
Siple, Patricia/Fischer, Susan (eds.), Theoretical Issues in Sign Language Research. Vol. 2:
Psychology. Chicago: University of Chicago Press, 85⫺109.
Supalla, Ted/Newport, Elissa L.
1978 How Many Seats in a Chair? The Derivation of Nouns and Verbs in American Sign
Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language
Research. New York: Academic Press, 181⫺214.
Taub, Sarah
2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cam-
bridge: Cambridge University Press.
Thelen, Esther
1979 Rhythmical Stereotypies in Normal Hearing Infants. In: Animal Behaviour 27, 699⫺
715.
Tyrone, Martha E./Mauk, Claude E.
2010 Sign Lowering and Phonetic Reduction in American Sign Language. In: Journal of
Phonetics 38, 317⫺328.
Volterra, Virginia/Iverson, Jana M.
1995 When Do Modality Factors Affect the Course of Language Acquisition? In: Emmorey,
Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erl-
baum, 371⫺390.
Wiesendanger, Mario/Wicki, Urs/Rouiller, Eric
1994 Are There Unifying Structures in the Brain Responsible for Interlimb Coordination?
In: Swinnen, Stephan P./Heuer, H. H./Massion, Jean/Casaer, P. (eds.), Interlimb Coordi-
nation: Neural, Dynamical, and Cognitive Constraints. San Diego: Academic Press,
179⫺207.
Wilbur, Ronnie B./Nolen, Susan B.
1986 The Duration of Syllables in American Sign Language. In: Language and Speech 29,
263⫺280.
Woodward, James
2011 Response: Some Observations on Research Methodology in Lexicostatistical Studies of
Sign Language. In: Mathur, Gaurav/Napoli, Donna Jo (eds.), Deaf Around the World.
The Impact of Language. New York: Oxford University Press, 38⫺53.

Richard P. Meier, Austin, Texas (USA)

26. Homesign: gesture to language


1. Introduction: What is homesign?
2. The properties of homesign
3. The input to homesign
4. From co-speech gesture to homesign
5. Literature

Abstract

Deaf children whose hearing losses are so severe that they cannot acquire the spoken
language that surrounds them and whose hearing parents have not exposed them to sign
language lack a usable model for language. If a language model is essential to activate
whatever skills children bring to language-learning, deaf children in these circumstances
ought not communicate in language-like ways. It turns out, however, that these children
do communicate and they use their hands to do so. They invent gesture systems, called
“homesigns”, that have many of the properties of natural language. The chapter begins
by describing properties of language that have been identified in homesign ⫺ the fact
that it has a stable lexicon, has both morphological and syntactic structure, and is used
for many of the functions language serves. Although homesigners are not exposed to a
conventional sign language, they do see the gestures that their hearing parents produce
when they talk. The second section argues that these gestures do not serve as a full-blown
model for the linguistic properties found in homesign. The final section then explores
how deaf children transform the gestural input they receive from their hearing parents
into homesign.

1. Introduction: What is homesign?

Deaf children born to deaf parents and exposed to sign language from birth learn that
language as naturally as hearing children learn the spoken language to which they are
exposed (Lillo-Martin 1999; Newport/Meier 1985; see also chapter 28 on acquisition).
Children who lack the ability to hear thus have no deficits whatsoever when it comes
to language learning and will exercise their language learning skills if exposed to usable
linguistic input. However, most deaf children are born, not to deaf parents, but to
hearing parents who are unlikely to know a conventional sign language. If the chil-
dren’s hearing losses are severe, the children are typically unable to learn the spoken
language that their parents use with them, even when given hearing aids and intensive
instruction. If, in addition, the children’s hearing parents do not choose to expose them
to sign language, the children are in the unusual position of lacking usable input from
a conventional language. Their language-learning skills are intact, but they have no
language to apply those skills to.
What should we expect from children in this situation? A language model might be
essential to activate whatever skills children bring to language-learning. If so, deaf
children born to hearing parents and not exposed to conventional sign language ought
not communicate in language-like ways. If, however, a language model is not necessary
to catalyze a child’s language-learning, these deaf children might be able to communi-
cate and might do so in language-like ways. If so, we should be able to get a clear
picture of the skills that children, deaf or hearing, bring to language-learning from the
communication systems that deaf children develop in the absence of a conventional
language model. This chapter describes the home-made communication systems, called
‘homesigns’, that deaf children develop when not exposed to a usable model for lan-
guage.
Homesign systems arise when a deaf child is unable to acquire spoken language
and is not exposed to sign language. A defining feature of the homesign systems de-
scribed in this chapter is that they are not shared in the way that conventional commu-
nication systems are shared. The deaf children produce gestures to communicate with
the hearing individuals in their homes. But the children’s hearing parents are commit-
ted to teaching their children to talk and use speech whenever communicating with
them. The parents gesture, of course, as do all hearing speakers (McNeill 1992; Goldin-
Meadow 2003a), but only when they talk. Their gestures form an integrated system
with the speech they produce (see chapter 27 for details) and thus are not free to take
on the properties of the deaf child’s gestures. As a result, although the parents respond
to their child’s gestures, they do not adopt the gestures themselves (nor do they typi-
cally acknowledge that the child even uses gesture to communicate). The parents pro-
duce co-speech gestures, not homesigns. It is in this sense that homesign differs from
conventional sign languages and even from village sign languages, whose users produce
the same types of signs as they receive (see chapter 24, Shared Sign Languages). Home-
signers produce homesigns but receive co-speech gestures in return.
The disparity between co-speech gesture and homesign is of interest because of its
implications for language-learning. To the extent that the properties of homesign are
different from the properties of co-speech gesture, the deaf children themselves must
be imposing these particular properties on their communication systems.
The chapter begins in section 2 by describing the properties of natural languages
that have been identified in homesign thus far. Homesigners’ gestures form a lexicon.
These lexical items are themselves composed of parts, akin to a morphological system.
Moreover, the lexical items combine to form structured sentences, akin to a syntactic
system. In addition, homesigns contain lexical markers that modulate the meanings of
sentences (negation and questions), as well as grammatical categories (nouns/verbs,
subjects/objects). Finally, homesign is used not only to make requests of others, but
also to comment on the present and non-present (including the hypothetical) world ⫺
to serve the functions that all languages, signed or spoken, serve.
Section 3 explores whether the linguistic properties found in homesign can be traced
to the gestures that the homesigners’ hearing parents produce when they talk. Al-
though homesigners are not exposed to input from a conventional sign language, they
are exposed to the gestures that hearing people produce when they talk. These gestures
could serve as a model for the deaf children’s homesign systems. However, co-speech
gestures are not only different from homesign in function (they work along with speech
to communicate rather than assuming the full burden of communication, as homesign
does), they are also different in form ⫺ gesture relies on mimetic and analog represen-
tation to convey information; homesign (like conventional sign languages) relies on
segmented forms that are systematically combined to form larger wholes. Thus, the
gestures that homesigners see their hearing parents produce are different from the
gestures that they themselves produce. The section ends by asking why this is the case.
The final section explores how deaf children transform the co-speech gestural input
they receive from their hearing parents into homesign, and ends with a discussion
of the implications of this transformation for language-learning and the creation of
sign languages.

2. The properties of homesign

Homesigns are created by deaf children raised in circumstances where a sign language
model is not available. In Western cultures, these children are typically born to hearing
parents who have chosen to educate their child in an oral school. These children are
likely to learn a conventional sign language at some later point in their lives, often
around adolescence. However, in many places throughout the world, homesigners con-
tinue to use the gesture systems they create as children as their sole means of communi-
cation (for example, Coppola/Newport 2005; Coppola/Senghas 2010; Jepson 1991;
Spaepen et al. 2011), and these systems typically undergo structural changes as the
children enter adolescence and adulthood (see, for example, Fusellier-Souza 2006;
Morford 2003; Kuschel 1973; Yau 1992).
The homesigners who are the focus of this chapter are deaf children born to hearing
parents in a Western culture. They have not succeeded at mastering spoken language
despite intensive oral education and, in addition, have not been exposed to a conven-
tional sign language by their hearing parents. Do deaf children in this situation turn to
gesture to communicate with the hearing individuals in their worlds? And if so, do the
children use gestures in the same way that the hearing speakers who surround them
do (i.e., as though they were co-speech gestures), or do they refashion their gestures
into a linguistic system reminiscent of the sign languages of deaf communities?
There have been many reports of deaf children who are orally trained using their
hands to communicate (Fant 1972; Lenneberg 1964; Mohay 1982; Moores 1974; Ter-
voort 1961). Indeed, it is not all that surprising that deaf children in these circumstances
exploit the manual modality for the purposes of communication ⫺ after all, it is the
only modality that is readily accessible to them and they see gesture used in communi-
cative contexts all the time when their hearing parents talk to them. However, it is
surprising that the deaf children’s homesigns turn out to be structured in language-like
ways, with structure at a number of different levels.

2.1. Lexicon
Like hearing children at the earliest stages of language-learning, deaf children who
have not yet been exposed to sign language use both pointing gestures and iconic
gestures to communicate. Their gestures, rather than being mime-like displays, are
discrete units, each of which conveys a particular meaning. Moreover, the gestures are
non-situation-specific ⫺ a twist gesture, for instance, can be used to request someone
to twist open a jar, to indicate that a jar has been twisted open, to comment that a jar
cannot be twisted open, or to tell a story about twisting open a jar that is not present
in the room. In other words, the homesigner’s gestures are not tied to a particular
context, nor are they even tied to the here-and-now (Morford/Goldin-Meadow 1997).
In this sense, the gestures warrant the label “sign”.
But can a pointing gesture really be considered a sign? Points are not prototypical
words ⫺ the point directs a communication partner’s gaze toward a particular person,
place, or thing, but doesn’t specify anything about that entity. Despite this fundamental
difference, points function for homesigners just like object-referring words (nouns and
pronouns) do for hearing children learning a conventional spoken language and deaf
children learning a conventional sign language. They do so in three ways:

⫺ Homesigners use their points to refer to precisely the same range of objects that
young hearing and deaf children refer to with their words and signs ⫺ and in pre-
cisely the same distribution. (Feldman/Goldin-Meadow/Gleitman 1978, 380)
⫺ Homesigners combine their points with other points and with iconic signs just as
hearing and deaf children combine their object-referring words with other words
and signs. (Goldin-Meadow/Feldman 1977; Goldin-Meadow/Mylander 1984)
⫺ Homesigners use their points to refer to objects that are not visible in the room just
as hearing and deaf children use words and signs for this function. For example, a
homesigner points at the chair at the head of the dining room table and then signs
‘sleep’; this chair is where the child’s father typically sits, and the child is telling us
that his father (denoted by the chair) is currently asleep.
(see Figure 26.1; Butcher/Mylander/Goldin-Meadow 1991)

Fig. 26.1: Pointing at the present to refer to the non-present. The homesigner points at the chair
at the head of the dining room table in his home and then produces a ‘sleep’ gesture
to tell us that his father (who typically sits in that chair) is asleep in another room.
He is pointing at one object to mean another and, in this way, manages to use a gesture
that is grounded in the present to refer to someone who is not in the room at all.

Iconic signs also differ from words. The form of an iconic sign captures an aspect
of its referent. The form of a word does not. Interestingly, although iconicity is present
in many of the signs of American Sign Language (ASL), deaf children learning ASL
do not seem to notice. Most of their early signs are either not iconic (Bonvillian/
Orlansky/Novack 1983) or, if iconic from an adult’s point of view, not recognized as
iconic by the child (Schlesinger 1978). In contrast, deaf individuals inventing their own
homesigns are forced by their social situation to create signs that not only begin trans-
parent but remain so. If they didn’t, no one in their world would be able to take any
meaning from the signs they create. Homesigns therefore have an iconic base (see
Fusellier-Souza (2006), Kuschel (1973), and Kendon (1980b) for evidence of iconicity
in the signs used by older homesigners in other cultures).
Despite the fact that the signs in a homesign system need to be iconic to be under-
stood, they form a stable lexicon. Homesigners could create each sign anew every time
they use it, as hearing speakers seem to do with their gestures (McNeill 1992). If so,
we might still expect some consistency in the forms the signs take simply because the
signs are iconic and iconicity constrains the set of forms that can be used to convey a
meaning. However, we might also expect a great deal of variability around a prototypi-
cal form ⫺ variability that would crop up simply because each situation is a little
different, and a sign created specifically for that situation is likely to reflect that differ-
ence. In fact, it turns out that there is relatively little variability in the set of forms a
homesigner uses to convey a particular meaning. The child tends to use the same form,
say, two fists breaking apart in a short arc to mean ‘break’, every single time that child
signs about breaking, no matter whether it’s a cup breaking, or a piece of chalk break-
ing, or a car breaking (see Figure 26.2; Goldin-Meadow et al. 1994). Thus, the home-
signer’s signs adhere to standards of form, just as a hearing child’s words or a deaf
child’s signs do. The difference is that the homesigner’s standards are idiosyncratic to
the creator rather than shared by a community of language users.

Fig. 26.2: Homesigns are stable in form. The homesigner is shown producing a break gesture.
Although this gesture looks like it should be used only to describe snapping long thin
objects into two pieces with the hands, in fact, all of the children used the gesture to
refer to objects of a variety of sizes and shapes, many of which had not been broken by
the hands.
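
The stability of these form-meaning pairings can be quantified. The sketch below is
a hypothetical measure, not the coding used in the studies cited here: stability is
taken to be the share of a child's tokens for a given meaning that use the child's
own most frequent form for that meaning.

# A hypothetical stability measure over invented tokens.
from collections import Counter

def stability(tokens_by_meaning):
    """Share of tokens per meaning that use the child's modal form."""
    scores = {}
    for meaning, forms in tokens_by_meaning.items():
        modal_count = Counter(forms).most_common(1)[0][1]
        scores[meaning] = round(modal_count / len(forms), 2)
    return scores

# Invented data: every 'break' token uses the same two-fists-apart form.
tokens = {"break": ["two_fists_apart_arc"] * 6,
          "twist": ["wrist_twist", "wrist_twist", "whole_arm_twist"]}
print(stability(tokens))  # {'break': 1.0, 'twist': 0.67}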

2.2. Morphology

Modern languages (both signed and spoken) build up words in combination from a
repertoire of a few dozen smaller meaningless units (see chapter 3 for word formation).
We do not yet know whether homesign has phonological structure (but see Brentari
et al. 2012). However, there is evidence that homesigns are composed of parts, each
of which is associated with a particular meaning; that is, they have morphological
structure (Goldin-Meadow/Mylander/Butcher 1995; Goldin-Meadow/Mylander/Frank-
lin 2007). The homesigners could have faithfully reproduced in their signs the actions
that they actually perform. They could have, for example, created signs that capture
the difference between holding a balloon string and holding an umbrella. But they
don’t. Instead, the children’s signs are composed of a limited set of handshape forms,
each standing for a class of objects, and a limited set of motion forms, each standing
for a class of actions. These handshape and motion components combine freely to
create signs, and the meanings of these signs are predictable from the meanings of their
component parts. For example, a hand shaped like an ‘O’ with the fingers touching the
thumb (?), that is, an OTouch handshape form, combined with a Revolve motion form
means ‘rotate an object < 2 inches wide around an axis’, a meaning that can be trans-
parently derived from the meanings of its two component parts (OTouch = handle an
object < 2 inches wide; Revolve = rotate around an axis).
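
The compositionality of such signs can be rendered schematically. The sketch below
is a toy restatement of the analysis: the OTouch and Revolve meanings are paraphrased
from the text, while the second entry in each inventory is invented purely for
illustration.

# A toy restatement of the morphological analysis. The 'Fist' and 'Linear'
# entries are invented; the others are paraphrased from the text.
HANDSHAPES = {"OTouch": "an object < 2 inches wide",
              "Fist": "a long thin object"}
MOTIONS = {"Revolve": "rotate {obj} around an axis",
           "Linear": "move {obj} along a straight path"}

def sign_meaning(handshape, motion):
    """The meaning of a sign is predictable from its component parts."""
    return MOTIONS[motion].format(obj=HANDSHAPES[handshape])

print(sign_meaning("OTouch", "Revolve"))
# -> rotate an object < 2 inches wide around an axis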
Importantly, in terms of arguing that there really is a system underlying the chil-
dren’s signs, the vast majority of signs that each deaf child produces conform to the
morphological description for that child and the description can be used to predict new
signs that the child produces. Thus, homesigns exhibit a simple morphology, one that
is akin to the morphologies found in conventional sign languages. Interestingly, it is
much more difficult to impose a coherent morphological description that can account
for the gestures that the children’s hearing parents produce (Goldin-Meadow/Mylan-
der/Butcher 1995; Goldin-Meadow/Mylander/Franklin 2007), suggesting that morpho-
logical structure is not an inevitable outgrowth of the manual modality but is instead
a characteristic that deaf children impose on their communication systems.

2.3. Syntax
Homesigns are often combined with one another to form sentence-like strings. For
example, a homesigner combined a point at a toy grape with an ‘eat’ sign to comment
on the fact that grapes can be eaten, and at another time combined the ‘eat’ sign with
a point at a visitor to invite her to lunch with the family. The same homesigner com-
bined all three gestures into a single sentence to offer the experimenter a snack (see
Figure 26.3).

Fig. 26.3: Homesign sentences follow a consistent order. The homesigner is holding a toy and
uses it to point at a tray of snacks that his mother is carrying = snack (the tray is not
visible) [patient]. Without dropping the toy, he jabs it several times at his mouth = eat
[act]. Finally, he points with the toy at the experimenter sprawled on the floor in front
of him = you [actor]. This is a typical ordering pattern for this particular homesigner
(i.e., patient-act-actor).

Interestingly, homesign sentences convey the same meanings that young children
learning conventional languages, signed or spoken, typically convey with their senten-
ces (Goldin-Meadow/Mylander 1984). In addition, homesign sentences are structured
in language-like ways, as described in the next four sections.

2.3.1. Predicate frames

Sentences in natural language are organized around verbs. The verb conveys the action,
which determines the thematic roles (θ-roles) of arguments that underlie the sentence.
Do frameworks of this sort underlie homesign sentences? Homesign sentences are
structured in terms of underlying predicate frames just like the early sentences of
children learning conventional languages (Goldin-Meadow 1985). For example, the
framework underlying a sentence about giving contains three arguments ⫺ the giver
(actor), the given (patient), and the givee (recipient). In contrast, the framework un-
derlying a sentence about eating contains two arguments ⫺ the eater (actor) and the
eaten (patient). Homesigners (like all children, Bloom 1970) rarely produce all of the
arguments that belong to a predicate in a single sentence. What then makes us think
that the entire predicate frame underlies a sentence? Is there evidence, for example,
that the recipient and actor arguments underlie the homesign sentence cookie⫺give
even though the patient cookie and the act give are the only elements that appear in
the sentence? In fact, there is evidence and it comes from production probability. Pro-
duction probability is the likelihood that an argument will be signed when it can be.
Although homesigners could leave elements out of their sentences haphazardly, in fact
they are quite systematic in how often they omit and produce signs for various argu-
ments in different predicate frames.
Take the actor as an example. If we are correct in attributing predicate frames to
homesign sentences, the actor in a give predicate should be signed less often than the
actor in an eat predicate simply because there is more competition for slots in a
3-argument frame (e.g., give predicate) than in a 2-argument frame (eat predicate). The
giver has to compete with the act, the given, and the givee. The eater has to compete
only with the act and the eaten. This is exactly the pattern homesign displays. Both
American and Chinese homesigners are less likely to produce an actor in a sentence
with a 3-argument underlying predicate frame (e.g., the giver) than an actor in a sen-
tence with a 2-argument underlying predicate frame (e.g., the eater). Following the
same logic, an eater should be signed less often than a dancer, and indeed it is in the
utterances of both American and Chinese homesigners (Goldin-Meadow 2003a).
In general, production probability decreases systematically as the number of argu-
ments in the underlying predicate frame increases from 1 to 2 to 3, not only for actors
but also for patients ⫺ homesigners are less likely to produce a sign for a given apple
than for an eaten apple simply because there is more competition for slots in a
3-argument give predicate than in a 2-argument eat predicate; that is, they are more
likely to sign apple⫺eat than apple⫺give, signing instead give⫺palm to indicate that
mother should transfer the apple to the palm of the child’s hand.
Importantly, it is the underlying predicate frame that dictates actor production prob-
ability in the homesigner’s sentences, not how easy it is to guess from context who the
actor of a sentence is. If predictability in context were the sole factor dictating actor
production, 1st and 2nd person actors should be omitted regardless of underlying predi-
cate frame because their identities can be easily inferred from the context (both per-
sons are on the scene); and 3rd person actors should be signed quite often regardless
of underlying predicate frame because they are less easily guessed from the context.
However, the production probability patterns described above hold for 1st, 2nd, and
3rd person actors when each is analyzed separately (Goldin-Meadow 1985). The predi-
cate frame underlying a sentence is indeed an essential factor in determining how often
an actor will be signed in that sentence.

2.3.2. Devices for marking who does what to whom

In addition to being structured at underlying levels, homesign sentences are also struc-
tured at surface levels. They display (at least) three of the devices marking ‘who does
what to whom’ that are found in the early sentences of children learning conventional language
(Goldin-Meadow/Mylander 1984, 1998; Goldin-Meadow et al. 1994).
First, homesigners indicate the thematic role of a referent by preferentially pro-
ducing (as opposed to omitting) signs for referents playing particular roles. Homesign-
ers in both America and China are more likely to produce a sign for the patient (e.g.,
the eaten cheese in a sentence about eating) than to produce a sign for the actor (e.g.,
the eating mouse) (Goldin-Meadow/Mylander 1998). Two points are worth noting. The
first point is that homesigners’ patterns convey probabilistic information about who is
the doer and who is the done-to in a two-sign sentence. If, for example, a homesigner
produces the sign sentence ‘boy hit’, our best guess is that the boy is the hittee (patient)
and not the hitter (actor) precisely because homesigners tend to produce signs for
patients rather than transitive actors. Indeed, languages around the globe tend to fol-
low this pattern; in languages where only a single argument is produced along with the
verb, that argument tends to be the patient rather than the actor in transitive sentences
(Du Bois 1987). The second point is that the omission/production pattern found in the
homesigners’ sentences tends to result in two-sign sentences that preserve the unity of
the predicate ⫺ that is, patient + act transitive sentences (akin to OV in conventional
systems) are more frequent in the signs than actor + act transitive sentences (akin to
SV in conventional systems).
Second, homesigners indicate the thematic role of a referent by placing signs for
objects playing particular roles in set positions in a sentence. In other words, they use
linear position to indicate who does what to whom (Feldman/Goldin-Meadow/Gleit-
man 1978; Senghas et al. 1997). Surprisingly, homesigners in America and China use
the same particular linear orders in their sign sentences despite the fact that each child
is developing his or her system alone without contact with other deaf children and in
different cultures (Goldin-Meadow/Mylander 1998). The homesigners tend to produce
signs for patients in the first position of their sentences, before signs for verbs (cheese⫺
eat) and before signs for endpoints of a transferring action (cheese⫺table). They also
produce signs for verbs before signs for endpoints (give⫺table). In addition, they pro-
duce signs for intransitive actors before signs for verbs (mouse⫺run). Interestingly, at
least one of these patterns ⫺ placing patients before verbs ⫺ is found in older home-
signs in a variety of cultures (Britain: MacLeod 1973; Papua New Guinea: Kendon
1980c), although as they grow older, homesigners display a greater variety of word
orders in their systems than younger homesigners do (Senghas et al. 1997).
Third, homesigners indicate the thematic role of a referent by displacing verb signs
toward objects playing particular roles, as opposed to producing them in neutral space
(at chest level). These displacements are reminiscent of inflections in conventional sign
languages (Padden 1983, 1990). In ASL, signs can be displaced to agree with their
noun arguments. For example, the sign give is moved from the signer to the addressee
to mean ‘I give to you’ but from the addressee to the signer to mean ‘You give to me’
(see chapter 7, Verb Agreement). Homesigners tend to displace their signs toward
objects that are acted upon and thus use their inflections to signal patients. For exam-
ple, displacing a twist sign toward a jar signals that the jar (or one like it) is the object
to be acted upon (Goldin-Meadow et al. 1994). These inflections are sensitive to the
underlying predicate frame, as we might expect since they are marked on the verb ⫺
3-argument verbs are more likely to be inflected than 2-argument verbs. Indeed, inflec-
tion appears to be obligatory in 3-argument verbs but optional in 2-argument verbs
where it trades off with lexicalization. For example, verbs in sentences containing an
independent sign for the patient are less likely to be inflected than verbs in sentences
that do not contain a sign for the patient (Goldin-Meadow et al. 1994).
Thus, homesign sentences adhere to simple syntactic patterns marking who does
what to whom.

2.3.3. Recursion

Homesigners combine more than one proposition within the bounds of a single sen-
tence, that is, they produce complex sentences. A complex sentence is the conjunction
of two propositions (see chapter 16). Importantly, there is evidence that the two propo-
sitions in a complex sentence are subordinate to a higher node, and are not just propo-
sitions that have been sequentially juxtaposed. The frame underlying such a sentence
ought to reflect this unification ⫺ it ought to be the sum of the predicate frames for
the two propositions. For example, a sentence about a soldier beating a drum (proposi-
tion 1) and a cowboy sipping through a straw (proposition 2) ought to have an underlying
frame of 6 units ⫺ 2 predicates (beat, sip), 2 actors (soldier, cowboy), and 2 patients
(drum, straw). If the homesigners’ complex sentences are structured at an underlying
level as their simple sentences are, we ought to see precisely the same pattern in their
complex sentences as we saw in their simple sentences ⫺ that is, we should see a
systematic decrease in, say, actor production probability as the number of units in the
conjoined predicate frames increases.
This is precisely the pattern we find (Goldin-Meadow 1982, 2003b). There is, how-
ever, one caveat. We find this systematic relation only if we take into account whether
a semantic element is shared across propositions. Sometimes when two propositions
are conjoined, one element is found in both propositions. For example, in the English
sentence ‘Elaine cut apples and Mike ate apples’, the patient argument apples is shared
across the two propositions (the second apples could be replaced by them and the
pronoun would then mark the fact that the element is shared). The homesigners’ com-
plex sentences exhibit this type of redundancy, and at approximately the same rate as
the sentences produced by children learning language from conventional models
(Goldin-Meadow 1987, 117). For example, one child produced climb⫺sleep⫺horse to
comment on the fact that the horse climbs the house (proposition 1) and the horse
sleeps (proposition 2). There are three units underlying the first proposition (actor,
act, object ⫺ horse, climb, house) and two in the second (actor, act ⫺ horse, sleep), but
one of those units (horse) is shared across the two propositions. The question is
whether the shared element appears once or twice in the underlying predicate frame
of the conjoined sentence. If horse appears twice ⫺ [(horse climbs house) & (horse
sleeps)] ⫺ the sentence will have an underlying frame of five units. If horse appears
once ⫺ horse [(climbs house) & (sleeps)] ⫺ the sentence will have an underlying frame
of four units. In fact, it turns out that production probability (the probability that a
gesture for a particular semantic element will be produced in sentences where that
element ought to be produced) decreases systematically with increases in underlying
predicate frame only if we take shared elements into account when calculating the size
of a predicate frame ⫺ in particular, only if we assign shared elements one slot (rather
than two) in the underlying frame (Goldin-Meadow 1982).
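The counting rule at work here can be stated compactly (again in our own schematic notation, not that of the original analyses): if the two conjoined propositions contain n1 and n2 semantic units and share s of them, the size of the underlying frame that predicts production probability is

\[
n_1 + n_2 - s, \qquad \text{e.g., } 3 + 2 - 1 = 4 \text{ units for the horse sentence above.}
\]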
The homesigner is likely to be attributing two roles to the climbing and sleeping
horse at some, perhaps semantic or propositional, level. However, the production prob-
ability patterns underlying complex sentences make it clear that we need a level be-
tween this semantic/propositional level and the surface level of the sentence ⫺ a level
in which dual-role elements appear only once. This underlying level is necessary to
account for the surface properties of the complex sentences. Moreover, in order to
account for the production probability patterns in the complex sentences, we need to
consider overlaps (i.e., redundancies) across the propositions. In other words, because
the underlying frame must take into account whether a semantic element is shared
across the propositions contributing to that frame, it cannot reflect mere juxtaposition
of two predicate frames ⫺ we need to invoke an overarching organization that encom-
passes all of the propositions in the sentence to account for the production probability
patterns. Thus, the homesigner’s complex sentences result from the unification of two
propositions under a higher node and, in this sense, display hierarchical organization.
There is further evidence for hierarchical organization in homesign. At times, a
collection of signs functions as an elaborated version of a single sign, that is, the collec-
tion substitutes for a single sign and functions as a phrase. For example, rather than
point at a penny and then at himself (that⫺me) to ask someone to give him a penny,
the homesigner produces an iconic sign for penny along with a point at the penny
([penny-that]-me); both signs thus occupy the patient slot in the sentence and, in this
sense, function like a single unit, a nominal constituent (Hunsicker/Mylander/Goldin-
Meadow 2009; Hunsicker/Goldin-Meadow 2011). This is a crucial design feature of
language, one that makes expressions with hierarchical embedding possible.

2.3.4. Negation, questions, past, and future

Homesign also contains at least two forms of sentence modification, negation and
questions. Young homesigners express two types of negative meanings: rejection (e.g.,
when offered a carrot, the homesigner shakes his head, indicating that he doesn’t want
the object) and denial (e.g., the homesigner points to his chest and then signs school
while shaking his head, to indicate that he is not at school). In addition, they express
three types of questions: where (e.g., the homesigner produces a two-handed flip when
searching for a key), what (e.g., the homesigner produces the flip when trying to figure
out which object his mother wants), and why (e.g., the homesigner produces the flip
when trying to figure out why the orange fell). As these examples suggest, different
forms are used to convey these two different meanings ⫺ the side-to-side headshake
for negative meanings, the manual flip for question meanings. These signs are obviously
taken from hearing speakers’ gestures but are used by the homesigners as sentence
modulators and, as such, occupy systematic positions in those sentences: headshakes
appear at the beginning of sentences, flips at the end (Franklin/Giannakidou/Goldin-
Meadow 2011; see also Jepson 1991).
Homesign also includes ways of referring to the past and future (Morford/Goldin-
Meadow 1997). For example, one homesigner produced a sign, not observed in the
gestures of his hearing parents, to refer to both remote future and past events ⫺ need-
ing to repair a toy (future) and having visited Santa (past). The sign is made by holding
the hand vertically near the chest, palm out, and making an arcing motion away from
the body (see Figure 26.4).

Fig. 26.4: Homesign has markers for the past and future. The homesigner is shown using a gesture
that he created to refer to non-present events ⫺ the ‘away’ gesture which the child uses
to indicate that what he is gesturing about is displaced in time and space (akin to the
phrase ‘once upon a time’ used to introduce stories).

Another homesigner invented a comparable sign to refer only to past events. In
addition to these two novel signs, homesigners have been found to modify a conven-
tional gesture to use as a future marker. The gesture, formed by holding up the index
finger, is typically used to request a brief delay or time-out and is glossed as wait one
minute. The homesigners use the form for its conventional meaning but they also use
it to identify their intentions, that is, to signal the immediate future. For example, one
homesigner produced the sign and then pointed at the toy bag to indicate that he was
going to go retrieve a new toy. Hearing speakers use wait to get someone’s attention,
never to refer to the immediate future. The form of the sign is borrowed from gesture
but it takes on a meaning of its own.

2.4. Grammatical categories

Young homesigners use their morphological and syntactic devices to distinguish nouns
and verbs (Goldin-Meadow et al. 1994). For example, if the child uses twist as a verb,
that sign would likely be produced near the jar to be twisted open (i.e., it would be
inflected); it would not be abbreviated (it would be produced with several twists rather
than one); and it would be produced after a pointing sign at the jar (that⫺twist). In
contrast, if the child uses that same form twist as a noun to mean ‘jar’, the sign would
likely be produced in neutral position near the chest (i.e., it would not be inflected); it
would be abbreviated (produced with one twist rather than several); and it would occur
before the pointing sign at the jar (jar⫺that). Thus, the child distinguishes nouns from
verbs morphologically (nouns are abbreviated, verbs inflected) and syntactically
(nouns occur in initial position of a two-sign sentence, verbs in second position). Inter-
estingly, adjectives sit somewhere in between, as they often do in natural languages
(Thompson 1988) ⫺ they are marked like nouns morphologically (broken is abbrevi-
ated but not inflected) and like verbs syntactically (broken is produced in the second
position of a two-sign sentence).
Older homesigners also have the grammatical category subject (possibly younger
ones do, too, but this has not been investigated yet). Grammatical subjects do not have
a simple semantic correlate. Also, no fixed criteria exist to categorically identify a noun
phrase as a subject, but a set of common, multi-dimensional criteria can be applied
across languages (Keenan 1976). A hallmark of subject noun phrases cross-linguisti-
cally is the range of semantic roles they display. While the subject of a sentence will
likely be an agent (one who performs an action), many other semantic roles can be
the subject. For example, the theme or patient can be a subject (The door opened), as
can an instrument (The key opened the door) or instigator (The wind opened the
door). Older homesigners studied in Nicaragua used the same grammatical device
(clause-initial position) to mark agent and non-agent noun phrases in their gestured
responses, thus indicating that their systems include the category subject (Coppola/
Newport 2005).

2.5. The uses to which homesign is put

Homesign is used to comment not only on the here-and-now but also on the distant
past, the future, and the hypothetical (Butcher/Mylander/Goldin-Meadow 1991; Mor-
ford/Goldin-Meadow 1997). The homesigners use their system to make generic state-
ments so that they can converse about classes of objects (Goldin-Meadow/Gelman/
Mylander 2005), to tell stories about real and imagined events (Phillips/Goldin-
Meadow/Miller 2001; Morford 1995), to talk to themselves (Goldin-Meadow 2003b),
and to talk about language (Goldin-Meadow 1993).
Thus, not only do homesigners structure their signs according to the patterns of
natural languages, but they also use those signs for the functions natural languages
serve. Structure and function appear to go hand-in-hand in the deaf children’s home-
signs. But the relation between the two is far from clear. The functions to which the
deaf children put their signs could provide the impetus for building a language-like
structure. Conversely, the structures that the deaf children develop in their signs could
provide the means by which more sophisticated language-like functions can be fulfilled.
More than likely, structure and function complement one another, with small develop-
ments in one domain furthering additional developments in the other.
In this regard, it is interesting to note that language-trained chimpanzees are less
accomplished than the deaf children in terms of both structure and function. Not only
do the chimps fail to display most of the structural properties found in the deaf chil-
dren’s sign systems, they also use whatever language they do develop for essentially
one function ⫺ to get people to give them objects and perform actions (see, for exam-
ple, Greenfield/Savage-Rumbaugh 1991).

3. The input to homesign

Homesigners, by definition, are not exposed to a conventional sign language and thus
could not have fashioned their sign systems after such a model. They are, however,
exposed to the gestures that their hearing parents use when they talk to them. Al-
though the gestures that hearing speakers typically produce when they talk are not
characterized by language-like properties (McNeill 1992), it is possible that hearing
parents alter their gestures when communicating with their deaf child. Perhaps the deaf
children’s hearing parents introduce language-like properties into their own gestures. If
so, these gestures could serve as a model for the structure in their deaf children’s
homesigns. We explore this possibility in this section.

3.1. The hearing parents’ gestures do not exhibit the properties of homesign

Hearing parents gesture when they talk to young children (Bekken 1989; Shatz 1982;
Iverson et al. 1999) and the hearing parents of homesigners are no exception. As
mentioned earlier, the deaf children’s parents are committed to teaching their children
to talk and send them to oral schools. These schools advise the parents to talk to their
children as often as possible. And when they talk, they gesture. The question is whether
the parents’ gestures display the language-like properties found in homesign, or
whether they look just like any hearing speaker’s gestures.
To find out, Goldin-Meadow and Mylander (1983, 1984) analyzed the gestures that
the mothers of six American homesigners produced when talking to their deaf children.
In each case, the mother was the child’s primary caretaker. Goldin-Meadow and My-
lander used the analytic tools developed to describe the deaf children’s homesigns to
describe the mothers’ gestures ⫺ they turned off the sound and coded the mothers’
gestures as though they had been produced without speech. In other words, they at-
tempted to look at the gestures through the eyes of a child who cannot hear.
Not surprisingly, all six mothers used both pointing and iconic gestures when they
talked to their children. Moreover, the mothers used pointing and iconic gestures in
roughly the same distribution as their children. However, the mothers’ use of gestures
did not resemble their children’s homesigns along many dimensions.
First, the mothers produced fewer different types of iconic gestures than their chil-
dren, and they also used only a small subset of the particular iconic gestures that their
children used (Goldin-Meadow/Mylander 1983, 1984).
Second, the mothers produced very few gesture combinations. That is, like most
English-speakers (McNeill 1992), they tended to produce one gesture per spoken
clause and rarely combined several gestures into a single, motorically uninterrupted
unit. Moreover, the very few gesture combinations that the mothers did produce did
not exhibit the same structural regularities as their children’s homesigns (Goldin-
Meadow/Mylander 1983, 1984). The mothers thus did not appear to have structured
their gestures at the sentence level.
Nor did the mothers structure their gestures at the word level. Each mother used
her gestures in a more restricted way than her child, omitting many of the handshape
and motion morphemes that the child produced (or using the ones she did produce
more narrowly than the child), and omitting completely a very large number of the
handshape/motion combinations that the child produced. Indeed, there was no evi-
dence at all that the mothers’ gestures could be broken into meaningful and consistent
parts (Goldin-Meadow/Mylander/Butcher 1995).
Finally, the hearing mothers’ iconic gestures were not stable in form and meaning
over time while their deaf children’s homesigns were. Moreover, the hearing mothers
did not distinguish between gestures serving a noun role and gestures serving a verb
role. As argued in section 2.4, the deaf children made this distinction in their homesigns
(Goldin-Meadow et al. 1994).
Did the deaf children learn to structure their homesign systems from their mothers?
Probably not ⫺ although it may have been necessary for the children to see hearing
people gesturing in communicative situations in order to get the idea that gesture can
be appropriated for the purposes of communication. But in terms of how the children
structure their homesigns, there is no evidence that this structure came from the chil-
dren’s hearing mothers. The hearing mothers’ gestures do not have structure when
looked at with tools used to describe the deaf children’s homesigns (although they do
when looked at with tools used to describe co-speech gestures, that is, when they are
described in relation to speech).

3.2. Why don’t the hearing parents’ gestures look like homesign?

The hearing mothers interacted with their deaf children on a daily basis. Therefore we
might have expected that their gestures would eventually have come to resemble their
children’s homesigns (or vice versa). But they didn’t. Why, then, didn’t the hearing
parents display language-like properties in their gestures? The parents
were interested in teaching their deaf children to talk, not gesture. They therefore
produced all of their gestures with speech ⫺ in other words, their gestures were co-
speech gestures and had to behave accordingly. The gestures had to fit, both temporally
and semantically, with the speech they accompanied. As a result, the hearing parents’
gestures were not ‘free’ to take on language-like properties.
In contrast, the deaf homesigners had no such constraints. They had no productive
speech and thus always produced gesture on its own, without talk. Moreover, because
the manual modality was the only means of communication open to the children, it
had to take on the full burden of communication. The result was language-like struc-
ture. Although the homesigners may have used their hearing parents’ gestures as a
starting point, it is very clear that they went well beyond that point. They transformed
the co-speech gestures they saw into a system that looks very much like language.
But what would have happened if the children’s hearing parents had refrained from
speaking as they gestured? Once freed from the constraints of speech, perhaps the
parents’ gestures would have become more language-like in structure, assuming the
segmented and combinatorial form that characterized their children’s homesigns. In
other words, the mothers might have been more likely to use gestures that mirrored
their children’s homesigns if they kept their mouths closed. Goldin-Meadow, McNeill
and Singleton (1996) tested this prediction by asking hearing speakers to do just that ⫺
use their hands and not their mouths to describe a series of events.
The general hypothesis is that language-like properties crop up in the manual mo-
dality when it takes on the primary burden of communication, not when it shares the
burden of communication. To test the hypothesis, Goldin-Meadow and colleagues
(1996) examined hearing adults’ gestures when those gestures were produced with
speech (sharing the communicative burden) and when they were produced instead of
speech (shouldering the entire communicative burden). As expected, the gestures the
adults produced without speech displayed properties of segmentation and combination
and thus were distinct from the gestures the same adults produced with speech.
When they produced gesture without speech, the adults frequently combined those
gestures into strings and these strings were consistently ordered, with gestures for cer-
tain semantic elements occurring in particular positions in the string; that is, there was
structure across the gestures at the sentence level (Goldin-Meadow/McNeill/Singleton
1996; see also Gershkoff-Stowe/Goldin-Meadow 2002). In addition, the verb-like ac-
tion gestures that the adults produced could be divided into handshape and motion
parts, with the handshape of the action gesture frequently conveying information about
the objects in its semantic frame; that is, there was structure within the gesture at the
word level (although the adults did not develop a system of contrasts within their
gestures, that is, they did not develop the morphological system characteristic of
homesign; Goldin-Meadow/Gelman/Mylander 2005; Goldin-Meadow/Mylander/Frank-
lin 2007). Thus, the adults produced gestures characterized by segmentation and combi-
nation and did so with essentially no time for reflection on what might be fundamental
to language-like communication.
Interestingly, when hearing speakers of a variety of languages (English, Chinese,
Turkish, and Spanish) are asked to describe a series of events using only their hands,
they too produce strings of segmented gestures and their gesture strings are character-
ized by consistent order. Moreover, they all create the same gesture order, despite the
fact that they use different orders (the predominant orders of their respective lan-
guages) when describing the same scenes in speech (Goldin-Meadow et al. 2008). This
shared gesture order is SOV ⫺ precisely the order that we see young Chinese
and American homesigners use (OV, with the S omitted, Goldin-Meadow/Mylander
1998) and also the order that has been found in a newly emerging sign language devel-
oped in a Bedouin community in Israel (Al-Sayyid Bedouin Sign Language; Sandler
et al. 2005; see also chapter 24 on shared sign languages). This particular order may
reflect a natural sequencing that humans exploit when creating a communication sys-
tem over short and long timespans.
The appearance of segmentation and combination in the gestures hearing adults
produce without speech is particularly striking given that these properties are not
found in the gestures hearing adults produce with speech (Goldin-Meadow/McNeill/
Singleton 1996). Co-speech gestures are not used as building blocks for larger sentence
or word units and are used, instead, to imagistically depict the scenes described in the
accompanying speech.

4. From co-speech gesture to homesign

Homesigners are not exposed to a model of a conventional language to which they
can apply their language-learning skills, but they are exposed to the gestures that the
hearing speakers who surround them use when they communicate. The question is how
deaf children transform the input they do receive, co-speech gesture, into a system of
communication that has many of the properties of language, that is, into homesign.

4.1. Examining homesign around the globe

How can we learn more about the process by which co-speech gesture is transformed
into homesign? The fact that hearing speakers across the globe gesture differently
when they speak (Özyürek/Kita 1999; Kita 2000; see also chapter 27) affords us with
an excellent opportunity to explore if ⫺ and how ⫺ deaf children make use of the
gestural input that their hearing parents provide. We can thus observe homesign
around the globe and examine the relation between the co-speech gestures homesign-
ers see as input and the communication systems they produce as output. There are, in
fact, descriptions of homesigns created by individuals from a variety of different coun-
tries: Bangladesh (Morford 1995); Belgium (Tervoort 1961); Great Britain (MacLeod
1973); the Netherlands (Tervoort 1961); Nicaragua (Coppola/Newport 2005; Senghas
et al. 1997); Papua New Guinea (Kendon 1980a,b,c); Rennell Island (Kuschel 1973);
United States (Goldin-Meadow 2003b); and the West Indies (Morford 1995). However,
these homesign systems have not been described along the same dimensions, nor have
the co-speech gestures that might have served as input to the systems been studied.
Selecting languages that vary along a particular dimension, with co-speech gestures
that vary along that same dimension, is an ideal way to explore whether co-speech
gesture serves as a starting point for homesign. For example, the gestures that accom-
pany Spanish and Turkish look very different from those that accompany English and
Mandarin (see chapter 27 for details). As described by Talmy (1985), Spanish and
Turkish are verb-framed languages, whereas English and Mandarin are satellite-framed
languages. This distinction depends primarily on the way in which the path of a motion
is packaged. In a satellite-framed language, path is encoded outside of the verb (e.g.,
down in the sentence ‘he flew down’) and manner is encoded in the verb itself (flew).
In contrast, in a verb-framed language, path is bundled into the verb (e.g., sale in the
Spanish sentence ‘sale volando’ = exits flying) and manner is outside of the verb (vol-
ando). One effect of this typological difference is that manner is often omitted from
Spanish sentences (Slobin 1996).
However, McNeill (1998) has observed an interesting compensation ⫺ although
manner is omitted from Spanish-speakers’ talk, it frequently crops up in their gestures.
Moreover, and likely because Spanish-speakers’ manner gestures do not co-occur with
a particular manner word, their gestures tend to spread through multiple clauses
(McNeill 1998). As a result, Spanish-speakers’ manner gestures are longer and may be
more salient to a deaf child than the manner gestures of English- or Mandarin-speak-
ers. Turkish-speakers also produce gestures for manner relatively frequently. In fact,
Turkish-speakers commonly produce gestures that convey only manner (e.g., fingers
wiggling in place = manner alone vs. fingers wiggling as the hand moves forward =
manner + path; Özyürek/Kita 1999; Kita 2000). Manner-only gestures are rare in Eng-
lish- and Mandarin-speakers.
These four cultures ⫺ Spanish, Turkish, American, and Chinese ⫺ thus offer an
excellent opportunity to examine the effects of hearing speakers’ gestures on the home-
sign systems developed by deaf children. If deaf children in all four cultures develop
homesign systems with the same structure despite differences in the gestures they see,
we will have strong evidence of the biases children themselves must bring to a commu-
nication situation. If, however, the children differ in the homesign systems they con-
struct, we will be able to explore how a child’s construction of a language-like system
is influenced by the gestures she sees. We know from previous work that American
deaf children exposed only to the gestures of their hearing English-speaking parents
create homesign systems that are very similar in structure to the homesign systems
constructed by Chinese deaf children exposed to the gestures of their hearing Manda-
rin-speaking parents (Goldin-Meadow/Gelman/Mylander 2005; Goldin-Meadow/My-
lander 1998; Goldin-Meadow/Mylander/Franklin 2007; Zheng/Goldin-Meadow 2002).
The question for future work is whether these children’s homesign systems differ from
those created by Spanish and Turkish deaf children of hearing parents.
As a first step in this research program, Özyürek et al. (2011) presented vignettes
designed to elicit descriptions of path and manner to Turkish-speaking adults and chil-
dren and to Turkish homesigners. They found that, although the Turkish-speakers men-
tioned both path and manner in their speech, very few produced both in their gestures;
they preferred instead to produce only gestures for path along with their speech. In
contrast, the Turkish homesigners frequently produced both path and manner within
the same sentence. This outcome makes sense since the manual modality was the sole
means of communication available to the homesigners; the speakers could (and did)
use both gesture and speech. To determine whether the fact that the manual modality
was the homesigners’ only means of communication led to their production of both
path and manner in gesture, they asked the adult speakers to describe the vignettes
again, this time using only their hands and not their mouths. In this condition, the
adults produced gestures for both path and manner, just as the homesigners did. Impor-
tantly, however, the form that the hearing adults used to express path and manner
differed from the homesigners’ form. The hearing adults tended to conflate path and
manner into a single gesture (e.g., rotating the index finger while moving it forward),
whereas the homesigners produced separate signs for path and manner (e.g., rotating
the index finger; then moving the index finger forward). Thus, communicative pressure
led the homesigners and the hearing adults to explicitly mention both path and manner
with their hands. However, it did not dictate the form ⫺ the homesigners segmented
the two meanings into separate signs; the hearing adults conflated them into a single
gesture. This same pattern has been found in comparisons of co-speech gesture and
the early stages of a newly emerging sign language (Nicaraguan Sign Language
(ISN)) ⫺ the signers segmented path and manner into separate signs; the gesturers
conflated them (Senghas/Kita/Özyürek 2004; see also chapter 27).
Although the Turkish results underscore once again that co-speech gesture cannot
serve as a straightforward model for homesign, they do not tell us whether the gestures
have any influence at all on the homesigns. To address this issue, we need to compare
homesigners who see gestures produced by speakers of a satellite-framed language
(e.g., English-speakers who tend to conflate path and manner into a single gesture) to
homesigners who see gestures produced by speakers of a verb-framed language (e.g.,
Turkish-speakers who conflate path and manner less often than English-speakers). If
co-speech gesture is influencing homesign, we would expect American homesigners to
segment their path and manner gestures, but to do so less often than Turkish homesign-
ers. Future work is needed to address this question.
In one sense, we ought not expect big differences in homesigns as a function of the
co-speech gestures that surround them. After all, co-speech gestures have a great deal
in common. No matter what language they speak, hearing speakers tend to produce
gestures one at a time, rarely combining their gestures into connected strings. More-
over, they all produce gestures for the same semantic elements (elements central to
action relations) and in the same distribution. Aside from a few differences in the way
that speakers of typologically distinct languages package path and manner in gesture
(differences that have the potential to influence the amount of sequencing the deaf
children introduce into their gesture systems), the gestures that hearing speakers use
are remarkably similar. However, when hearing speakers are asked to abandon speech
and use only their hands to communicate, their gestures change and take on a variety
of language-like properties (e.g., the gestures are likely to appear in connected strings;
the strings are characterized by order). What would happen if a homesigner were
exposed to gestural input of this sort?
Most of the homesigners who have been extensively studied thus far were being
educated orally. Their hearing parents had been advised to use speech with their chil-
dren and, as a result, the gestures the parents produced were almost always produced
with speech. If, however, there were no oral education available for deaf children and
no pressure put on parents to speak to their deaf children, hearing parents of deaf
children might talk less and gesture more. This appears to be the case in rural Nicara-
gua. Hearing parents frequently produce gestures without any talk at all when attempt-
ing to communicate with their deaf children (Coppola/Goldin-Meadow/Mylander
2006). These children (who have not been exposed to ISN) thus routinely see gestures
produced without speech. Will this gestural input, which is likely to be more language-
like in structure than the gestural input received by homesigners who are being edu-
cated orally (see section 3.2), lead to the construction of a more linguistically sophisti-
cated homesign system? Future work is needed to address this question and, in so
doing, tell us if and how homesigners use the gestural input they see in constructing
their communication systems.

4.2. Implications for language learning and the creation of sign languages

The homesigns described in this chapter are created by individual children without the
support of a community, indeed, without the support of a partner who knows and uses
the system. Nonetheless, homesigns contain many, although not all, of the properties
of natural languages, suggesting that these properties are fundamental to human com-
munication and do not need to be handed down from generation to generation in the
form of a codified system. They can instead be invented de novo by a child who lives
in a community but does not share her communication system with that community.
The properties of language that are found in homesign are truly resilient (Goldin-
Meadow 1982, 2003b).
It is worth noting that compositional structure, one of the defining features of lan-
guage found in homesign and, in this sense, resilient, does not arise in human communi-
cation in all circumstances. Selten and Warglien (2007) asked hearing adults to commu-
nicate with one another using a computer. The adults’ task was to develop a common
code referring to geometrical figures that differed from one another by up to three
features. The code had to be made up of a limited repertoire of letters, and each letter
had a cost. The interesting result from the point of view of the present discussion is
that compositional structure was created only in an environment that had novelty, that
is, only when the adults were forced to communicate about new figures that had not
been described before. Homesigners are, in a sense, always in a situation where they
must express novelty ⫺ thoughts for which they do not have a previously established
code. Selten and Warglien’s (2007) results suggest that such a situation leads naturally,
perhaps inexorably, to compositional structure in human communicators.
The properties of homesign may also hold a special place in the analysis of sign
languages. It is likely that many, if not all, current-day sign languages have their roots
in homesign (Fusellier-Souza 2006). Homesigns appear to have much in common even
if developed in very different circumstances around the globe. These shared properties
may reflect linguistic capacities that all human beings possess, or perhaps constraints
imposed by the manual modality itself. Whatever the origin of the commonalities that
characterize homesign, charting the differences between conventional sign languages
and homesign can offer insight into the pressures that move languages away from their
original starting point. Languages respond to, and are likely shaped by, a variety of
pressures; for example, the need to be semantically clear, to be processed efficiently,
to be rhetorically interesting (Slobin 1977). Homesign may rely on patterns that have
the virtue of semantic clarity, for both producer and receiver. But as a language com-
munity grows and the language functions become more complex, additional pressures
may exert their influence on language form, in some cases pushing it away from its
homesign roots. Homesign thus offers us a glimpse into the most fundamental proper-
ties of language and provides an anchor point against which to examine the trajectories
sign languages (and perhaps all languages) take as they evolve.

Acknowledgements: This research was supported by grant no. R01 DC00491 from
NIDCD. Thanks to all of my many collaborators (beginning with Lila Gleitman and
Heidi Feldman in Philadelphia and Carolyn Mylander in Chicago) for their invaluable
help in uncovering the structure of homesign, and to the children and their families
for welcoming us into their homes.

5. Literature
Bekken, Kaaren
1989 Is there “Motherese” in Gesture? PhD Dissertation, University of Chicago.
Bloom, Lois
1970 Language Development: Form and Function in Emerging Grammars. Cambridge, MA:
MIT Press.
Bonvillian, John D./Orlansky, Michael D./Novack, Lesley L.
1983 Developmental Milestones: Sign Language Acquisition and Motor Development. In:
Child Development 54, 1435⫺1445.
Brentari, Diane/Coppola, Marie/Mazzoni, Laura/Goldin-Meadow, Susan
2012 When Does a System Become Phonological? Handshape Production in Gesturers, Sign-
ers, and Homesigners. In: Natural Language and Linguistic Theory 30(1), 1⫺31.
Butcher, Cynthia/Mylander, Carolyn/Goldin-Meadow, Susan
1991 Displaced Communication in a Self-styled Gesture System: Pointing at the Non-present.
In: Cognitive Development 6, 315⫺342.
Coppola, Marie/Newport, Elissa L.
2005 Grammatical ‘Subjects’ in Home Sign: Abstract Linguistic Structure in Adult Primary
Gesture Systems Without Linguistic Input. In: Proceedings of the National Academy of
Sciences 102, 19249⫺19253.
Coppola, Marie/Senghas, Ann
2010 Deixis in an Emerging Sign Language. In: Brentari, Diane (ed.), Sign Languages (Cam-
bridge Language Surveys). Cambridge: Cambridge University Press, 543⫺569.
Coppola, Marie/Goldin-Meadow, Susan/Mylander, Carolyn
2006 How Do Hearing Parents Communicate with Deaf Children? Comparing Parents’
Speech and Gesture Across Five Cultures. Poster Presented at the Society for Research
on Child Language Disorders, Madison, WI.
Du Bois, John W.
1987 The Discourse Basis of Ergativity. In: Language 63, 805⫺855.
Fant, Louis J.
1972 Ameslan: An Introduction to American Sign Language. Silver Spring, MD: National
Association of the Deaf.
Feldman, Heidi/Goldin-Meadow, Susan/Gleitman, Lila
1978 Beyond Herodotus: The Creation of Language by Linguistically Deprived Deaf Chil-
dren. In: Lock, Andrew (ed.), Action, Symbol, and Gesture: The Emergence of Lan-
guage. New York: Academic Press, 351⫺414.
Franklin, Amy/Giannakidou, Anastasia/Goldin-Meadow, Susan
2011 Negation, Questions, and Structure Building in a Homesign System. In: Cognition
118(3), 398⫺416.
Fusellier-Souza, Ivani
2006 Emergence and Development of Sign Languages: From a Semiogenetic Point of View.
In: Sign Language Studies 7(1), 30⫺56.
Gershkoff-Stowe, Lisa/Goldin-Meadow, Susan
2002 Is There a Natural Order for Expressing Semantic Relations? In: Cognitive Psychology
45(3), 375⫺412.
Goldin-Meadow, Susan
1982 The Resilience of Recursion: A Study of a Communication System Developed Without
a Conventional Language Model. In: Wanner, Eric/Gleitman, Lila (eds.), Language
Acquisition: The State of the Art. New York: Cambridge University Press, 51⫺77.
Goldin-Meadow, Susan
1985 Language Development under Atypical Learning Conditions: Replication and Implica-
tions of a Study of Deaf Children of Hearing Parents. In: Nelson, Katherine (ed.),
Children’s Language, Vol. 5. Hillsdale, NJ: Lawrence Erlbaum, 197⫺245.
Goldin-Meadow, Susan
1987 Underlying Redundancy and Its Reduction in a Language Developed Without a Lan-
guage Model: The Importance of Conventional Linguistic Input. In: Lust, Barbara (ed.),
Studies in the Acquisition of Anaphora: Applying the Constraints, Vol. II. Boston, MA:
D. Reidel Publishing Company, 105⫺133.
Goldin-Meadow, Susan
1993 When Does Gesture Become Language? A Study of Gesture Used as a Primary Com-
munication System by Deaf Children of Hearing Parents. In: Gibson, Katherine R./
Ingold, Timothy (eds.), Tools, Language and Cognition in Human Evolution. New York:
Cambridge University Press, 63⫺85.
Goldin-Meadow, Susan
2003a Hearing Gesture: How Our Hands Help Us Think. Cambridge, MA: Harvard Univer-
sity Press.
Goldin-Meadow, Susan
2003b The Resilience of Language. Philadelphia, PA: Taylor & Francis.
Goldin-Meadow, Susan/Butcher, Cynthia/Mylander, Carolyn/Dodge, Mark
1994 Nouns and Verbs in a Self-styled Gesture System: What’s in a Name? In: Cognitive
Psychology 27, 259⫺319.
Goldin-Meadow, Susan/Feldman, Heidi
1977 The Development of Language-like Communication Without a Language Model. In:
Science 197, 401⫺403.
Goldin-Meadow, Susan/Gelman, Susan/Mylander, Carolyn
2005 Expressing Generic Concepts with and Without a Language Model. In: Cognition 96,
109⫺126.
Goldin-Meadow, Susan/McNeill, David/Singleton, Jenny
1996 Silence Is Liberating: Removing the Handcuffs on Grammatical Expression in the Man-
ual Modality. In: Psychological Review 103, 34⫺55.
Goldin-Meadow, Susan/Mylander, Carolyn
1983 Gestural Communication in Deaf Children: The Non-effects of Parental Input on Lan-
guage Development. In: Science 221, 372⫺374.
Goldin-Meadow, Susan/Mylander, Carolyn
1984 Gestural Communication in Deaf Children: The Effects and Non-effects of Parental
Input on Early Language Development. In: Monographs of the Society for Research in
Child Development 49, 1⫺121.
Goldin-Meadow, Susan/Mylander, Carolyn
1998 Spontaneous Sign Systems Created by Deaf Children in Two Cultures. In: Nature 91,
279⫺281.
Goldin-Meadow, Susan/Mylander, Carolyn/Butcher, Cynthia
1995 The Resilience of Combinatorial Structure at the Word Level: Morphology in Self-
styled Gesture Systems. In: Cognition 56, 195⫺262.
Goldin-Meadow, Susan/Mylander, Carolyn/Franklin, Amy
2007 How Children Make Language out of Gesture: Morphological Structure in Gesture
Systems Developed by American and Chinese Deaf Children. In: Cognitive Psychology
55, 87⫺135.
Goldin-Meadow, Susan/So, Wing-Chee/Özyürek, Aslı/Mylander, Carolyn
2008 The Natural Order of Events: How Speakers of Different Languages Represent Events
Nonverbally. In: Proceedings of the National Academy of Sciences 105(27), 9163⫺9168.
Greenfield, Patricia M./Savage-Rumbaugh, E. Sue
1991 Imitation, Grammatical Development, and the Invention of Protogrammar by an Ape.
In: Krasnegor, Norman A./Rumbaugh, Duane M./Schiefelbusch, Richard L./Studdert-
Kennedy, Michael (eds.), Biological and Behavioral Determinants of Language Develop-
ment. Hillsdale, NJ: Lawrence Erlbaum, 235⫺262.
Hunsicker, Dea/Mylander, Carolyn/Goldin-Meadow, Susan
2009 Are There Noun Phrases in Homesign? In: Proceedings of GESPIN (Gesture and
Speech in Interaction), Vol. 1.
Hunsicker, Dea/Goldin-Meadow, Susan
2011 Hierarchical Structure in a Self-created Communication System: Building Nominal
Constituents in Homesign. Under review.
Iverson, Jana M./Capirci, Olga/Longobardi, Emiddia/Caselli, M. Cristina
1999 Gesturing in Mother-child Interaction. In: Cognitive Development 14, 57⫺75.
Jepson, Jill
1991 Two Sign Languages in a Single Village in India. In: Sign Language Studies 20, 47⫺59.
Keenan, Edward
1976 Towards a Universal Definition of Subject. In: Li, Charles (ed.), Subject & Topic. New
York: Academic Press, 303⫺333.
Kendon, Adam
1980a A Description of a Deaf-mute Sign Language from the Enga Province of Papua New
Guinea with Some Comparative Discussion. Part I: The Formational Properties of Enga
Signs. In: Semiotica 31(1/2), 1⫺34.
Kendon, Adam
1980b A Description of a Deaf-mute Sign Language from the Enga Province of Papua New
Guinea with Some Comparative Discussion. Part II: The Semiotic Functioning of Enga
Signs. In: Semiotica 32(1/2), 81⫺117.
Kendon, Adam
1980c A Description of a Deaf-mute Sign Language from the Enga Province of Papua New
Guinea with Some Comparative Discussion. Part III: Aspects of Utterance Construc-
tion. In: Semiotica 32(3/4), 245⫺313.
Kita, Sotaro
2000 How Representational Gestures Help Speaking. In: McNeill, David (ed.), Language
and Gesture: Window Into Thought and Action. Cambridge: Cambridge University
Press, 162⫺185.
Kuschel, Rolf
1973 The Silent Inventor: The Creation of a Sign Language by the Only Deaf-mute on a
Polynesian Island. In: Sign Language Studies 3, 1⫺27.
Lenneberg, Eric H.
1964 Capacity for Language Acquisition. In: Fodor, Jerry A./Katz, Jerrold J. (eds.), The Struc-
ture of Language: Readings in the Philosophy of Language. Englewood Cliffs, NJ: Pren-
tice-Hall, 570⫺603.
Lillo-Martin, Diane
1999 Modality Effects and Modularity in Language Acquisition: The Acquisition of Ameri-
can Sign Language. In: Ritchie, William C./Bhatia, Tej K. (eds.), The Handbook of
Child Language Acquisition. New York: Academic Press, 531⫺567.
MacLeod, Catriona
1973 A Deaf Man’s Sign Language ⫺ Its Nature and Position Relative to Spoken Languages.
In: Linguistics 101, 72⫺88.
McNeill, David
1992 Hand and Mind: What Gestures Reveal About Thought. Chicago, IL: University of Chic-
ago Press.
McNeill, David
1998 Speech and Gesture Integration. In: Iverson, Jana M./Goldin-Meadow, Susan (eds.),
The Nature and Functions of Gesture in Children’s Communications (New Directions
for Child Development Series, No. 79). San Francisco: Jossey-Bass, 11⫺28.
Mohay, Heather
1982 A Preliminary Description of the Communication Systems Evolved by Two Deaf Chil-
dren in the Absence of a Sign Language Model. In: Sign Language Studies 34, 73⫺90.
Moores, Donald F.
1974 Nonvocal Systems of Verbal Behavior. In: Schiefelbusch, Richard L./Lloyd, Lyle L.
(eds.), Language Perspectives: Acquisition, Retardation, and Intervention. Baltimore:
University Park Press, 377⫺417.
Morford, Jill P.
1995 How to Hunt an Iguana: The Gestured Narratives of Non-signing Deaf Children. In:
Bos, Heleen/Schermer, Trude (eds.), Sign Language Research 1994: Proceedings of the
Fourth European Congress on Sign Language Research. Hamburg: Signum, 99⫺115.
Morford, Jill P.
2003 Grammatical Development in Adolescent First-language Learners. In: Linguistics
41(4), 681⫺721.
Morford, Jill P./Goldin-Meadow, Susan
1997 From Here to There and Now to Then: The Development of Displaced Reference in
Homesign and English. In: Child Development 68, 420⫺435.
Newport, Elissa L./Meier, Richard P.
1985 The Acquisition of American Sign Language. In: Slobin, Dan I. (ed.), The Cross-Lin-
guistic Study of Language Acquisition, Volume 1: The Data. Hillsdale, NJ: Lawrence
Erlbaum, 881⫺938.
Özyürek, Aslı/Furman, Reyhan/Kita, Sotaro/Goldin-Meadow, Susan
2011 Emergence of Segmentation and Sequencing in Motion Event Representations Without
a Language Model: Evidence from Turkish Homesign. Under review.
Özyürek, Aslı/Kita, Sotaro
1999 Expressing Manner and Path in English and Turkish: Differences in Speech, Gesture,
and Conceptualization. In: Proceedings of the Cognitive Science Society 21, 507⫺512.
Padden, Carol
1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation,
University of California, San Diego [Published 1988 by Garland Outstanding Disserta-
tions in Linguistics, New York].
Padden, Carol
1990 The Relation Between Space and Grammar in ASL Verb Morphology. In: Lucas, Ceil
(ed.), Sign Language Research: Theoretical Issues. Washington, DC: Gallaudet Univer-
sity Press, 118⫺132.
Phillips, Sarah/Goldin-Meadow, Susan/Miller, Peggy
2001 Enacting Stories, Seeing Worlds: Similarities and Differences in the Cross-cultural Nar-
rative Development of Linguistically Isolated Deaf Children. In: Human Development
44, 311⫺336.
Sandler, Wendy/Meir, Irit/Padden, Carol/Aronoff, Mark
2005 The Emergence of Systematic Grammatical Structure in a New Language. In: Proceed-
ings of the National Academy of Science 102, 2661⫺2665.
Schlesinger, Hilde
1978 The Acquisition of Bimodal Language. In: Schlesinger, Izchak (ed.), Sign Language of
the Deaf: Psychological, Linguistic, and Sociological Perspectives. New York, NY: Aca-
demic Press, 57⫺93.
Selten, Reinhard/Warglien, Massimo
2007 The Emergence of Simple Languages in an Experimental Coordination Game. In: Pro-
ceedings of the National Academy of Science 104, 7361⫺7366.
Senghas, Anne/Coppola, Marie/Newport, Elissa L./Supalla, Ted
1997 Argument Structure in Nicaraguan Sign Language: The Emergence of Grammatical
Devices. In: Hughes, Elizabeth/Hughes, Mary/Greenhill, Annabel (eds.), Proceedings
of the 21st Annual Boston University Conference on Language Development: Vol. 2.
Somerville, MA: Cascadilla Press, 550⫺561.
Senghas, Ann/Kita, Sotaro/Özyürek, Aslı
2004 Children Creating Core Properties of Language: Evidence from an Emerging Sign Lan-
guage in Nicaragua. In: Science 305, 1779⫺1782.
Shatz, Marilyn
1982 On Mechanisms of Language Acquisition: Can Features of the Communicative Envi-
ronment Account for Development? In: Wanner, Eric/Gleitman, Lila R. (eds.), Lan-
guage Acquisition: The State of the Art. Cambridge: Cambridge University Press,
102⫺127.
Slobin, Dan I.
1977 Language Change in Childhood and History. In: Macnamara, John (ed.), Language
Learning and Thought. New York, NY: Academic Press, 185⫺214.
Slobin, Dan I.
1996 Two Ways to Travel: Verbs of Motion in English and Spanish. In: Shibatani, Masayoshi/
Thompson, Sandra A. (eds.), Grammatical Constructions. Oxford: Clarendon Press,
195⫺220.
Spaepen, Elizabet/Coppola, Marie/Spelke, Elizabeth/Carey, Susan/Goldin-Meadow, Susan
2011 Number Without a Language Model. In: Proceedings of the National Academy of Scien-
ces of the United States of America 108(8), 3163⫺3168.
Talmy, Leonard
1985 Lexicalization Patterns: Semantic Structure in Lexical Forms. In: Shopen, Timothy
(ed.), Language Typology and Syntactic Description, Vol. III: Grammatical Categories
and the Lexicon. Cambridge: Cambridge University Press, 57⫺149.
Tervoort, Bernard T.
1961 Esoteric Symbolism in the Communication Behavior of Young Deaf Children. In:
American Annals of the Deaf 106, 436⫺480.
Thompson, Sandra A.
1988 A Discourse Approach to the Cross-linguistic Category ‘Adjective’. In: Hawkins, John
A. (ed.), Explaining Language Universals. Cambridge, MA: Basil Blackwell, 167⫺185.
Yau, Shun-Chiu
1992 Création Gestuelle et Début du Langage. Création de Langues Gestuelles chez les Sourds
Isolés. Hong Kong: Éditions Langages Croisés.
Zheng, Ming-yu/Goldin-Meadow, Susan
2002 Thought Before Language: How Deaf and Hearing Children Express Motion Events
Across Cultures. In: Cognition 85, 145⫺175.

Susan Goldin-Meadow, Chicago, Illinois (USA)



27. Gesture
1. Introduction
2. Gesture in spoken languages
3. Gesture in sign languages
4. Conclusion
5. Literature

Abstract
Gestures are meaningful movements of the body, the hands, and the face during commu-
nication, which accompany the production of both spoken and signed utterances. Recent
research has shown that gestures are an integral part of language and that they contribute
semantic, syntactic, and pragmatic information to the linguistic utterance. Furthermore,
they reveal internal representations of the language user during communication in ways
that might not be encoded in the verbal part of the utterance. Firstly, this chapter summa-
rizes research on the role of gesture in spoken languages. Subsequently, it gives an over-
view of how gestural components might manifest themselves in sign languages, that is,
in a situation in which both gesture and sign are expressed by the same articulators.
Current studies are discussed that address the question of whether gestural components
are the same or different in the two language modalities from a semiotic as well as from
a cognitive and processing viewpoint. Understanding the role of gesture in both sign and
spoken language contributes to our knowledge of the human language faculty as a multi-
modal communication system.

1. Introduction
It is a generally accepted view that the world’s languages can be grouped into two
main types in terms of the modality through which communicative messages are trans-
mitted. On the one hand, we have sign languages, the natural languages of Deaf com-
munities, which are transmitted mainly in the visual-gestural (spatial) modality by em-
ploying manual and non-manual articulators. On the other hand, there are spoken
languages, which use mainly the vocal-auditory channel to organize communicative
events (e.g., Meier 2002). However, this simple distinction between spoken and sign
languages does not capture the multi-modal complexity of the human language faculty.
In addition to the vocal channel, spoken languages all around the world also exploit
the visual-gestural modality for expression and use gestures accompanying speech with
the hands, face, and body as articulators (e.g., Goldin-Meadow 2003; Kendon 2004;
McNeill 1992, 2005). For example, speakers can use an ‘OK’ gesture as they utter the
word “OK”, move the fingers of an inverted V-hand in a wiggling manner while saying
“He walked across”, point to two empty spaces in front of them while saying “She
went from the bank to the supermarket”, or use bodily demonstrations of reported
actions as they tell narratives. Gestures are part of an utterance in vocal languages
and contribute semantic, syntactic, and pragmatic information to the verbal part of an
utterance. Thus, in order to be able to understand the fundamental features of our
language faculty, we need to understand how both sign and spoken languages exploit
the multi-modal nature of the human communicative ability.
This chapter will give an outline of historical as well as state-of-the art debates and
findings concerning similarities and differences between sign and spoken languages
when the multi-modal nature of expressions (produced by the hands, body, and face)
both in sign and spoken languages are taken into account. This will bring us to the
issue of how gestural components of language might manifest themselves both in sign
and spoken languages. Since recent theories and studies about gestural components in
sign language have been based on ideas about how gestures are used in spoken lan-
guages, I begin by reviewing research on gestures in spoken languages in section 2. In
section 3, I outline how some of these ideas have been transferred and adapted to our
understanding of possible gestural components in sign language.

2. Gesture in spoken languages

Even though there was early historical interest in the manual modality as part of
language, over the last century the field of linguistics evolved into a science of speech
(see Kendon (2004) for a review). Only recently have the gestures that speakers use
become a topic of inquiry in linguistics, psycholinguistics, and communication studies
(see McNeill 1992, 2005; Kendon 2004; and Kita 2008 for a review).
In this section, I will begin by giving an overview of different types of gestures that
can be used by hearing speakers (section 2.1). In section 2.2, I will discuss different
views concerning the relationship between speech and gesture.

2.1. Definition and classification of gestures

Kendon (1986, 2004) defines gestures as visible actions of the hand, body, and face that
are intentionally used to communicate and are expressed together with the verbal
utterance. These gestures are considered to manifest themselves in a continuum of
conventionalization in terms of form and meaning as well as in different semiotic types
and functions during communication (Clark 1996; Clark/Gerrig 1990; Kendon 2004;
McNeill 1992, 2005). Furthermore, while some gestures occur as accompaniments to
speech (these are sometimes categorized under the term ‘gesticulations’ such as repre-
sentational gestures, abstract points, beats), others can replace or complement speech
in an utterance or can be used without speech (such as emblems, pantomimes, or inter-
actional gestures), as explained further below. When talking about types of gestures, it
is important to keep in mind that different scholars have proposed different categories
and semiotic types of gestures used by speakers. Thus, the following list does not in-
clude all of the categories proposed so far (see Müller (2009) for a more extended
categorization or Kendon (2004) for an extended review of different classifications
proposed so far).

2.1.1. Emblems

Some gestures, such as the so-called ‘emblems’, are quite conventionalized and culture-
specific in form and meaning; examples being the ‘OK’ and ‘perfect’ gestures. There
is an arbitrary relationship between the form of an emblem and the meaning it conveys.
Emblems do not rely on the accompanying speech in terms of their production and
comprehension. In many cases, they can also replace or be used without speech. Some
of these gestures can also have illocutionary force, in that they may invite the interlocu-
tor to act in a certain way in the communicative interaction, for instance, a ‘come’
gesture asking somebody to come near or placing the index finger on the lips to ask
someone to be quiet.

2.1.2. Representational gestures

‘Representational’ gestures (sometimes also referred to as ‘iconic gestures’) are less
conventionalized and bear a more motivated (i.e., iconic) relation between their form
and the referent, action, or event they represent compared to emblems. For example,
a stirring hand movement accompanying a verbal utterance about cooking bears in
form a resemblance to the actual act of stirring. Even though such gestures are visually
motivated, the meaning they convey relies heavily on the speech phrase they accom-
pany. Experimental studies have shown that in the absence of speech, the meaning of
these gestures is highly ambiguous and not at all transparent from their form (Krauss
et al. 1991). Thus, when these gestures occur, they almost always overlap with semanti-
cally relevant speech (see examples (3) and (4) below).
Representational gestures vary in terms of their semiotic characteristics, that is, in
the way they can represent objects, actions, or events. Some examples of representa-
tional gestures are provided in (1). Müller (2009) categorizes these gestures as belong-
ing to different modes of representation; her classification is given in parentheses fol-
lowing each example.

(1) a. Moving the hands as if opening a window (enactment mode)
b. Tracing the shape of a picture in the air with two index fingers (tracing mode)
c. Hands move as if modeling bowls, boxes, etc., i.e. as if molding the objects
in a 3-dimensional way (modeling mode)
d. A flat hand represents a piece of paper (representing mode)

It is also important to note here that representational gestures do not always depict
concrete objects, actions, or events. They can also be used to represent abstract notions
and concepts such as time, ideas, etc., for instance, when moving a flat extended hand
downwards to depict the iron curtain that separated the Western from the Eastern
World (example from Müller (2009)). These types of gestures have also been termed
“metaphoric gestures” in the literature (McNeill 1992).

2.1.3. Pantomimes

‘Pantomimes’ differ from representational gestures in that they can convey meaning
on their own without speech, bear a more visually transparent relation between their
form and referent, and thus are not usually used to accompany speech, but to replace
or complement speech. Pantomime gestures can often be used in reports of actions
(as in direct quotations) and occur sequentially with the speech segment, rather than
simultaneously (Clark/Gerrig 1990; Clark 1996). Example (2) illustrates the sequential,
complementary, and pantomimic nature of a gesture as quoted action (Clark/Gerrig
1990, 783).

(2) I just got out of the car and I just [demonstration of turning around and bump-
ing head into a pole]

2.1.4. Points

Gestures can also be in the form of pointing that accompanies verbal references to
entities. Such pointing gestures can either be concrete, when targeting objects or places
in the here-and-now of the discourse participants, or abstract, when pointing to mean-
ingful abstract spaces in the gesture space in front of the speaker. The use of abstract
space and pointing in gesture space allows speakers to express coherent relationships
among the referents that figure in their discourse (McNeill/Cassell/Levy 1993). While
the meaning of abstract points would be fully ambiguous in the absence of the speech
content, points to objects in the here-and-now may sometimes unambiguously refer to
objects without speech, given shared knowledge among the participants.

2.1.5. Beats

Finally, gestures that obligatorily accompany speech can also take the form of ‘beats’,
that is, rhythmic movements of the hands with no apparent content that seem to occur
concurrently with new information or discourse contours in the speech stream. The
handshapes for these gestures may vary but unlike the previously reviewed gesture
types, there is no one-to-one mapping between their form and the meaning they
convey.

2.2. On the relation between speech and gesture: Different views

According to some views, speech and gestures (all types described in section 2.1) form
two parts of an integrated communicative system (Bernardis/Gentilucci 2006; Clark
1996; Kendon 2004; McNeill 1992, 2005). Co-speech gestures have been found to have
several functions in the communicative system just as language does. Gestures convey
co-expressive information together with the speech they accompany and they ground
the speaker’s message in the here-and-now of the speech context. For example, repre-
sentational gestures do not only depict what is imagined by the speaker; they are
also shaped by the shared gesture space among the interlocutors at the moment of
speaking, in addition to being shaped by visual aspects of the referents themselves
(Özyürek 2002). Furthermore, these gestures express aspects of the propositional or
conceptual content of the utterance which they are a part of and thus are considered
as part of “language” together with speech. Recent experimental and brain studies
have also shown that our brain processes semantic information from both speech and
gesture on a similar time course and uses overlapping neural correlates (that is, Broca’s
area – left inferior frontal cortex) providing further evidence for the two being an
integrated system (Özyürek et al. 2007; Willems/Özyürek/Hagoort 2007).
Even though researchers agree that speech and gesture are two related aspects of
the communication system, there are slightly different views on how this relation can
be characterized. Also, it has to be pointed out that different studies have focused on
different types of gestures (i.e., representational, emblems, or pantomimes) and on
cognitive versus communicative functions to characterize the relations between speech
and gesture.

2.2.1. Cognitive views on the relation between speech and gesture

According to cognitive views, gestures (in particular, representational gestures) repre-
sent aspects of imagistic thinking evoked during language production. Yet views differ
concerning the following two questions: (i) at what stage during the language produc-
tion process are gestures produced; and (ii) to what extent are they influenced by the
linguistic formulation of thinking? According to McNeill (1992, 2005), gesture and
speech are derived from an initial single unit, which he refers to as ‘Growth Point’,
composed of both types of representations ⫺ imagistic and linguistic. Both gesture and
speech are manifestations of this combined unit of representation.
However, according to another view, the Interface Hypothesis proposed in Kita
and Özyürek (2003), representational gestures and speech are best characterized as
originating from different representations: gesture from imagistic, and language from
propositional, representations. During the language production process, both represen-
tations interact. Previously, McNeill (1992) assumed that speakers’ representational
gestures should be similar across languages and cultures since gestures tap directly into
the imagistic part of the combined unit in the Growth Point. Recent findings, however,
suggest that this might not be the case and that gestural information about identical
events can be conveyed differently in languages that exhibit different lexical, semantic,
and grammatical patterning of information. Kita and Özyürek (2003), for instance,
have shown that most English speakers who describe a cartoon event in which
Sylvester tries to reach Tweety by swinging on a rope from one window to another (see
Figure 27.1 for stills from the elicitation clip) use the phrase “swing across/over”, which
encodes both the manner (the arc) and the path (across/over) of the motion. Both
these aspects are also usually encoded in their co-speech gesture, an arc-shaped
trajectory gesture. A prototypical combination of an English utterance with a gesture is
shown in (3).

Fig. 27.1: Stills from the cartoon used to elicit English and Turkish narratives.

(3) English co-speech gesture
Speech: [swings over to]a Tweety’s
Gesture: right hand: index finger moves in an arc movement from right to left
a The brackets indicate the portion of the speech segment with which the stroke (i.e. the
meaningful part) of the gesture overlaps.

(4) Turkish co-speech gesture
Speech: ordan [atlıyor]
        from-there jumps
Gesture: right hand: index finger moves to left laterally

In contrast, Japanese and Turkish speakers, who do not have a manner verb compa-
rable to English ‘swing’ in their lexicon, commonly use a phrase meaning ‘go across’
to refer to Sylvester’s motion. Interestingly, they also omit the arc in their gesture and
instead use a straight gesture. The Turkish speaker in (4), for instance, uses such a
gesture in combination with the verb atlamak (‘to jump’).
In light of these findings, it appears that gestures do not reflect the imagistic repre-
sentations of speakers directly but rather reflect their imagery as shaped by a language-
specific conceptualization of the event components. Thus, according to cognitive views,
speech and gesture reflect two linked representational systems active during lan-
guage production.

2.2.2. Functional/communicative views on the relation between speech and gesture

Finally, according to functional accounts, gestures and speech function together ⫺ as a
composite multi-modal expression ⫺ to convey the communicator’s intended message
(Clark 1996; Kendon 2004, 2008). In this “multi-modal utterance view”, each modality
might convey information in different semiotic formats depending on the communica-
tor’s intent or the interactional context. According to Clark (1996), gestures that are
clear demonstrations of actions in direct quotes (i.e., pantomimic gestures), and that
are produced sequentially with speech (see example (2) above), are prime examples
of the multi-modal utterance view. Clark proposes that such gestures should be consid-
ered as a ‘component’ of language. It is implicit in the multi-modal utterance view that
speakers distribute the intended message over speech and gesture depending on the
communicative intent of the speaker, but unlike the cognitive views discussed above,
this view does not give a processing account of the interaction between the two modali-
ties during production.

2.2.3. Summary: Gesture and speech as part of language

Thus, no matter how the link between speech and gesture is characterized, it has
recently become clear that characterizations of language which only take into account as-
pects that are expressed through speech do not offer a comprehensive view of our
language capacity. Rather, both speech and gesture should be taken into account since
gestures are an integral part of language in terms of conveying semantic, syntactic, and
pragmatic information. Moreover, they play a role in conceptualization during
speaking.

3. Gesture in sign languages


This expanded view of language, which takes both speech and gesture to be part of
the same linguistic and cognitive system, has recently made an impact in the field of
sign language studies. After all, if gesture is an integral part of language, then it should
also manifest itself in sign languages. In almost all studies on spoken languages, the
gestural component of language has been taken to be confined to what is expressed
by the manual and non-manual articulators (but see Okrent (2002), who suggests that
gestural components can also be expressed by the vocal-auditory channel, for example,
by vowel lengthening). This has led to the question of how gestural components might
be integrated in sign languages, which convey all communicative expressions in the
visuo-spatial modality. Historical developments in sign language research have only
recently made it possible to seek answers to such a question.
In early attempts to prove that sign languages are as complex in their linguistic
structure as spoken languages (e.g., Stokoe 1960; Tervoort 1961), the idea that sign
language expressions might also include gestural components was not widely accepted.
This was due to the fact that, at that time, sign languages began to be studied from the
point of view of structuralist linguistic models developed for spoken languages
(Kendon 2008; Meier 2002). In order to show that sign languages are natural languages
on a par with spoken languages, researchers emphasized the similarities between spo-
ken and sign language structures. Indeed, in spite of the differences in the main modal-
ity through which meaning is conveyed, sign languages have been shown to share basic
linguistic properties with spoken languages on the levels of phonology, morphology,
and syntax (Battison 1978; Klima/Bellugi 1979; Liddell 1980; Padden 1983; Stokoe
1960; Supalla 1982). Sign languages of different countries have been shown to vary in
terms of their vocabularies, form distinctions, and word order (Meier 2002; Zeshan
2004; also see chapter 12, Word Order). Furthermore, similar neural structures have
been found to support processing of both sign and spoken languages (Poizner et al.
1987; see also chapter 31, Neurolinguistics), and the acquisition of both types of lan-
guages shows a similar developmental progression (Newport/Meier 1985; see also chap-
ter 28, Acquisition). These findings have led to the conclusion that some fundamental
features of language are independent of the modality of expression and pattern simi-
larly in both spoken and sign languages.
However, recent studies have shown that in some core domains of linguistic expres-
sion, sign languages also exhibit interesting modality-specific patterns (Meier 2002;
Woll 2003; see also chapter 25 on language and modality). Such modality effects are
attested in, for instance, pronominalization, marking of arguments in directional
(agreement) verbs (e.g., give, ask), role shifts in reports of actions and quotations, and
in the expression of spatial relations (Emmorey 2002; Liddell 2003; Talmy 2003). These
modality-specific properties have raised doubts with regard to whether the respective
sign language structures can be analyzed in the same way as the corresponding linguis-
tic structures observed in spoken languages, or whether they should rather be analyzed
as “gestural” components in sign languages or as a combination of linguistic and ges-
tural components. In addition, in these domains, more similarities across sign languages
have been found than across spoken languages (Aronoff et al. 2003; Aronoff/Meir/
Sandler 2005; Newport/Supalla 2000; Woll 2003). Recent neuroimaging studies have
also reported modality-specific differences in the localization of brain structures for
sign versus spoken languages (e.g., Bavelier et al. 1998; MacSweeney et al. 2002; Neville
et al. 1997).
This section consists of three parts. In section 3.1, I discuss how gestures can be
characterized differently from signs in terms of various dimensions. Section 3.2 presents
a number of possible candidates for (manual and non-manual) gestural components in
sign languages. Finally, in section 3.3, I briefly address the issue of grammaticalization
of gestures in sign language.

3.1. Gesture vs. sign

McNeill (1992, 2000, 2005) and Kendon (1982) have proposed several continua for the
conventionalization and formation of linguistic features from gesture to sign language.
McNeill (2000) offers different continua each reflecting separate dimensions according
to which relations between gesture and sign can be characterized (see Table 27.1).
In the continuum of linguistic properties, gesticulations (representational gestures)
and pantomimes both lack linguistic properties. They are non-morphemic, are not sub-
ject to phonological constraints, and cannot be combined with other gestures in a rule-
governed fashion. Emblems show some linguistic constraints in that well-formed and
ill-formed ways of producing an emblematic gesture can be distinguished. In the ‘OK’
gesture, for instance, the circle should be formed by the thumb and the index finger
and not by thumb and middle finger. Still, emblems are not fully linguistic since they
do not combine with others beyond the lexical level. Sign languages obey all linguistic
constraints at the lexical and syntactic levels.
According to the conventionalization continuum (i.e., the extent to which form and
meaning mapping is socially constituted), gesticulation and pantomime are also consid-
ered to be at the lower end of the continuum compared to emblems and signs. Gesticu-
lations in particular are considered to be idiosyncratic and formed anew at the moment
of speaking, depending on the imagery, the context, and the accompanying linguistic
properties of the speech. As has been pointed out in section 2.1.2, representational
gestures would be meaningless to the interlocutor in the absence of speech due to their
lack of conventionalization. Emblems and signs, on the other hand, are recognizable
by the members of the community in which they arose because they are highly conven-
tionalized.
Finally, the gesture to sign continuum also reflects different semiotic characteristics
along the following two dimensions: global vs. segmented and synthetic vs. analytic.
Tab. 27.1: Continuum from gesture to sign in terms of linguistic properties, conventionalization,
and semiotics

                       Gesticulation
                       (representational   Pantomime      Emblems        Sign Language
                       gestures)
linguistic properties  ⫺                   ⫺              some           +
conventionalization    ⫺                   ⫺              +              +
semiotics              [+global]           [+global]      [+segmented]   [+segmented]
                       [+synthetic]        [+analytic]    [+synthetic]   [+analytic]

Representational gestures and pantomimes can be characterized as conveying meaning
globally in that they cannot be deconstructed into independent and meaningful el-
ements. Rather, their meaning is determined by the meaning of the whole. In contrast,
emblems and linguistic signs are composed of phonological and morphological compo-
nents, which are combined in hierarchical and rule-governed ways. Moreover, gesticu-
lations and emblems are taken to convey meaning synthetically ⫺ a single unit synthe-
sizes meanings that would otherwise be spread over an entire utterance ⫺ whereas in pantomime and signs,
each meaning is conveyed by a single analytic unit (see Goldin-Meadow et al. (1996)
for the emergence of analytic representations when speakers are asked to pantomime
and a comparison to their gestures used during speaking; however, this does not, of
course, mean that pantomimes are as analytic as sign languages are).
Note that McNeill (2000) mentions a fourth continuum which deals with the relation
of gestures and signs to speech (not included in Table 27.1). He notes that while gestic-
ulation always occurs concurrently with speech, emblems do not necessarily accompany
speech. Pantomime, on the other hand, is characterized by an absence of speech, and
signs obviously do not need speech in order to be produced and understood (although
bimodal bilinguals can produce signs and speech simultaneously).
As a demonstration of the fact that representational gestures have semiotic and
linguistic properties different from those of linguistic forms in spoken and, most impor-
tantly, sign languages (cf. Table 27.1), consider the examples in (5) and (6). Both exam-
ples were elicited by asking an English speaker and a German Sign Language (DGS)
signer, respectively, to describe the same cartoon event, which shows Sylvester cata-
pulting himself upwards to get Tweety from a window sill, grasping the bird, and com-
ing down holding the bird (see Figure 27.2 for stills from the elicitation clip).

Fig. 27.2: Stills from the cartoon used to elicit English and German Sign Language (DGS) narra-
tives.

In the English example (5), speech expresses components of the event, such as
grasping the bird and going down, by means of different lexical items, combined in a
phrase structure, whereas in gesture, these components are represented globally. After
the speaker uses a fist gesture to represent grabbing the bird (5a), this handshape is
retained as the speaker moves her fist hand down (5b). In the gesture in (5b), Sylvester
is represented both as holding the bird ⫺ the speaker’s hand represents Sylvester’s
hand from a character perspective ⫺ and as an entity going down ⫺ representing
Sylvester as a whole from an observer perspective (see chapter 19, Use of Sign Space,
for discussion of the use of different signing perspectives). These two aspects of the
event ⫺ holding and going down ⫺ are represented in one single gesture that cannot
be analyzed by deconstructing its elements into separate meaningful parts but that can
only be understood globally.

However, when the same event is described by a DGS signer, the events of grasping
and going down are described by separate signs, so-called “classifier predicates” (see
chapter 8), as shown in (6a) and (6b), and cannot be combined into one sign. The
grasping event is depicted by a Handling classifier predicate which expresses grabbing/
holding the bird (6a) while the event of Sylvester going down is represented by an
Entity classifier predicate (inverted V-handshape) which represents Sylvester as a two-
legged entity going down (6b) and crucially without the holding component. Such a
depiction of event components in a segmented (i.e., each classifier predicate as a sepa-
rate morpheme) and combinatorial way is characteristic of spoken and sign languages
but not of co-speech representational gestures (Perniss/Özyürek 2007; submitted).

(5) English co-speech gesture
a. b.
Speech: He [grabs] the bird and he [goes back]
Gesture: right hand: fist hand grabs and moves down

(6) German Sign Language (DGS)
a. b.
right hand: hcl:grab/hold-small.objecta ecl:two.legged.entity-go.down
left hand: ecl:flat.surface
‘(Sylvester) grabs (the bird), goes down, and lands on the street.’
a HCL: Handling classifier; ECL: Entity classifier

3.2. Gestures in sign language

Given that in sign languages, the same articulators compete for gestural and linguistic
components of expression, it might seem unlikely at first sight that gesture production
would figure prominently in sign languages. Some recent studies, however, argue that
gestural components do play a role in sign production. This argument is based on the
insight that sign languages exhibit modality-specific patterns and have ⫺ due to the
visual-gestural modality ⫺ the potential to directly access imagistic, analog, iconic, or
spatio-temporal representations (e.g., Aronoff et al. 2003; Janzen/Shaffer 2002; Liddell
2003; Liddell/Metzger 1998; Rathmann/Mathur 2002; Talmy 2003; Wilcox 2004; Zeshan
2003). For example, the sign language structure that expresses that someone is standing
at a certain location ⫺ an inverted V-handshape located in the signing space ⫺ is more
iconic to the event than the corresponding English expression “He is standing there”
and thus might have more direct access to imagistic representations, as gestures do.
Thus, for exactly these modality-specific domains (i.e., for the expression of action
and space), some researchers have suggested that gestural components exist in sign
languages. In fact, the possible existence of ‘pantomimic’ gestures in sign languages
has already been acknowledged by Klima and Bellugi (1979). In their view, sign lan-
guages utilize a wide range of gestural devices from conventionalized signs to mimetic
elaboration on those signs, to mimetic depiction, to free pantomime. However, the
proposal that gestural components akin to representational gestures in spoken lan-
guages also occur in sign languages is more recent. Some accounts of gestural compo-
nents within sign languages will be sketched in the following three sections. We will
first consider sequential and simultaneous manual and body gestures (sections 3.2.1
and 3.2.2) and then turn to non-manual gestures (section 3.2.3).

3.2.1. Sequential manual and body gestures

Emmorey (1999) has argued that signers may make use of “demonstrative gestures”
or pantomimes which are expressed sequentially, that is, in alternation with signs. These
gestures resemble demonstrations of quoted actions used by speakers as discussed by
Clark and Gerrig (1990). Emmorey shows that, in order to quote actions of others, a
signer may momentarily stop signing, go into a demonstration mode, in which he uses
his face and body to visualize a character’s actions, and then resume the articulation
of manual linguistic signs. In such cases, the signer produces signs and gestures sequen-
tially, in a way similar to demonstrative or conventional gestures. In the American Sign
Language (ASL) example in (7), a signer is describing a scene from the Frog Story in
which a boy peers over a log, spots a group of baby frogs, and gestures to a dog sitting
next to him to be quiet and to come over to the log (Emmorey 1999, 146).

(7) look/ come on, shhh, come on, thumb-point, well what? come on/ [ASL]
cl:two-legged-creatures-move
‘Look over here. (gesture: come on, shhh, come on, thumb-point, well what?
come on). The two crept over (to the log).’

In this example, the signer uses a series of conventional (emblematic) gestures enacted
from the point of view of the boy to report what the boy says to the dog, such as come
on and shh (‘be-quiet’ gesture); these gestures intervene between the sign look and
the classifier predicate.
Similar sequential alternations between signs and enactments of actions using full
body demonstrations have been reported by Liddell and Metzger (1998). They refer
to these enactments as “constructed actions” and point out that they are used mostly
to shift between quoting actions of two different characters. However, it is still an
open question whether the use of such pantomimic actions during signing should be
considered as gestural or linguistic because they might well be obligatory and serve
dedicated syntactic functions such as role shift (Quinto-Pozos 2007).

3.2.2. Simultaneous manual and body gestures

According to a prominent current view, at least some gestural components in sign
languages are similar to representational gestures found in spoken languages in that
they are derived from imagery as suggested in cognitive models of speech and gesture
(see section 2.2.1). According to this view, in sign languages, gestures can manifest
themselves as blends, that is, as expressions in which gestural and linguistic elements
are co-produced within a single sign (Liddell 2003; Liddell/Metzger 1998; Schembri/
Jones/Burnham 2005). For example, in indicating (agreement) verbs and depicting
verbs (classifier predicates), location and movement are considered gestural compo-
nents while the handshape in both types of verbs is taken to be a linguistic morpheme.
Liddell (2003) claims that the location and movement components of these verbs are
analogical and gradient in nature rather than discrete and categorical (i.e. morphemic)
since ⫺ due to their correspondence with mental representations of space (e.g., Duncan
2002; Liddell/Metzger 1998) ⫺ such verbs can exploit an uncountable number of loca-
tions and movements (the “listability problem”). Crucially, according to Liddell, the
analogical and gradient use of locations and movements in these signs bears resem-
blance to how representational gestures accompanying speech represent location and
movement: these components are derived from imagery as in McNeill’s theory of
speech and gesture. Thus, Liddell concludes that one area where imagistic and gestural
components co-occur in sign language lexemes is in the use of signing space, that is, in
the movement and location component of indicating and depicting verbs.
Liddell’s claims have been subsequently tested in a study by Schembri, Jones, and
Burnham (2005). The authors compared event descriptions given by adult signers of
three sign languages (Australian Sign Language, Taiwan Sign Language, and ASL) to
descriptions of the same events provided by English speakers (non-signers) in a condi-
tion in which they were only allowed to use their hands but not to speak. In particular,
they compared the locations and movements of motion verbs and found that these two
components were not only similar across the three sign languages but also in the silent
gestures produced by English speakers. In contrast, it turned out that handshapes refer-
ring to entities were different in each sign language and also in the silent gestures.
According to the authors, these findings confirm Liddell’s claim that the use of the
signing space in these verbs is gestural while the handshapes (for instance, in classifier
predicates) are linguistic.
Comparing the gesture production of hearing adults to the signing of deaf children,
Casey (2003) found that hearing non-signing adults use space in a way similar to deaf
children when depicting action scenes without speech. She interprets these similarities
as evidence for the gestural origins of these sign language devices, due to the visual-
gestural modality. In contrast to Liddell (2003) and Schembri, Jones, and Burnham
(2005), however, she claims that her findings do not necessarily imply that these devices
remain gestural at further developed, that is, further grammaticalized stages of a sign
language. Supporting evidence for this assumption comes from the rapid grammaticali-
zation as observed within only three generations in an emerging sign language in Nicar-
agua and the changes in the use of signing space resulting from this grammaticalization
process (Senghas/Coppola 2001; see also chapter 36, Language Emergence and Creoli-
zation).
Interestingly, so far no research has directly compared co-speech gestures and sign
language with respect to the use of locations and movements in depictions of motion
and action (the research of Schembri et al. and Casey focused on gestures without
speaking). In fact, if both gestures in sign languages and co-speech gestures arose from
imagery, as suggested by Liddell (2003), then we would expect them to look similar.
Furthermore, most research on gestural components in signs to date has focused on
location and movement but has not compared representational modes between co-
speech gestures and classifier predicates. For example, different modes of representa-
tions in co-speech gestures as proposed by Müller (2009; see (1) above) appear to
correspond to different types of sign language classifier predicates in terms of their
semiotic properties. In particular, the tracing mode bears similarities to Size-and-Shape
Specifiers, the enactment mode corresponds to Handling classifiers, and the represent-
ing mode corresponds to Entity classifiers (see Zwitserlood (2003) and chapter 8 for
discussion of sign language classifiers). In future research, it would be interesting to
make a direct comparison of these representations as used in co-speech gestures and
sign languages. Such a comparison may help us understand which aspects of the basic
semiotic properties that the visual-spatial modality affords go through grammaticaliza-
tion processes and which remain gestural in nature.

3.2.3. Simultaneous non-manual gestures: Gestures of the face and the mouth

Recently, Sandler (2009) has proposed that there is another domain in which sign
languages might display gestural components akin to representational co-speech ges-
tures, namely in the gestures expressed by the mouth and face. Since the mouth and
face are articulators that can be used simultaneously with the manual articulators,
they might provide yet another possibility for representational gestures and linguistic
expressions to occur simultaneously, just as in co-speech gestures. In an analysis of
renditions of the Sylvester and Tweety cartoon in Israeli Sign Language, Sandler identi-
fies ways in which mouth and face movements are used to co-express information
about the characters’ actions in the cartoon that are at the same time idiosyncratic and
complementary to the manually expressed information. For example, when a signer
describes Sylvester going up through a long drainpipe to get to Tweety, his manual
articulation consists of a V-handshape entity classifier moving upward in a zigzag man-
ner. At the same time, the narrowness of the drainpipe is represented by a mouth
gesture (cheeks sucked in, lips pursed). The combination of manual and non-manual
components yields the meaning that the cat went up through the narrow pipe zigzag-
ging.
These findings show that even though mouth and face gestures might be ‘gestural’
in signers, the iconicity of these gestures is less transparent than that of representa-
tional gestures accompanying speech ⫺ mainly due to the constraints of mouth and
face as a channel to express visual components. It is important to note here that if
these components are gestural and not conventionalized, this supports the view that
gestural components need not be ‘iconic’ and can appear in any modality (Okrent
2002).
Finally, even though most research on sign and gesture has focused on representa-
tional gestures, it is also possible to observe gestures that are affective or evaluative
expressions which simultaneously accompany manual signs, just as is the case in speech.
Emmorey (1999, 151) provides the following (somewhat adapted) example of non-linguis-
tic, gestural expressions of affect taken from an ASL rendition of the Frog Story. The
facial expression of the signer (in italics) accompanying the linguistic signed expres-
sions (within brackets) switches between the perspective of the bees (angry) and that
of the dog (fearful).

(8) large-round-object-falls. [cl:swarm. mad.] [ASL]
Eyes squint, angry expression
[dog cl:run.]
Tongue out, fearful expression
[bee cl:swarm-moves.]
Eyes squint, angry expression
‘The beehive fell to the ground. The bees swarmed out. They were mad.
The dog ran away, and the bees chased him.’

3.3. Grammaticalization of gestures

Another area of sign language research has investigated how gestures that are used by
people in the surrounding hearing communities can become integrated into the linguis-
tic system of sign languages. While some studies have shown that such gestures with
similar forms might still serve similar functions in the sign language used in the same
region, others have tried to demonstrate that gestures of the hearing community may
go through a process of grammaticalization in the sign language, thereby taking on
new linguistic and pragmatic functions (see Pfau/Steinbach (2006, 2011) for a review;
see also chapter 34).
Supporting evidence for the first claim comes from the fact that some sign language
lexemes or grammatical devices resemble co-speech gestures used by speakers in the
surrounding community. McClave (2001), for instance, has shown that speakers of
American English execute slight shifts of the head and body to the right or left in
direct quote situations similar to the role shift devices used in ASL (see chapter 17,
Utterance Reports and Constructed Action). Thus she concludes that the role shift
devices should be considered as “gestural” in the sign language. Zeshan (2003) argues
that some of the Handling classifiers found in Indopakistani Sign Language show con-
siderable variation and retain the same handshapes observed in the co-speech gestures
used among speakers. She suggests, therefore, that these handshapes are more on the
gestural than on the linguistic side when placed on a grammaticalization path.
As for the grammaticalization of gestures in sign languages, it has been proposed
that this process may take two different routes (Wilcox 2007). In one route, gestures
of the speaking community become lexicalized first, before, in a second step, acquiring
a grammatical meaning. Janzen and Shaffer (2002), for example, claim that some modal
verbs in ASL (e.g. can) originate from gestures (i.e. the ‘strong’ gesture) which first
became lexical signs (i.e. strong) before developing further into modals. In the second
route, grammatical non-manual markers are grammaticalized directly from bound,
non-manual communicative gestures (e.g. eyebrows up for yes/no-questions, headshake
for negation) without going through a lexical stage. Once they enter the grammatical
system, such markers may acquire additional grammatical functions. The eyebrow posi-
tion typical of yes/no-questions, for instance, developed further into a topic marker in
ASL (Janzen/Shaffer 2002). Even manual communicative gestures may develop di-
rectly into grammatical markers. The palm-up presentation gesture, for example, has
taken on the function of a discourse marker in several sign languages (Engberg-Ped-
ersen 2002; McKee/Wallingford 2011).
Finally, while most previous research has focused on the grammaticalization of con-
ventional gestures used in the speaking community, recent research has investigated
how motion predicates in emerging sign languages compare to representational ges-
tures of motion in the speaking community. Within about 25 years and three cohorts
of signers, expressions of simultaneous manner and path (e.g. climb up) developed
linguistic patterning (segmented and analytic) in the emerging Nicaraguan Sign Lan-
guage and moved away from the global and synthetic representation of co-speech rep-
resentational gestures (Senghas/Kita/Özyürek 2004).
Thus, all types of gestures used in hearing communities can serve as the substrate
for various lexicalization and grammaticalization processes in sign languages. The
grammaticalization patterns, in particular, are informative with respect to the modality-
specific and modality-independent aspects of grammaticalization processes (Pfau/
Steinbach 2006, 2011).

4. Conclusion

This chapter has reviewed research on gestures in spoken languages and on the pos-
sible existence of similar gestural components in sign languages. Sign language research
has identified different ways in which gestures might manifest themselves in signs,
the different uses resembling emblematic, demonstrative, and representational gestures
previously identified as accompanying spoken languages. Consequently, even though a
continuum from gesture to sign exists in terms of conventionalization and emergence
of linguistic features (see Table 27.1), different semiotic levels of the continuum also
co-occur within sign languages, that is, signs and gestures can co-exist.
However, it is still a matter of debate whether the gestural components in sign and
spoken languages are similar in terms of semiotic composition as well as in terms of
their underlying cognitive representations (Emmorey 1999). According to Kendon
(2008), the semiotic modalities of signs and speech are so different that it should be
impossible to identify comparable gestural components in both language modalities,
simply because gestures are integrated with different modalities of expression. For
example, if the mouth can serve as an articulator for gestural representation in sign
language (Sandler 2009), then gestures in sign would be less iconic than gestures in
spoken languages, due to the different types of iconic mapping possibilities afforded
by the hands versus the mouth. Thus, even though both signers and speakers might
use gestural components, these components might differ in the way they are mani-
fested ⫺ or perhaps even be conceptualized (see Rathmann/Mathur (2002) for a pro-
posal that gestural components might be more obligatory in the use of verb agreement
in sign languages than in spoken languages because the visual-spatial modality as artic-
ulator is more closely linked to the imagistic aspects of conceptualization). The latter
scenario seems highly likely given the finding that different spoken languages also
make use of different co-speech gestures depending on the language-specific way of
expressing and perhaps even conceptualizing event components (Kita/Özyürek 2003).
It would also be interesting to investigate in future research whether signers of differ-
ent sign languages use different representational gestures for the same content just as
speakers of different spoken languages do.
To summarize, recent research clearly demonstrates that no matter which channel of
transmission is preferred in different systems of communication, our human language
capacity is multi-modal and is therefore able to convey information at different semi-
otic and representational levels. These initial studies make clear that further careful
research is required to understand how gestural components can be identified in sign
versus spoken languages and to facilitate further fruitful exchanges between gesture
and sign language researchers. Finally, it is important to note that the field of gesture
and sign language research is still in its initial stages and more research on co-speech
gestures in different spoken languages and sign languages is needed to understand the
fundamental features of our language faculty in its multi-modal form.

Acknowledgements: The writing of this chapter was supported by a VIDI scheme grant
from the Dutch Science Foundation (NWO) awarded to the author. I would like to
thank Pamela Perniss, Inge Zwitserlood, Richard Meier, and especially Roland Pfau
for valuable comments on the manuscript.

5. Literature
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy
2003 Classifier Constructions and Morphology in Two Sign Languages. In: Emmorey, Karen
(ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Law-
rence Erlbaum, 53⫺84.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81, 301⫺344.
Battison, Robbin
1978 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Bavelier, Daphne/Corina, David/Jezzard, Peter/Clark, Vince/Karni, Avi/Lalwani, Anil/Raus-
checker, Josef P./Braun, Allen/Turner, Robert/Neville, Helen J.
1998 Hemispheric Specialization for English and ASL: Left Invariance ⫺ Right Variability.
In: Neuroreport 9, 1537⫺1542.
Bernardis, Paolo/Gentilucci, Maurizio
2006 Speech and Gesture Share the Same Communication System. In: Neuropsychologia 44,
178⫺190.
Casey, Shannon
2003 Relationships Between Gestures and Sign Languages: Indicating Participants in Ac-
tions. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-linguis-
tic Perspectives in Sign Language Research. Hamburg: Signum, 95⫺119.
Clark, Herbert H.
1996 Using Language. Cambridge: Cambridge University Press.
Clark, Herbert H./Gerrig, Richard J.
1990 Quotations as Demonstrations. In: Language 66, 764⫺805.
Duncan, Susan
2002 Gesture, Verb Aspect, and the Nature of Iconic Imagery in Natural Discourse. In:
Gesture 2, 183⫺206.
Emmorey, Karen
1999 Do Signers Gesture? In: Messing, Lynn/Campbell, Ruth (eds.), Gesture, Speech, and
Sign. Oxford: Oxford University Press, 133⫺161.
Emmorey, Karen
2002 Language, Cognition and the Brain. Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum.
Engberg-Pedersen, Elisabeth
2002 Gesture in Signing: The Presentation Gesture in Danish Sign Language. In: Schulmeis-
ter, Rolf/Reinitzer, Heino (eds.), Progress in Sign Language Research: In Honor of
Siegmund Prillwitz. Hamburg: Signum, 143⫺162.
Goldin-Meadow, Susan
2003 Hearing Gesture: How Our Hands Help Us Think. Cambridge, MA: Harvard Univer-
sity Press.
Goldin-Meadow, Susan/McNeill, David/Singleton, Jenny
1996 Silence is Liberating: Removing the Handcuffs on Grammatical Expressions in the
Manual Modality. In: Psychological Review 103, 34⫺55.
Janzen, Terry/Shaffer, Barbara
2002 Gesture as the Substrate in the Process of ASL Grammaticization. In: Meier, Richard
P./Cormier, Kearsy A./Quinto-Pozos, David G. (eds.), Modality and Structure in Signed
and Spoken Languages. Cambridge: Cambridge University Press, 199⫺223.
Kendon, Adam
1982 The Study of Gesture: Some Remarks on Its History. In: Semiotic Inquiry 2, 45⫺62.
Kendon, Adam
2004 Gesture. Cambridge: Cambridge University Press.
Kendon, Adam
2008 Some Reflections on the Relationship Between ‘Gesture’ and ‘Sign’. In: Gesture 8,
348⫺366.
Kita, Sotaro
2008 Cross-cultural Variation in Speech-accompanying Gestures: A Review. In: Language
and Cognitive Processes 24, 145⫺167.
Kita, Sotaro/Özyürek, Aslı
2003 What Does Cross-linguistic Variation in Semantic Coordination of Speech and Gesture
Reveal?: Evidence for an Interface Representation of Spatial Thinking and Speaking.
In: Journal of Memory and Language 48, 16⫺32.
Klima, Edward/Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Krauss, Robert M./Morrel-Samuels, Palmer/Colasante, Christina
1991 Do Conversational Hand Gestures Communicate? In: Journal of Personality and Social
Psychology 61, 743⫺754.
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Liddell, Scott K./Metzger, Melanie
1998 Gesture in Sign Language Discourse. In: Journal of Pragmatics 30, 657⫺697.
MacSweeney, Mairéad/Woll, Bencie/Campbell, Ruth/Calvert, Gemma/McGuire, Philip/David,
Anthony/Simmons, Andrew/Brammer, Michael
2002 Neural Correlates of British Sign Language Comprehension: Spatial Processing De-
mands of Topographic Language. In: Journal of Cognitive Neuroscience 14, 1064⫺1075.
McClave, Evelyn
2001 The Relationship Between Spontaneous Gestures of the Hearing and American Sign
Language. In: Gesture 1, 51⫺72.
McKee, Rachel L./Wallingford, Sophia
2011 ‘So, Well, Whatever’: Discourse Functions of Palm-up in New Zealand Sign Language.
In: Sign Language & Linguistics 14(2), 213⫺247.
McNeill, David
1992 Hand and Mind: What Gestures Reveal About the Mind. Chicago: University of Chic-
ago Press.
McNeill, David (ed.)
2000 Language and Gesture. Cambridge: Cambridge University Press.
McNeill, David
2005 Gesture and Thought. Chicago: University of Chicago Press.
McNeill, David/Cassell, Justine/Levy, Elena
1993 Abstract Deixis. In: Semiotica 95, 5⫺19.
Meier, Richard P.
2002 Why Different, Why the Same? Explaining Effects and Non-effects of Modality Upon
Linguistic Structure in Sign and Speech. In: Meier, Richard P./Cormier, Kearsy A./
Quinto-Pozos, D. G. (eds.), Modality and Structure in Signed and Spoken Languages.
Cambridge: Cambridge University Press, 1⫺25.
Müller, Cornelia
2009 Gesture and Language. In: Malmkjaer, Kirsten (ed.), The Routledge Linguistics Ency-
clopedia. London: Routledge, 214⫺217.
Neville, Helen J./Coffrey, Sharon A./Lawson, Donald S./Fischer, Andrew/Emmorey, Karen/Bel-
lugi, Ursula
1997 Neural Systems Mediating American Sign Language: Effects of Sensory Experience
and Age of Acquisition. In: Brain and Language 57, 285⫺308.
Newport, Elissa L./Meier, Richard P.
1985 The Acquisition of American Sign Language. In: Slobin, Dan I. (ed.), The Crosslinguis-
tic Study of Language Acquisition, Vol. 1: The Data. Hillsdale, NJ: Lawrence Erlbaum,
881⫺938.
Okrent, Arika
2002 A Modality-free Notion of Gesture and How It Can Help Us with the Morpheme vs.
Gesture Question in Sign Language Linguistics (or at Least Give Us Some Criteria to
Work with). In: Meier, Richard P./Cormier, Kearsy A./Quinto-Pozos, D. G. (eds.), Mo-
dality and Structure in Signed and Spoken Languages. Cambridge: Cambridge Univer-
sity Press, 175⫺198.
Özyürek, Aslı
2002 Do Speakers Design Their Co-speech Gestures for Their Addressees? The Effects of
Addressee Location on Representational Gestures. In: Journal of Memory and Lan-
guage 46, 688⫺704.
Özyürek, Aslı/Willems, Roel M./Kita, Sotaro/Hagoort, Peter
2007 On-line Integration of Semantic Information from Speech and Gesture: Insights from
Event-Related Potentials. In: Journal of Cognitive Neuroscience 19, 605⫺616.
Padden, Carol
1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation,
University of California, San Diego [Published 1988 by Garland Outstanding Disserta-
tions in Linguistics, New York].
Perniss, Pamela/Özyürek, Aslı
2007 Constraints on the Representation of Manner and Path in Caused Motion Events in
German Sign Language and Co-speech Gestures. Paper Presented at the 3rd Interna-
tional Society for Gesture Studies Conference, 18⫺21 June, 2007, Evanston, IL.
Perniss, Pamela/Özyürek, Aslı
submitted Channels of Expression Modulate Iconicity and Embodiment Differently in Co-
speech Gesture and Sign Language.
Pfau, Roland/Steinbach, Markus
2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 3⫺98. [Available at http://www.ling.uni-
potsdam.de/lip/].
Pfau, Roland/Steinbach, Markus
2011 Grammaticalization in Sign Languages. In: Narrog, Heiko/Heine, Bernd (eds.), The
Oxford Handbook of Grammaticalization. Oxford: Oxford University Press, 683⫺695.
Poizner, Howard/Klima, Edward S./Bellugi, Ursula
1987 What the Hands Reveal About the Brain. Cambridge, MA: MIT Press.
Quinto-Pozos, David
2007 Can Constructed Action Be Considered Obligatory? In: Lingua 117, 1285⫺1314.
Rathmann, Christian/Mathur, Gaurav
2002 Is Verb Agreement the Same Cross-modally? In: Meier, Richard P./Cormier, Kearsy A./
Quinto-Pozos, David G. (eds.), Modality and Structure in Signed and Spoken Languages.
Cambridge: Cambridge University Press, 370⫺405.
Sandler, Wendy
2009 Symbiotic Symbolization by Hand and Mouth in Sign Language. In: Semiotica 174,
241⫺275.
Schembri, Adam/Jones, Caroline/Burnham, Denis
2005 Comparing Action Gestures and Classifier Verbs of Motion: Evidence from Australian
Sign Language, Taiwan Sign Language, and Nonsigners’ Gestures Without Speech. In:
Journal of Deaf Studies and Deaf Education 10(3), 272⫺290.
Senghas, Ann/Coppola, Marie
2001 Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial
Grammar. In: Psychological Science 12(4), 323⫺328.
Senghas, Ann/Kita, Sotaro/Özyürek, Aslı
2004 Children Creating Core Properties of Language: Evidence from an Emerging Sign Lan-
guage in Nicaragua. In: Science 305, 1779⫺1782.
Stokoe, William C.
1960 Sign Language Structure: An Outline of the Visual Communication Systems of the
American Deaf. In: Studies in Linguistics, Occasional Papers 8. Silver Spring, MD: Lin-
stok Press.
Supalla, Ted
1982 Structure and Acquisition of Verbs of Motion in American Sign Language. PhD Disser-
tation, University of California, San Diego.
Talmy, Leonard
2003 The Representation of Spatial Structure in Spoken and Signed Languages. In: Emmo-
rey, Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah,
NJ: Lawrence Erlbaum, 169⫺197.
Tervoort, Bernard T.
1961 Esoteric Symbolism in the Communication Behavior of Young Deaf Children. In:
American Annals of the Deaf 106, 436⫺480.
Wilcox, Sherman
2004 Cognitive Iconicity: Conceptual Spaces, Meaning, and Gesture in Signed Languages.
In: Cognitive Linguistics 15, 119⫺147.
Wilcox, Sherman
2007 Routes from Gesture to Language. In: Pizzuto, Elena/Pietrandrea, Paola/Simone, Raf-
faele (eds.), Verbal and Signed Languages. Comparing Structures, Constructs and Meth-
odologies. Berlin: Mouton de Gruyter, 107⫺131.
Willems, Roel M./Özyürek, Aslı/Hagoort, Peter
2007 When Language Meets Action: The Neural Integration of Gesture and Speech. In:
Cerebral Cortex 17(10), 2322⫺2333.
Woll, Bencie
2003 Modality, Universality and the Similarities Among Sign Languages: an Historical Per-
spective. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-
linguistic Perspectives in Sign Language Research. Hamburg: Signum, 17⫺27.
Zeshan, Ulrike
2003 Classificatory Constructions in Indo-Pakistani Sign Language: Grammaticalization and
Lexicalization Processes. In: Emmorey, Karen (ed.), Perspectives on Classifier Construc-
tions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 113⫺141.
Zeshan, Ulrike
2004 Interrogative Constructions in Sign Languages: Crosslinguistic Perspectives. In: Lan-
guage 80, 7⫺39.
Zwitserlood, Inge
2003 Classifying Hand Configurations in Nederlandse Gebarentaal (Sign Language of the
Netherlands). PhD Dissertation, University of Utrecht. Utrecht: LOT.

Aslı Özyürek, Nijmegen (The Netherlands)


VI. Psycholinguistics and neurolinguistics

28. Acquisition
1. Introduction
2. Babbling
3. Phonological development
4. Lexical development
5. Morphological and syntactic development
6. Discourse development
7. Acquisition in other contexts
8. Conclusions
9. Literature

Abstract
This chapter provides a selective overview of the literature on sign language acquisition
by children. It focuses primarily on phonological, lexical, morphological, and syntactic
development, with a brief discussion of discourse development, and draws on research
conducted on a number of natural sign languages. The impact of iconicity on sign lan-
guage acquisition is also addressed. The chapter ends with a brief discussion of acquisition
in other, less typically researched contexts, including late L1 acquisition, bilingual sign-
speech acquisition, and adult acquisition of sign as a second language.

1. Introduction
This chapter is an overview of the acquisition of phonological, lexical, morphological,
syntactic and discourse properties of sign languages. Only a few decades ago, the task
of reading everything written about sign language acquisition was still reasonably man-
ageable. Today, with the establishment of new sign research programs all around the
globe, the list of published articles on sign acquisition (not to mention unpublished
theses and dissertations) has far outstripped the abilities of even the most assiduous
reader. This chapter does not attempt to summarize them all. Rather it aims to lay out
the major directions in which sign language research has progressed over the last few
decades, sketching a general outline of what we know so far about this fascinating
aspect of human development.
A major theme of early sign acquisition research was to draw parallels between L1
acquisition of natural sign languages by native-signing deaf children and more tradi-
tional L1 acquisition of spoken languages by hearing children. In emphasizing the
underlying similarities in acquisition regardless of modality, this research contributed
crucially to the argument that sign languages are fully complex natural languages, au-
tonomous from and equal in linguistic status to the spoken languages that surround
them (Marschark/Schick/Spencer 2006). With the linguistic status of sign languages
now firmly established (in academic circles, if not yet in the broader context), focus is
now turning to the identification of modality effects: aspects of acquisition that are
unique to languages in one or the other modality. Chief among the investigated sources
for modality effects is the enormous potential in sign languages for iconic representa-
tion (see chapter 18, Iconicity and Metaphor). Sign languages are well suited to ex-
pressing visual information, offering transparent representations for shapes of objects,
the posture of human hands manipulating those objects, spatial configurations of multi-
ple entities and direction of movement through space. Iconic elements can be identified
at all levels of sign language organization, in contrast to their comparative rarity in
spoken language. If it turns out that children are sensitive to iconicity, then the acquisi-
tion processes for the iconic elements of sign language and their non-iconic counter-
parts in spoken language may be expected to diverge noticeably.
For reasons of space, this chapter focuses on early L1 development of sign lan-
guages, aiming at the period from birth to about four years (for information on later
L1 development, readers are referred to Newport/Meier 1985; Emmorey 2002; Schick/
Marschark/Spencer 2006). Section 2 discusses research on manual babbling and the
theoretical implications of these findings for language acquisition in general. Section 3
focuses on sign phonology, summarizing developmental patterns for the three major
formational parameters, and section 4 discusses first signs and the development of a
sign lexicon. Section 5 presents overviews for a selection of syntactic and morphologi-
cal aspects of sign languages, including word order, spatial syntax, wh-questions, topics
and focus, classifiers, and non-manuals; section 6 summarizes the acquisition of referen-
tial shift, a discourse phenomenon that recruits a number of syntactic devices discussed
in section 5. The chapter closes with a very brief mention of other types of sign acquisi-
tion (late-exposed L1, bimodal bilingualism, and second language acquisition) in sec-
tion 7 and a look towards future studies in section 8.

2. Babbling

Until fairly recently, babbling was thought to be a phenomenon exclusive to speech,
tied to the development of the parts of the body used for speaking (van der Stelt/
Koopmans-van Bienum 1986). At around 4⫺6 months after birth, physiological
changes allow the infant to produce true vocalizations, rather than just the cooing and
other vegetative sounds typical of earlier stages. Babbling is non-referential (i.e. it does
not have any associated meaning) and uses a subset of the phonetic units available to
spoken language (not necessarily specific to the target language). Investigations of
vocal babbling have revealed two stages of development: syllabic or canonical babbling
at around 7⫺10 months, characterized by multi-cyclic repetition of simple consonant-
vowel (CV) syllables, and variegated or jargon babbling from about 12⫺14 months,
characterized by strings of different CV syllables, often produced with prosodic pat-
terns appropriate to the target language (Oller/Eilers 1988). Interestingly, infants dis-
play individual preferences for certain phonemes and syllable types in their babbling,
and these preferences carry over into their first words (Vihman et al. 1985). This last
observation has led some researchers to consider babbling as the first stage of language
production development, one that is uniquely shaped by and suited for the speech
modality (Liberman/Mattingly 1985, 1989).
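To make the two stages concrete, consider the following toy classifier, a purely illustrative sketch (an orthographic proxy with invented example strings, not a phonetic analysis): it labels a transcribed utterance as canonical babbling if it repeats a single CV syllable, and as variegated babbling if it strings together different CV syllables.

    import re

    def babbling_type(utterance: str) -> str:
        # Crude CV parse: a consonant letter followed by a vowel letter.
        syllables = re.findall(r"[bdgkmnpt][aeiou]", utterance)
        if len(syllables) < 2:
            return "not multi-cyclic"
        return "canonical" if len(set(syllables)) == 1 else "variegated"

    print(babbling_type("bababa"))  # canonical: multi-cyclic repetition of one CV syllable
    print(babbling_type("badagu"))  # variegated: a string of different CV syllables
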
Petitto and Marentette (1991) first challenged the concept that babbling is exclu-
sively tied to speech development, presenting evidence for babbling in the gestural
modality. They studied the manual activity of two deaf and three hearing infants and
reported that both sets of babies produced non-referential hand activity with the gen-
eral characteristics reported for vocal babbling. This “manual babbling” shared the
phonetic and syllabic (movement) structure of natural sign languages and exhibited
the repetitive, rhythmic patterns typical of vocal babbling. Manual babbling by the
deaf subjects, who had been exposed to American Sign Language (ASL) from birth,
followed a time course similar to that reported for vocal babbling in hearing children,
including syllabic babbling at 10 months and more complex, variegated babbling at 12
months. Deaf babies’ manual babbling was deliberate and communicative, and deaf
mothers responded by signing back to their infants. Most crucially, individual preferen-
ces for hand configurations, locations, and/or movement types in deaf infants’ manual
babbling continued into their first ASL signs (see also Cheek et al. 2001). None of
these patterns were observed in the manual activity of hearing infants, who exhibited
a limited inventory of hand configurations and movement types and did not progress
to more complex forms over time. A similar disparity has been noted by Masataka
(2000) for Japanese deaf and hearing infants. In subsequent discussion, Petitto (2000)
classified manual activity by the hearing babies as excitatory motor hand behavior
rather than true manual babbling.
Petitto (2000) and Petitto and Marentette (1991) interpreted their findings as sup-
port for an amodal language capacity equally suited for speech or sign. Under this view,
babbling is not triggered by motor developments of the speech articulatory system, but
rather by infants’ innate predisposition towards structures with the phonetic and syl-
labic patterns characteristic of human language, spoken or signed. This predisposition
leads to either vocal or manual babbling, depending on the input the child receives.
As further support for an amodal language capacity, Petitto (2000) cites the observa-
tion that deaf babies occasionally produce limited vocal babbling (Oller/Eilers 1988)
just as hearing, non-signing babies occasionally produce limited manual babbling. If
the tongue and hands are truly “equipotential language articulators” at birth, Petitto
predicts that “we will see language-like articulations spill out into the ‘unused’ modal-
ity, albeit in unsystematic ways” (2000, 8).
Meier and Willerman (1995) pursued an alternative, motor-driven explanation for
the structural and timing similarities between manual babbling and signing. They exam-
ined the gestural production of two hearing and three ASL-exposed deaf children from
8 to 15 months of age. In their patterns of handshape, movement and place of articula-
tion, deaf and hearing subjects looked very similar, in contrast to the reports by Petitto
and her colleagues. Both groups also tended to produce communicative gestures with
a single cycle, but non-referential gestures with multiple cycles. The only difference
observed was that deaf subjects tended to produce their non-referential gestures with
more cycles than did hearing subjects.
Like Petitto and Marentette (1991), Meier and Willerman (1995) also interpreted
their results as support for the language capacity’s equal potential to develop as speech
or as sign. However, they argued that there is no reason to assume, as Petitto and
Marentette did, that input is crucial for triggering babbling in one modality or the
other. Babbling may emerge due to motor factors that apply equally to speech and
gesture. For example, all babbling is characterized by repetitive movement; in manual
babbling this repetition occurs at the hands, whereas in vocal babbling, it occurs at the
mandible (MacNeilage/Davis 1990). Meier and Willerman proposed that these re-
peated movements may both be rhythmical motor stereotypies of the type described
by Thelen (1981) as occurring at the transition between “uncoordinated activity and
complex, coordinated voluntary motor control” (1981, 239). Under this view, the fact
that both manual and vocal babbling (uncoordinated activity) occur when the infant is
ready to transition to language (complex, coordinated activity) accounts for their simi-
lar onset times.
As for why hearing children with no exposure to any sign language should exhibit
robust manual babbling, Meier and Willerman speculate that since adult speech and
sign are both rhythmically organized, simply hearing adult speech might be enough to
trigger rhythmic behavior in both the gestural and vocal modalities. Hearing infants
may persist in babbling manually because they receive visual feedback of their own
gestures (such feedback is not available for deaf infants with regard to their vocaliza-
tions, perhaps leading to the late onset of their vocal babbling (Oller/Eilers 1988)).
Still, Meier and Willerman did not discount the effect of the input language entirely:
the tendency of deaf infants to produce more multi-cyclicity in their babbling gestures
than their hearing counterparts could well be an effect of growing up in a sign environ-
ment, where multi-cyclicity is an extremely salient feature of the target language.

3. Phonological development
When sign-exposed children begin to produce lexical items in their target sign lan-
guage, their production is characterized by phonological simplifications and substitu-
tions affecting the formational parameters of sign languages (hand configuration, loca-
tion, movement, and orientation). Like their speech-exposed counterparts, sign-
exposed children must gradually develop a system of phonetic contrasts, adding to
their phonetic inventory and learning the phonotactic constraints that apply for their
target language. In the meantime, production by both groups of children is subject to
certain universal factors such as markedness, which presumably apply regardless of
language modality. However, despite striking parallels in the L1 development of speech
and sign, modality effects nevertheless exist. Most obviously, sign and speech implicate
very different sets of articulators that may be subject to different motoric limitations
and develop at disparate rates. Broad typological differences between sign and speech
may also mean that signing and speaking children choose different strategies to com-
pensate for their immature phonological systems. As mentioned in the introduction,
the prevalence of iconicity in sign languages is one such typological feature, and a
thorough account of language acquisition must consider how this feature might affect
the early phonological forms produced by signing children.

3.1. Effects of iconicity on early phonology


Unlike their peers acquiring spoken languages, sign-exposed children are regularly
presented with highly iconic forms in their input. If iconicity enhances the transparency
of new signs, it could potentially be very attractive to children as something that facili-
tates the mapping from form to meaning. Sign-exposed children might initially assume
that lexical items should be as faithfully iconic as possible and seek to enhance the
iconicity of target signs in their own production, resulting in phonological forms that
do not match target forms. Research in this area, however, indicates that iconicity does
not play this role in most of children’s phonological errors. In a study of the early ASL
of two deaf girls, Launer (1982) found that roughly 15 % of their production displayed
enhanced iconicity, while 20 % was counter-iconic, or less iconic than the target form
of the sign. More recently, Meier et al. (2008) found that signing adults rated the vast
majority (59.4 %) of the ASL signs produced by four 8- to 17-month-old babies as neither
more nor less iconic than their target forms. Of the remaining child tokens, significantly
more were judged as less iconic than the target (36.2 %) than more iconic (4.3 %). Of
the 33 signs judged as more iconic than the target, Meier et al. (2008) noted that
one-third of them featured additions of mimetic facial movements rather than any
modification of the manual component. They cited as an example one child’s articula-
tion of eat with a cookie in hand, and mouth movements mimicking chewing (although
the child did not actually bite the cookie). This pattern calls to mind claims that young
signers assume lexical meaning to be encoded by the hands, while affect is encoded by
the face (Reilly 2000; see discussion in section 5.5).
The fact that so few of the signs in the Launer (1982) and Meier et al. (2008) data
showed enhanced iconicity might be puzzling in light of well-known studies on home-
sign systems (Goldin-Meadow 2003; see chapter 26 for discussion). Iconicity is the
hallmark of the communicative gestures invented by home-signers, deaf children raised
without exposure to any conventional sign language. For these children, a high degree
of transparency is necessary to ensure that their invented gestures will be understood
by others, and this constraint alone may pressure the child to favor highly iconic forms.
Meier et al. (2008) proposed that as the inventors of their own gesture systems, home-
signers are free to choose iconic forms that match their articulatory capacities. This is
in contrast to ASL-exposed children, who have no control over the articulatory com-
plexity of the conventionalized forms presented to them.
In summary, both Meier et al. (2008) and Launer (1982) concluded that iconicity
does not exert a major effect on the phonological production of ASL-exposed children.
Errors in early ASL are better explained by appealing to motoric factors, as I will
discuss in the next sub-section.

3.2. Effects of motoric factors

Observations of infant motor development offer many potential insights into the acqui-
sition of manually-expressed languages such as sign languages. Recent studies of pho-
nological development in sign-exposed children have focused on three motoric factors
in particular: proximalization of movement, a tendency towards multi-cyclicity, and sym-
pathy. Proximalization refers to the fact that infant motor control generally progresses
from proximal articulators (close to the torso) to distal articulators (far from the torso).
Multi-cyclicity is related to the prevalence of repeated movement patterns across many
domains of early motor development. Finally, sympathy refers to infants’ initial diffi-
culty in inhibiting movement of one of their hands, resulting in a tendency to move
both hands in tandem. In the following sub-sections, I will detail how each of these
features of early motor development affects early articulation of signs by sign-exposed
children, as well as address relative rates of error for the three major sign formational
parameters of hand configuration, location, and movement.

3.2.1. Proximalization of movement

Signing activates a series of joints along the arm and hand from the shoulder and elbow
(the joints most proximal to the torso) to the wrist and first (K1) and second (K2) sets
of knuckles (the joints most distal to the torso) (Mirus/Rathmann/Meier 2001). De-
tailed investigation of child signing has revealed that their patterns of joint activation
do not always match those of adult signers. Children often shift movements at distal
joints to more proximal ones, resulting in bigger movement patterns. Proximalization
in child signing has been noted in studies of sign languages other than ASL (Takkinen
(2003) for Finnish Sign Language (FinSL); Lavoie/Villeneuve (1999) for Quebec Sign
Language (LSQ)), but the most detailed study to date is Meier et al. (2008). In a
careful examination of 518 ASL signs spontaneously produced by four deaf children
between 8 and 17 months, these researchers found that only 32 % were correct in their
joint activation pattern. The rest exhibited errors due to omission of one or more joints
and/or substitution of an alternative joint for the target.
Meier et al. (2008) established three predictions with respect to proximalization.
The first was that when children made a joint substitution, they would be more likely
to substitute a proximal articulator than a distal one. This prediction was borne out by
the data: the majority of all the children’s substitutions were proximal to the target
joint. Substitution of a more proximal joint was especially pronounced when the sign
called for movement at the elbow, wrist, and K1. Figure 28.1 shows an example of a
proximalization error: the sign horse signed with bending at the wrist rather than
at K1.

Fig. 28.1: Adult form of ASL horse (© 2006, www.Lifeprint.com; used by permission), followed
by a child’s proximalized form.

Next, Meier et al. (2008) predicted that distal joints would be more likely to be
omitted than proximal joints. This prediction was also confirmed by the data: out of
55 errors of joint omission, all but two involved omission of the more distal joint.
Conversely, Meier et al. (2008) reasoned that children’s signs might sometimes activate
additional joints not specified in the adult form, and that in these cases, the additional
joint would be proximal to those specified in the adult form. On first analysis, this final
prediction was not supported by the data: out of 78 errors of this type, the majority
involved an additional distal joint. Closer examination of these cases revealed that in
almost all of them, children added K2 to signs that targeted K1. This was the only
context in which K2 was added. Once these cases were excluded from analysis, addition
of proximal joints greatly exceeded that of distal joints, strengthening the generaliza-
tion that children manipulate and control proximal joints earlier than distal ones.
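The proximal-to-distal logic behind these predictions can be summarized in a few lines of code. The sketch below is illustrative only, not Meier et al.’s actual coding scheme, although the joint labels follow the text:

    # Articulators ordered from most proximal to most distal, as in the text.
    JOINT_ORDER = ["shoulder", "elbow", "wrist", "K1", "K2"]

    def classify_substitution(target: str, produced: str) -> str:
        t, p = JOINT_ORDER.index(target), JOINT_ORDER.index(produced)
        if p < t:
            return "proximalized"
        return "distalized" if p > t else "target-like"

    # The horse error of Figure 28.1: movement shifted from K1 to the wrist.
    print(classify_substitution("K1", "wrist"))  # -> proximalized
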
Interestingly, the tendency to proximalize movement is not limited to infants, but
has also been observed in child-directed signing (Holzrichter/Meier 2000) and in the
signing of adult second language learners of sign language (Mirus/Rathmann/Meier
2001). The former observation raises the importance of considering input factors as a
possible influence on children’s proximalization, in addition to motoric constraints.
The latter observation indicates that proximalization is not limited to immature motor
systems. Indeed, even fluent Deaf signers were observed by Mirus, Rathmann, and
Meier to occasionally proximalize when asked to repeat signs from a sign language
that they do not know.

3.2.2. Multi-cyclicity

The propensity for repetitive movement noted earlier for manual babbling has also
been observed in infants’ first signs in British Sign Language (BSL, Clibbens/Harris
1993; Morgan/Barrett-Jones/Stoneham 2007). Clibbens and Harris (1993) proposed
that repeating a sign multiple times gives children the chance to improve the accuracy
of their articulation. However, Morgan, Barrett-Jones, and Stoneham (2007) found that
although their BSL-exposed subject produced additional cycles in 47 % of her signs,
the extra repetitions led to improved articulation in only 10 % of those cases.
Meier et al. (2008) found that across 625 early signs produced by four deaf, ASL-
exposed children (the same mentioned in the proximalization study in the previous
subsection), signs were produced with anywhere from one to 37 cycles, with a median
of three cycles per sign. Using elicited production from a Deaf, native ASL signer as
a standard, the researchers counted 151 instances in which the children produced a
different number of cycles than the adult standard. The children were slightly more
likely to err for monocyclic targets than for multi-cyclic targets. However, multi-cyclic-
ity is a robust feature of ASL, so most of the time children’s tendency towards multi-
cyclicity did not result in any error. The target forms of 70 % of the signs attempted
by the children call for repeated movement; for these signs, the children’s multi-cyclic
productions were counted as target-like (cf. Juncos et al. (1997) for a similar report of
child accuracy on multi-cyclic target signs in Spanish Sign Language (LSE)).
Meier et al. (2008) concluded that while their subjects demonstrated a preference
for multi-cyclic forms, they were already learning to inhibit this preference for monocy-
clic signs. This was despite the fact that, like proximalization of movement, addition of
cycles to target signs is a feature of child-directed signing (Maestas y Moores 1980;
Masataka 2000 for Japanese Sign Language (NS)).
3.2.3. Sympathy

Infants in their first year of life have difficulty with actions requiring separate control
of the two hands, including some one-handed activities that require inhibiting the ac-
tion of the second hand (Fagard 1994). As a result, some children may mirror one-
handed movements, such as reaching for an object, with the second hand (Trauner et
al. 2000). Meier (2006) referred to this type of mirroring as sympathetic movement.
In the realm of sign production by sign-exposed children, sympathetic movement is
sometimes observed for one-handed signs, as in the ASL example of horse in Fig-
ure 28.1 above. Although the child correctly raises only the dominant hand to the
target location, his non-dominant hand mirrors the movement of the dominant hand,
repeatedly bending at the wrist. Generally, however, sign-exposed children have little
difficulty inhibiting the non-dominant hand when producing one-handed target signs
(Cheek et al. 2001; Meier 2006).

Fig. 28.2: The ASL target form of cookie

Sympathy is much more likely to cause problems for two-handed target signs that
require distinct hand configurations, such as the ASL sign cookie (Figure 28.2) where
the dominant hand acts upon a static base hand. This category of signs is referred to
as two-handed dominance arrangements by Cheek et al. (2001). These researchers
reported that such signs were attempted in less than 10 % of the total spontaneous
production of their four deaf subjects, and were produced with correct arrangement of
the hands only 40 % of the time. The remaining cases were counted as errors; these
were articulated as either one-handed signs in which the non-dominant base hand was
dropped, or as two-handed symmetrical signs in which the base hand was assimilated
to the same movement (and sometimes handshape) of the dominant hand (also noted
by Siedlecki/Bonvillian (1997) and Marentette/Mayberry (2000) for ASL; Takkinen
(2003) for FinSL).
As for two-handed symmetrical signs, Cheek et al. (2001) also reported errors for
this category (29 %), all but one of them articulated as one-handed signs. This type of
error cannot be caused by sympathy, and may appear unexpected in light of the
mirrored reaching movements reported by non-linguistic infant studies. Indeed,
such modification may not be an error at all; Cheek et al. noted that omission of the
non-dominant hand in two-handed symmetrical signs (with non-alternating movement)
is frequently attested in adult signing, where it is known as “Weak Drop” (Padden/
Perlmutter 1987).
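Stated as a rule, the generalization can be captured in a one-line predicate; the following sketch is a simplification with hypothetical parameter names, not a formalization drawn from Padden and Perlmutter:

    def weak_drop_possible(two_handed: bool, symmetrical: bool, alternating: bool) -> bool:
        # Attested in adult signing for two-handed symmetrical signs
        # with non-alternating movement (Padden/Perlmutter 1987).
        return two_handed and symmetrical and not alternating

    print(weak_drop_possible(True, True, False))  # True
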
If signs with a two-handed dominance arrangement are cognitively more demanding
than either two-handed symmetrical or one-handed signs, young sign-exposed children
may avoid two-handed dominance arrangements in their production. Cheek et al.
(2001) noted that two-handed dominance signs are extremely common in adult signing
(although admittedly not necessarily as common in child-directed signing), making up
25 % of the adult ASL lexicon (Klima/Bellugi 1979, calculated on the basis of the
Dictionary of American Sign Language by Stokoe/Casterline/Croneberg 1965). At less
than 10 %, this class of signs is strikingly underrepresented in the children’s ASL.
Takkinen (2003), Karnopp (2002), and Pizzuto (2002) have also commented on lower-
than-expected rates of signs with two-handed dominance arrangements in children’s
FinSL, Brazilian Sign Language (LSB), and Italian Sign Language (LIS), respectively.
In contrast, Morgan, Barrett-Jones, and Stoneham (2007) reported that nearly 20 % of
their BSL-exposed subject’s signs involved a non-dominant base hand, with only 24 %
errors. At least part of this observed difference from the ASL findings may be due to
the fact that the BSL data were observed between 19 and 24 months, a bit older than
the age range studied by Cheek et al. (2001) (5⫺16 months).

3.3. Developmental patterns for the three major formational parameters

In addition to the effects of motoric tendencies detailed in the previous subsection,
studies on early sign phonology have observed a recurring pattern for the order of
acquisition of the three major formational parameters. In general, location is acquired
earliest, produced accurately in even the earliest signs. Movement is controlled less
well than location; the motoric factors discussed above all affect the movement param-
eter, and we have seen that these effects persist well beyond the first year of life.
Finally, handshape is mastered the latest of the major parameters, exhibiting both a
low degree of accuracy and a high degree of variability. Table 28.1 summarizes the
percentage accuracy for handshape, location, and movement reported by selected stud-
ies on ASL and BSL.

Tab. 28.1: Early handshape, location, and movement accuracy from selected reports

Study | # of subjects | Age span | Handshape | Location | Path | Hand-internal mvt
Marentette & Mayberry (2000) (ASL) | 1 Deaf | 1;0⫺2;0 | dominant: 27 %; non-dom: 26 % | horizontal: 89 %; vertical: 74 % | 57 % | 48 %
Conlin et al. (2000) (ASL) | 3 Deaf | 0;07⫺1;05 | 93/372 (25 %) | 303/372 (81 %) | 201/372 (54 %) | 48 %
Cheek et al. (2001) (ASL) | 4 Deaf | 0;05⫺1;04 | 195/528 (37 %) | 499/623 (80 %) | 354/630 (56 %) | 77/162 (48 %)
Morgan et al. (2007) (BSL) | 1 Deaf | 1;07⫺2;0 | 602/1018 (59 %) | 763/1018 (75 %) | 462/910 (51 %) | 106/198 (54 %)
Mann et al. (2010) (BSL)* | 91 Deaf | 3⫺5 yrs | 54 % | not measured | 72 % | 55 %
Mann et al. (2010) (BSL)* | 91 Deaf | 6⫺8 yrs | 64 % | not measured | 86 % | 64 %
Mann et al. (2010) (BSL)* | 91 Deaf | 9⫺11 yrs | 76 % | not measured | 91 % | 77 %

* The results of Mann et al. are based on a nonsense sign repetition task, whereas the remaining
studies in this table are based on natural production data.

The finding that location is acquired early and handshape late, with movement
falling somewhere in between, is cross-linguistically very robust, having been reported
for LSE (Juncos et al. 1997), FinSL (Takkinen 2003), LSB (Karnopp 2008), BSL (Clib-
bens/Harris 1993; Morgan/Barrett-Jones/Stoneham 2007; Mann et al. 2010), and a num-
ber of studies on ASL (Bonvillian/Siedlecki 1996; Conlin et al. 2000; and Marentette/
Mayberry 2000, among others). Furthermore, children’s accuracy is affected by pho-
netic complexity; for instance, target signs with a complex or marked handshape are
more likely to be produced with errors than those with an unmarked handshape (Boyes
Braem 1973, 1990), as well as more likely to lead to errors in movement (Mann et
al. 2010).

3.3.1. Location

In the case of location, the basic pattern of motor control discussed earlier (control of
proximal joints before distal joints) works in favor of early acquisition, as production
of location tends to implicate the most proximal articulators, the shoulder and elbow
(Cheek et al. 2001). Errors in sign location must thus be attributed to sources other
than difficulty controlling articulators. Morgan, Barrett-Jones, and Stoneham (2007)
proposed that the best predictor of location errors is size of the target location. Twenty-
five percent of their BSL-exposed subject’s total signs contained location errors (a
somewhat higher rate than reported by studies of ASL), and of these 71 % involved
movement to a larger nearby location (e.g. from the temple to the cheek, or from the
neck to the chest).
Marentette and Mayberry (2000) attributed similar errors in their ASL data to loca-
tion saliency rather than size. These authors reported that 91 % of all location errors
committed by their ASL-exposed subject involved substitution of a neighboring loca-
tion with higher saliency, such as signing telephone at the ear rather than at the cheek.
They argued that low saliency locations such as the cheek and temple are not yet well
represented in the child’s developing body schema, and as such may be unavailable as
locations for signing. A similar conclusion was reached by Conlin et al. (2000), also
for ASL.
Most studies agree that locations in neutral space and on or around the head are
among the earliest and most accurate locations to emerge in early signing (Marentette/
Mayberry 2000; Bonvillian/Siedlecki 1996; Conlin et al. 2000; Cheek et al. 2001). A
notable exception is Morgan, Barrett-Jones, and Stoneham (2007), who reported that
BSL signs targeting the face and head were consistently more prone to error than
signs targeting the trunk, non-dominant hand, or neutral space. There is also some
disagreement as to whether children’s patterns of location substitution indicate pre-
ferred default values. Marentette and Mayberry (2000) reported that three location
values ⫺ the trunk, head, and mouth ⫺ appeared in the majority of their subject’s
location substitutions, while other researchers found no favored substitute location in
their data (Cheek et al. 2001).
3.3.2. Movement

Sign production by young sign-exposed children is marked by movement errors caused
by proximalization, sympathy, and multi-cyclicity (Meier et al. 2008), as discussed ear-
lier. In this subsection, I will focus on patterns of error and relative order of acquisition
for specific movement values, for which there is considerable variation across the exist-
ing literature. Nevertheless, some interesting generalizations are beginning to emerge.
Most studies consider path or directional movement separately from hand-internal
movement; as illustrated in Table 28.1 above, hand-internal movement is consistently
reported as being less accurate in child signing than path movement. Cheek et al.
(2001) reported a 56 % accuracy rate for path in their subjects’ signing, very close to
the accuracy rates reported by other studies in Table 28.1. The most common path
movement according to these authors was “no path”, or signs in which no articulation
occurs at either the shoulder or elbow, although wrist extension or flexion (e.g. horse),
rotation of the forearm (e.g. book) or hand-internal movement might occur. Similarly,
Morgan, Barrett-Jones, and Stoneham (2007) found the path type “hold” to be the
most accurate of movement types produced by their BSL-exposed subject, although it
was also the least frequently attested. At the other end of the acquisition spectrum,
circular path movements were attested only once in the Cheek et al. (2001) data, and
were also found by Morgan, Barrett-Jones, and Stoneham (2007) to be the least accu-
rate (although not the least frequent) in their subject’s production.
As for path movement substitutions, Marentette and Mayberry (2000) and Karnopp
(2008) found no pattern or preferences in early errors of ASL and LSB, respectively. In
contrast, Cheek et al. (2001) identified up-down movement, the second most common
movement type in their data, as the most common substitution during movement errors
(e.g. cold with an up-down rather than a side-to-side movement).
Hand-internal movement is universally reported to be difficult for young sign-ex-
posed children. Cheek et al. (2001) reported a 48 % accuracy rate for this category of
signs, again similar to that found by Marentette and Mayberry. The former researchers
listed open/close movement as the most frequent and most accurate in their data, in
contrast to reports by the latter researchers that bending of K1 and rotation of the
forearm were the most frequent in their data. The status of open/close as the most
preferred substitution in errors of hand-internal movement was also reported for early
BSL by Morgan, Barrett-Jones, and Stoneham (2007), although it was exceeded in
accuracy rate by wrist-bend in their data. Both research groups commented on the
difficulty posed by finger wiggling, which was either omitted or replaced with the open/
close movement in the Cheek et al. (2001) data. In contrast, Morgan, Barrett-Jones,
and Stoneham (2007) reported that all errors, including those involving finger wiggling,
were due to substitution, never to omission.
Morgan, Barrett-Jones, and Stoneham (2007) noted a very interesting error type
specific to target signs requiring a combination of path and hand-internal movement.
In 85 % of such signs, their subject produced the two components sequentially rather
than simultaneously. For example, she signed the BSL sign how-many, normally articu-
lated with a repeated side-to-side movement with simultaneous finger wiggle, as a side-
to-side movement followed by finger wiggle. In other cases, she deleted one element
of the combination, or inserted extra holds between segments of the target sign.
Several researchers noted that their subjects often replaced path movement with
hand-internal movement, or vice versa (Marentette and Mayberry (2000) for ASL;
Karnopp (2008) for LSB), suggesting that children may not yet fully distinguish these
as separate classes of movement types. Also, early studies such as Siedlecki (1991) and
Siedlecki and Bonvillian (1993) reported that children sometimes produced two-
handed signs in which each hand executed a different movement, violating the Symme-
try and Dominance conditions of Battison (1978). However, I have found no further
extension of this claim in the literature.

3.3.3. Handshape

Contrasts in hand configurations are encoded in relatively small changes of finger and/
or thumb position, rendering some contrasts difficult to articulate and perceive. Conlin
et al. (2000) found that their subjects produced not only a high number of handshape
errors but also a high degree of variability in their handshape substitutions. Similar
observations have recently been reported for BSL by Morgan, Barrett-Jones, and
Stoneham (2007).
Given that handshape poses such significant challenges to young signers, this forma-
tional parameter has been the subject of relatively intense study from early on, begin-
ning with Boyes Braem (1973, 1990). She developed a system of eight features predict-
ing hand configuration markedness in sign languages, based largely on anatomical
characteristics of the human hand and observations of early infant reaching, grasping,
and pointing behavior. On the basis of these features, Boyes Braem predicted the four
stages of development (plus A as the maximally unmarked configuration, the posture
of the infant hand at rest) listed in the table below.

Tab. 28.2: Boyes Braem (1973, 1990) hierarchy of hand configuration markedness

Stage | Hand configurations | Characterization
Max. unmarked configuration | A | closest to the posture of the hand at rest
Stage I | S, L, bO, G/1, 5, C | involves manipulation of hand as a whole OR thumb and/or index only
Stage II | B, F, O | only the highly independent digits (thumb and index) are able to move separately
Stage III | (I, Y) (D, P, 3, V, H) W | requires differentiation of individual fingers, to inhibit or activate specific groups of fingers
Stage IV | (8, 7), X, R, (T, M, N) | requires activation and inhibition of ulnar fingers independently; applies additional features cross and insertion

The Boyes Braem markedness hierarchy has been tested by various investigators
of early sign development, beginning with Boyes Braem herself. She found heavy reli-
ance on Stage I configurations in the signing of one deaf, ASL-signing girl at 2;7 for
both overall production (49 %) and substitutions for more marked handshapes (76 %).
Accuracy for the Stage I configurations was generally high. Similar dependence on
unmarked (Stage I and II) handshapes for early signs and substitutions has been re-
ported by numerous other researchers of ASL (McIntire 1977; Siedlecki/Bonvillian
1997; Conlin et al. 2000, among others) and other sign languages (e.g. Clibbens/Harris
(1993) for BSL; von Tetzchner (1984) for Norwegian Sign Language (NSL)).
Of course, researchers have also reported many patterns that are not consistent
with the predictions described above. Some can be accounted for by secondary factors
influencing hand configuration accuracy noted by Boyes Braem (1973, 1990). For in-
stance, children are more accurate when they have visual feedback on their production
of hand configurations; the same configuration may be executed accurately at visible
locations, but inaccurately when the hand is placed out of view. Also, children may err
on unmarked hand configurations if they occur in combination with movements or
other features that increase formational complexity of the target sign (Boyes Braem
1990; McIntire 1977). Kantor (1980) suggested that the use of a hand configuration in
a classifier construction may represent one such increase in complexity, causing errors
in configurations that the child already controls for lexical signs. Conversely, many
children learn to form all the letters of the manual alphabet, but are unable to control
some of those same configurations in the context of lexical signs (e.g. Siedlecki/Bonvil-
lian 1997). All of these examples indicate that the determination of markedness (or
indeed, developmental stages) on the basis of whole hand configurations is too simplis-
tic an approach to acquisition. More recent studies (such as Karnopp 2002 for LSB)
are returning to discussion of markedness and acquisitional stages in terms of individ-
ual features, similar to those that Boyes Braem originally used to generate her stages
of acquisition (cf. Johnson/Liddell 2010).
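For concreteness, the hierarchy in Tab. 28.2 can be rendered as a simple lookup table; the following sketch is an assumed encoding for illustration (handshape labels copied from the table), not part of Boyes Braem’s proposal:

    BOYES_BRAEM_STAGES = {
        0: {"A"},  # maximally unmarked: the posture of the hand at rest
        1: {"S", "L", "bO", "G/1", "5", "C"},
        2: {"B", "F", "O"},
        3: {"I", "Y", "D", "P", "3", "V", "H", "W"},
        4: {"8", "7", "X", "R", "T", "M", "N"},
    }

    def predicted_stage(handshape: str) -> int:
        for stage, shapes in BOYES_BRAEM_STAGES.items():
            if handshape in shapes:
                return stage
        raise ValueError(f"{handshape!r} is not in the hierarchy")

    # The expected error pattern substitutes a less marked configuration
    # for a more marked target:
    assert predicted_stage("5") < predicted_stage("W")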

4. Lexical development
Somewhere around the first year, sign-exposed children begin to produce recognizable
signs. There has been considerable debate over whether or not signing children experi-
ence accelerated progress through the early stages of lexical development (the so-
called ‘sign advantage’), as I discuss below.

4.1. Vocabulary content and trajectory

As sign-exposed children begin to develop their lexicon, the high potential for trans-
parent mapping between form and meaning raises the possibility that iconicity might
facilitate lexical acquisition in sign languages. If this is so, children may preferentially
learn highly iconic signs earlier than arbitrary signs, resulting in a disproportional bias
towards iconic signs in their early production. However, Orlansky and Bonvillian
(1984) found that this was not the case; iconic signs were not particularly well repre-
sented in the earliest vocabularies of the nine ASL-exposed children they studied,
accounting for only a third of their earliest signs. This finding aligns with the reports,
summarized in section 3.1, that iconicity is not a major factor in children’s phonological
realizations of their earliest signs (Meier et al. 2008).
Rather than being determined by iconicity, early vocabulary content for sign-ex-
posed children appears to be organized around semantic categories that are typical for
infants learning English and other languages (Fenson et al. 1994). This is the conclusion
reached by Anderson and Reilly (2002) for ASL and Woolfe et al. (2010) for BSL,
based on ASL and BSL adaptations of the MacArthur Communicative Development
Inventory, a parental report originally developed for American English by Fenson et
al. (1994). The ASL data, collected from 69 ASL-exposed children, indicated that chil-
dren’s first signs were predominantly nouns and revolved around terms for food (e.g.
milk, cookie), family members (e.g. mommy, daddy, baby), animal names (e.g. dog,
cat), clothing items (e.g. shoe, hat), and greetings (e.g. bye). Acquisition of wh-signs,
negative signs, emotion signs, verbs of cognition, and the onset of two-sign combina-
tions were also similar to the norms reported for American English in both sequence
and time course (Fenson et al. 1994).
The many similarities across early spoken and sign vocabulary development not-
withstanding, the ASL and BSL studies also described some notable differences.
Whereas hearing English learners reportedly experience a “vocabulary burst” at some
point in their first three years (Bloom 1973) during which they rapidly increase their
rate of new word production, the ASL signers studied by Anderson and Reilly (2002)
showed no evidence for such a burst. Instead, they appeared to follow a steady, linear
course of vocabulary development. This is not necessarily a universal feature of sign
development, however, because Woolfe et al. (2010) reported that their BSL subjects
did exhibit a general vocabulary spurt, parallel to that described for spoken language
development. They surmised that Anderson and Reilly may not have sampled their
subjects frequently enough to detect a vocabulary burst. Interestingly, Anderson and
Reilly did report a sort of ‘verb burst’ at around 200 signs, when ASL children’s propor-
tion of predicates increased dramatically, such that by 400+ signs, it was twice that
observed for English learners. A verb bias has been reported for other sign languages,
as well. Hoiting (2006) reported an even more dramatic difference between percenta-
ges of predicates in early English and Sign Language of the Netherlands (NGT), with
her NGT-learning subjects producing predicates five times more frequently than their
English-learning counterparts. Woll (2010) observed that the first 50 English words of
young English/BSL bilingual children included no action words; yet these concepts
were all expressed by the children in their BSL. These findings may reflect important
typological differences by which predicates are more salient in ASL, BSL, and NGT
than in English (Slobin et al. 2003; Hoiting 2006, 2009).
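Woolfe et al.’s methodological point about sampling frequency is easy to demonstrate with a toy computation. The sketch below uses invented data and an invented doubling threshold; it merely illustrates how sparse sampling can average a genuine jump in growth rate away:

    def has_spurt(ages_months, vocab_sizes, ratio=2.0):
        # Growth rate (signs per month) between consecutive samples.
        pairs = list(zip(ages_months, vocab_sizes))
        rates = [(v2 - v1) / (a2 - a1) for (a1, v1), (a2, v2) in zip(pairs, pairs[1:])]
        # A spurt: some rate more than `ratio` times the preceding rate.
        return any(r2 > ratio * r1 > 0 for r1, r2 in zip(rates, rates[1:]))

    # Bimonthly sampling catches the jump; sparser sampling averages it away.
    print(has_spurt([12, 14, 16, 18, 20], [5, 10, 90, 100, 110]))  # True
    print(has_spurt([12, 16, 20], [5, 90, 110]))                   # False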

4.2. The “sign advantage” in early lexical development

Early studies of sign language development (Schlesinger/Meadow 1972; Prinz/Prinz
1979; Bonvillian/Orlansky/Novack 1983, among others) reported sign-exposed children
producing their first signs 1.5⫺4.5 months earlier than the onset of the first word for
speech-exposed children (de Villiers/de Villiers 1978). This difference led to specula-
tion about a so-called “sign advantage”, beginning with the assumption that the articu-
lators for sign languages (i.e. the hands) develop earlier than those used for spoken
languages, allowing sign-exposed children to produce lexical items earlier, which in
turn might lead to accelerated progression through the early stages of linguistic devel-
opment. Indeed, Bonvillian, Orlansky, and Folven (1990) reported that ASL-exposed
children consistently achieved the milestones of first word/sign, first ten words/signs
and first word/sign combination earlier than their English-exposed counterparts. Find-
ings like these quickly catapulted interest in a possible “sign advantage” into the realm
of popular parenting, where a massive “baby sign” industry developed to encourage
preverbal hearing children to use signs and gestures to communicate with their hearing
parents (Garcia 1999; Acredolo/Goodwyn/Abrams 2002).
Meier and Newport (1990) reviewed studies of morphological and syntactic devel-
opment in ASL and spoken language, concluding that the data do not support a sign
advantage for these areas. In contrast, they agreed with previous claims for a sign
advantage for the onset of lexical development, concluding that the age of signing
onset is the age at which all children are ready to begin lexical development, but
that speech-exposed children are delayed due to restrictions of motor development (a
“speech disadvantage”). This position has been contested by Volterra and her col-
leagues (Volterra/Iverson 1995; Capirci et al. 2002), who argue that studies of early
lexical development have conflated early signs with early communicative gestures for
sign-exposed subjects, while counting only early words for speech-exposed subjects.
Once communicative gestures are properly distinguished from signs and are coded for
both sign- and speech-exposed children, both groups show comparable ages of onset
for communicative gestures and first word/sign.
However, recent work by Meier et al. (2008) on motoric factors in early signing
reiterates the claim for a sign advantage in early lexical development. As discussed in
section 3.3, early control of two proximal oscillators (the shoulder and elbows) facili-
tates early and accurate articulation of one out of three major formational parameters
of sign (location), allowing the signing child to signal a comparatively large number of
lexical contrasts in their early output. This may render signing children’s “early clumsy
attempts […] more recognizable to parents and experimenters than […] the garbled
first words of speaking children” (Meier et al. 2008, 341).

5. Morphological and syntactic development


Around the second year, sign-exposed children begin the process of morphological and
syntactic development. Much of the research attention in this area has revolved around
the child’s use of word order and phenomena that alter word order for morphological
(e.g. verb agreement) or pragmatic purposes (e.g. wh-questions, topics, or focus). Inter-
est has also been high for morphological and syntactic constructions unique to sign
languages (e.g. spatial syntax, classifier predicates, and non-manual signals). Many of
these morphological and syntactic phenomena are also relevant to the development of
discourse, which will be mentioned briefly at the end of this chapter.

5.1. Basic word order

Studies of early word order are available for several sign languages, including NGT
(Coerts/Mills 1994; Coerts 2000), ASL (Hoffmeister 1978; Schick 2002; Chen Pichler
2001), LSB (Pizzio 2006), and NS (Torigoe/Takei 2001). All of these sign languages
allow variation in word order, in which the subject and/or object appear in non-canoni-
cal positions. Faced with variable word order in their input, sign-exposed children could
conceivably react in several different ways. They might ignore the variability, insisting
on a single order (perhaps the basic or canonical order of their target language) in
their early production. Alternatively, they might copy the variability they see, but in a
random fashion. Finally, they might demonstrate early acquisition of the syntactic and
pragmatic nuances distinguishing one order from another, leading to target-like word
order variation.
One of the earliest English-language publications on the acquisition of sign word
order was Coerts and Mills (1994), a study of the first subject and verb combinations
of deaf twins acquiring NGT from their deaf mother. Subjects in NGT are canonically
ordered before verbs (SV order), so Coerts and Mills set out to determine the age at
which this ordering rule became productive in the child data. They found such high
variability in the children’s subject placement (i.e. both SV and VS orders) between
1;06 and 2;06 that they were forced to conclude that these children had still not ac-
quired basic SV word order. It was not until later, when Bos (1995) documented sen-
tence-final subjects as a grammatically licensed word order in NGT, that the true rea-
son for the Dutch children’s word order variability became clear. Coerts (2000) re-
analyzed the data from Coerts and Mills (1994) and confirmed that the SVS and VS
orders coded as errors in the earlier study were actually well-formed instances of sen-
tence-final subjects. She concluded that NGT-exposed children control word order (at
least with respect to subjects and verbs) from around the age of 2;01, in line with cross-
linguistic reports.
The importance of taking into account the possibility that children use adult-like
variation early is demonstrated again in the ASL literature on early word order. Hoff-
meister (1978) reported a strong preference for canonical SVO order (i.e. preverbal
subjects and/or post-verbal objects) in the early sign combinations of his three deaf
subjects, reflecting a fixed word order strategy in contrast to the variable word order
of adult ASL (Newport/Meier 1985). Schick and Gale (1996) and Schick
(2002) subsequently reported the opposite trend, finding high word order variability
in the sign combinations of 12 American deaf children at their second birthday. Of the
total multi-sign utterances including a verb and overt theme argument (the authors
referred to agents and themes to avoid making any claims that the syntactic notions
of subject and object had been acquired by this age), only 57 % to 68 % appeared in
canonical verb-theme order, and canonical agent-verb order hovered around 66 %
across most of the children. Schick concluded from these figures that there was no
evidence for a canonical word order strategy in her data, contrary to what had been
previously reported by Hoffmeister (1978).
Chen Pichler (2001, 2008) reanalyzed the word order frequency rates provided by
Hoffmeister (1978) and concluded that although his subjects’ use of canonical orders
increased with time, they produced a significant percentage of their earliest utterances
with non-canonical orders: 17⫺33 % of all utterances containing a subject and verb
were VS, while 38⫺42 % of all utterances containing a verb and an object were OV.
These rates are comparable to those reported by Schick and Gale (1996) and Schick
(2002). Interestingly, although Hoffmeister (1978) did not provide a list of the actual
utterances produced by the children, he noted that OV utterances tended to occur
with verbs that “allow modulation” (most likely referring to agreement for person and
number, but possibly also including location or classifier information) although most
of the forms actually produced by the children in the early stages were uninflected.
Chen Pichler (2001, 2008) hypothesized that, similar to the cases described by Coerts
and Mills (1994), the non-canonical orders reported in the Hoffmeister and Schick
studies might reflect early and target-like use of order-modifying operations.
The study conducted by Chen Pichler (2001) investigated the placement of both
subjects and objects with respect to the verb for all multi-sign utterances produced by
four deaf children between 20 and 30 months of age. All multi-sign utterances contain-
ing a verb and overt object were coded as either canonical VO or non-canonical OV.
Index points (ix) clearly directed towards an identifiable referent (i.e. pronouns) were
counted as overt subjects and objects, a practice that appears to have also been adopted
by both Hoffmeister (1978) and Schick (2002). Additionally, Chen Pichler (2001) coded
non-canonical utterances for evidence of order-modifying operations available in adult
ASL. Following Padden (1988), post-verbal subjects were coded as instances of subject-
pronoun copy, as long as the post-verbal copy of the subject appeared in pronoun
form. Preverbal objects were coded as target-like whenever they occurred with an
aspectual verb, a spatial verb, or a handling verb (where the hand configuration corre-
sponded to either an instrument or a theme of the verb), following Fischer and Janis
(1992) and other proposals for morphosyntactically-motivated object shift or (right-
ward) verb raising in ASL (Matsuoka 1997; Braze 2004). Examples of target-like in-
stances of VS and OV order are shown in (1) and (2), respectively.

(1) put-on-shirt can ix(mother) [ASL, 26 months]
‘You can dress yourself.’
(2) hat bring-here [ASL, 26 months]
‘I’ll bring the hat here.’
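
The coding decisions described above can be sketched schematically; the snippet below is an illustration under simplified assumptions (function and category names are hypothetical, not Chen Pichler’s actual annotation scheme):

    ORDER_MODIFYING_VERB_TYPES = {"aspectual", "spatial", "handling"}

    def code_preverbal_object(verb_type: str) -> str:
        # Preverbal (OV) objects are coded as target-like with these verb types.
        return ("target-like OV" if verb_type in ORDER_MODIFYING_VERB_TYPES
                else "non-canonical OV")

    def code_postverbal_subject(subject_is_pronoun: bool) -> str:
        # Postverbal subjects count as subject-pronoun copy in pronoun form.
        return "subject-pronoun copy" if subject_is_pronoun else "non-canonical VS"

    print(code_preverbal_object("spatial"))   # cf. hat bring-here in (2)
    print(code_postverbal_subject(True))      # cf. ix(mother) in (1)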

Results indicated that the four children’s use of canonical preverbal subjects ranged
between 54 % and 72 % over the age span investigated, while their use of postverbal
objects ranged from 32 % to 52 %. Taken together, these figures do not support an
early fixed word order strategy. Once post-verbal subjects and pre-verbal objects meet-
ing the criteria for order-modifying operations were taken into consideration, the chil-
dren’s rate of target-like word order rose to between 96 % and 97 % for subjects and
verbs, and between 76 % and 86 % for verbs and objects (for all but one child, who
made little use of order-modifying operations; see discussion of this child’s OV produc-
tion in section 5.2.3). Chen Pichler (2001) concluded that canonical word order rules
are acquired early in ASL, by 30 months of age, but that their effects are obscured by
very productive application of subject-pronoun copy and developing competence with
certain morpho-syntactic operations triggering non-canonical OV order. Pizzio (2006),
in a study of early LSB, came to a similar conclusion, although she attributed more of
her subject’s non-canonical object placement to topic and focus (discussed in the next
section) than to the order-modifying verb types studied by Chen Pichler (2001).

5.2. Wh-questions, focus, and topics

The few existing studies that explore wh-questions, focus, and topics are limited to
ASL and LSB. These three topics are presented together here because of their common
potential for movement in generative approaches, the theoretical perspective adopted
by much of the early work on sign language linguistics. According to generative theo-
ries, movement of a wh-element, focused element, or topic may result in either a new
word order (e.g. raising of a sentence-final wh-object to sentence-initial position) or
leave word order unchanged (e.g. raising of a subject, already in sentence-initial posi-
tion, to a topic position further to the left). Word order changes are used in many
languages to signal changes in information structure, an important component of dis-
course/pragmatic organization that has long been assumed to emerge late in child lan-
guage, although recent reports have indicated that some aspects of information struc-
ture are acquired early (e.g. de Cat (2003) for spoken French).

5.2.1. Wh-questions

The position of wh-elements in sign languages displays more variation than is typical
of spoken languages, due to the fact that wh-signs can either remain in-situ (i.e. in their
original, base-generated positions) or move to various positions in the sentence. A
sample of the variety of possible configurations is illustrated by the ASL wh-questions
below, drawn from Lillo-Martin and de Quadros (2006) and Petronio and Lillo-Martin
(1997). (Note that these questions are marked by the wh-non-manual marker; acquisi-
tion of the non-manual component of wh-questions is covered in section 5.5.)

wh
(3) a. wh-initial: what john buy [ASL]
‘What did John buy?’
(generally unacceptable according to Neidle et al. 2000)
wh
b. wh-final: john buy (yesterday) what
‘What did John buy (yesterday)?’
wh
c. wh-doubled: what john buy what
‘WHAT did John buy?’

There is currently much debate over the structure of wh-questions in sign language,
fuelled in large part by the variability illustrated in (3a⫺c). There are two main posi-
tions in this debate, hinging on the direction in which basic wh-movement to the speci-
fier of CP proceeds. Petronio and Lillo-Martin (1997) claimed that wh-elements move
leftward to the specifier of CP, resulting in sentence-initial wh-questions like (3a). In
contrast, Neidle et al. (2000) reported that wh-initial questions such as (3a) are gener-
ally unacceptable, at least for wh-objects. They proposed that wh-movement to the
specifier of CP proceeds rightward, resulting in sentence-final wh-questions like (3b).
Under both positions, wh-doubled constructions such as (3c) involve additional opera-
tions (see chapter 14, Sentence Types, for details).
From an acquisition perspective, both the leftward- and rightward-movement ac-
counts predict that wh-in-situ questions should be among the earliest to appear in child
signing, as these do not require any movement. Under both accounts, then, one would
expect to see sentence-initial wh-subjects, as well as wh-objects surfacing just after the
verb. The next “easiest” type of wh-construction should be those involving basic wh-
movement to the specifier of CP. Here, the two accounts make competing predictions.
According to Petronio and Lillo-Martin (1997), wh-movement to the specifier of CP
should yield a preponderance of both subject and object wh-initial questions in early
production. According to Neidle et al. (2000), the same operation should result in an
early preference for wh-final questions; wh-initial subject questions should only appear
as in-situ questions in early stages, and wh-initial object questions should not occur
at all.
Lillo-Martin and de Quadros (2006) used acquisition data to test these competing
predictions. Their longitudinal data from two ASL-exposed and two LSB-exposed deaf
children (falling within the age range of 1;01 to 3;0) indicated that all four children
produced in-situ and sentence-initial wh-questions from the very earliest observations.
Crucially, sentence-initial wh-questions occurred for both subjects and objects. Dou-
bled wh-questions appeared subsequently in the ASL data, but did not occur in the
LSB data. Neither group of children produced any unambiguously wh-final structures
during the period of observation. This acquisition pattern is expected if wh-movement
is leftward in ASL and LSB, but unexpected if wh-movement is rightward, especially
if object wh-initial questions are unacceptable in ASL, as Neidle et al. (2000) reported.
Further support for leftward wh-movement comes from experimental data from older
ASL signers (4⫺6 years), in which the youngest children showed a strong preference
for wh-initial structures for subject, object, and adjunct wh-elements (Lillo-Martin
2000).

5.2.2. Focus

Like the studies of wh-questions summarized in the previous subsection, the existing
literature on acquisition of focus in sign language is motivated by theoretical debate
over syntactic structure. According to a proposal advanced by Lillo-Martin and de
Quadros (2008), ASL and LSB distinguish between (non-contrastive) information fo-
cus (I-focus) (4) and two related variants of emphatic focus (E-focus): focus doubling
constructions (5a) and focus final constructions (5b). Examples of these focus types
are shown below (these examples are drawn from Lillo-Martin and de Quadros (2005)
and are grammatical in both ASL and LSB).

(4) Q: What did you read?
I-focus
A: book jairo i read [ASL/LSB]
‘I read Jairo’s book.’

(5) a. john can read can [ASL/LSB]
b. john read can
‘John really CAN read.’

Under the Lillo-Martin and de Quadros (2008) proposal, focus double and focus final
constructions are structurally related, while they are unrelated under competing analy-
ses (Neidle et al. 2000). Lillo-Martin and de Quadros (2005) reported that acquisition
data from two ASL-exposed and two LSB-exposed children revealed early use of
I-focus (as early as 1;01 for the Brazilian subjects, and 1;07 for the American subjects).
Focus doubling and focus final constructions emerged at very similar ages (between
1;09 and 2;02), both significantly later than I-focus. Pizzio (2006) reported a similar
gap in ages of acquisition between I-focus and E-focus in her subject (one of the LSB
subjects examined by Lillo-Martin and de Quadros (2005)), and added that a third
type of focus, contrastive focus, appeared at 2;01. These data support the proposal by
Lillo-Martin and de Quadros (2008) linking focus doubles and focus finals, and indicate
that sign-exposed children use at least some aspects of information structuring early
in development, echoing recent reports from spoken languages research (e.g. de Cat
2003).

5.2.3. Topics

In their investigation of grammatical non-manual markers in early signing, Reilly,
McIntire, and Bellugi (1991) reported that the ASL topic non-manual (raised brows
over the topicalized element) did not emerge until 3;0 in their data. The authors noted
that children might potentially produce topics prior to that age, but that only the non-
manual marker constitutes “inescapable evidence” that the child has developed compe-
tence in this domain (Reilly/McIntire/Bellugi 1991, 15). This view is consistent with the
prevailing assumption that brow raise is the most salient and critical component of the
ASL topic non-manual.
In contrast, Nespor and Sandler (1999) and Rosenstein (2001) reported that while
brow raise sometimes marks topics in Israeli Sign Language (Israeli SL), the true topic
marker is not limited to that or any other single non-manual feature. Rather, it involves
the simultaneous change of a combination of features (that could include widened eyes,
raised brows, head nods, eye blinks, and holds) between the topic and the remainder of
the sentence (the comment). Chen Pichler (2001, 2010) examined utterances from
one ASL-exposed child with preverbal objects but no evidence of the order-modifying
features discussed in section 5.1. She found very simple prosodic breaks of the type
described for Israeli SL, beginning at 24.5 months. These were typically characterized
by repetition or holding of the topic sign, followed by a change in head position or
slight nodding of the head for the remainder of the sentence. Using similar evidence,
Pizzio (2006) credited her LSB-exposed subject with early use of topics, marked with
various non-manual features (brow raise, head movement, or eye gaze direction).
These two studies, although very preliminary, suggest that competence in topicalization
may begin nearly a year earlier than estimated by Reilly, McIntire, and Bellugi (1991).

5.3. Spatial syntax

Perhaps the most prolifically studied phenomenon in sign acquisition is spatial syntax,
or the use of space to establish reference for pronouns and what has traditionally been
referred to as verb agreement (see chapters 7 and 11 for discussion). Lillo-Martin (1999)
summarized four crucial things that a sign-exposed child must learn in order to control
spatial syntax: “(a) to associate a referent with a location, (b) to use different locations
for different referents […], (c) to use verb agreement or pronouns with non-present
referents, and (d) to remember the association of referents with locations over a stretch
of discourse” (Lillo-Martin 1999, 538⫺539). Newport and Meier (1985) provided an
excellent summary of the extensive research on the acquisition of ASL pronouns and
verb agreement conducted through the mid-eighties. The general consensus of that
early literature was that the heavily iconic properties of spatial syntax did not facilitate
its acquisition by sign-exposed children. The studies of Meier (1982) and Petitto (1987)
presented this position particularly clearly, for verb agreement and pronouns, respec-
tively.
Petitto (1987) documented the development of pointing behaviors for two ASL-
exposed deaf girls between 0;06 and 2;03. Both girls engaged in pointing at people,
objects, locations, and events at 10 months, the same age at which hearing children
(not exposed to sign language) begin to point gesturally. Between 12 and 18 months,
the girls replaced pointing at people, including themselves, with lexical signs (mostly
kinship terms like mother); other types of pointing continued unchanged. Pointing at
people resumed between 21 and 23 months for both children, but they committed
reversal errors in both production and comprehension in which they signed/understood
you or your(s) to refer to themselves. The full pronoun system was not mastered until
around 27 months. Petitto noted that the observed pattern of pronoun avoidance, er-
rors and acquisition displayed striking similarities in timing to the development of
spoken language pronouns: pronouns emerge in early speech between 18 and
20 months, but are unstable and prone to error (including reversal errors) until 30
months (Charney 1978). Although this particular pattern of pronoun development has
not been replicated by other researchers, Petitto’s data have been widely interpreted as
evidence that children do not transition smoothly from purely gestural, non-linguistic
pointing to formal ASL deictic pronouns. The transparent nature of the latter does not
accelerate the mastery of pronouns by sign-exposed children with respect to their
speech-exposed peers.
Meier (1982) investigated the development of verb agreement in ASL with present
referents. He reported that from 2;0 to 2;06, his three ASL-exposed subjects used verbs
that participate in agreement, but produced most of them in citation (uninflected)
form, an early preference also documented by Hoffmeister (1978) and others. Meier
concluded that his subjects did not acquire verb agreement (under the stringent crite-
rion of suppliance in 90 % of obligatory contexts for acquisition) until between 3;0 and
3;06. This is late compared to children learning languages like Turkish that feature rich,
regular, and phonetically salient verbal morphology, yet comparable to children learn-
ing languages like English, where verbal morphology is less reliable (Slobin 1982).
Meier concluded that the iconic qualities of the ASL verb agreement system do not
facilitate acquisition, nor lead sign-exposed children to analyze inflected verbs as holis-
tic, ‘mimetic’ representations of real world actions.
The age of acquisition reported by Meier (1982) applied to agreement with present
referents, but sign languages also allow agreement with non-present referents. In these
cases, referents are associated with locations established in signing space, a task that
presents difficulty for children. For example, Loew (1984) reported that agreement
with non-present referents was not consistently correct until 4;09 for her subject, well
after agreement with present referents was controlled. Her data included spontaneous
narratives with multiple characters, and revealed interesting errors. Between 3;06 and
3;11, Loew observed ‘stacking’ errors, in which the child used the same location in
space for more than one referent. In other instances, the child established multiple
referents in different locations, but in an inconsistent way. Between 4;0 and 4;09, the child
directed verb forms towards real life objects standing in for non-present referents (e.g.
establishing a chair habitually occupied by her father as the location for her non-
present father); agreement with these ‘semi-real world forms’ had been previously
noted by Hoffmeister (1978).
If agreement with non-present referents is controlled a full year later than agree-
ment with present referents, this could indicate that children acquire the two as sepa-
rate systems. However, Lillo-Martin et al. (1985) and Bellugi et al. (1988) argued
against this conclusion, claiming that difficulties related to spatial memory are behind
the delay in acquiring agreement with non-present referents. Their deaf ASL-exposed
subjects scored poorly on act-out and picture selection tasks testing comprehension of
agreement with non-present referents (Lillo-Martin et al. 1985) and continued to make
errors of the type described by Loew (1984) in their production until the age of 5;0
(Bellugi et al. 1988). However, subjects as young as 3;0 were successful in a task requir-
ing them to watch the experimenter place two or three referents in space (e.g. boy
here-a, girl here-b) then answer questions about associations between specific referents
and spatial locations (e.g. where boy or what here-a). Furthermore, they performed
more accurately on test items involving two referents than on items with three referents
(Lillo-Martin et al. 1985). Thus ability to associate spatial locations with referents,
crucial for verb agreement in sign language, appears to be in place by 3 years, although
it is subject to memory limitations.
Recent work on sign verb agreement has both extended investigation crosslinguisti-
cally and demonstrated that our understanding of this aspect of sign acquisition is still
far from complete. Hänel (2005) reported productive verb agreement with both present
and non-present referents emerging together (i.e. with no lag) for two deaf children
learning German Sign Language (DGS). Casey (2003) extended the search for “direc-
tionality” to gestures, reporting directional gestures and ASL verbs much earlier than
previous studies (as early as 0;08 for gestures and 1;11 for signs). Additionally, Casey
noted production of directionality with verbs “denoting literal, iconic movement prior
to those denoting metaphorical movement”, a sequence she interpreted as evidence
that children attend to iconicity in their development of verb agreement (contra Meier
1982). Perhaps most surprisingly, de Quadros and Lillo-Martin (2007) reported that
their ASL and LSB data included virtually no instances of verb agreement omission
in obligatory contexts, even as early as 2;0. They included eye gaze as a possible marker
of agreement, contributing to a higher rate of target-like production than reported in
earlier studies like Meier (1982). They also found that a sizeable portion of children’s
uninflected forms were judged as acceptable by native-signing adults, and indeed were
also produced by adults interacting with the children (see also Morgan/Barrière/Woll
(2006) for reports of similar verb agreement variability in child-directed BSL). Count-
ing these forms as target-like not only reduces the number of obligatory contexts, but
also calls into question the traditional, strict categorization of agreeing verbs as always
requiring inflection.
Finally, some researchers have pointed to parallels between the development of verb
agreement in sign languages and non-linguistic spatial and representational skills (Em-
morey 2002; Jackson 2006), such as the ability to understand a scale model of a room
as a spatial representation of a real room (Blades/Cooke 1994). The possible influence
of developing spatial coding strategies on children’s acquisition of verb agreement in
sign languages is an interesting line of investigation that deserves further exploration.

5.4. Constructions formerly known as classifier predicates

One of the long-standing challenges to sign language research is a comprehensive yet
coherent account of what was known in the early literature as ‘classifier predicates’
(see chapter 8 for discussion). This original label invoked a deliberate parallel with
classifiers in spoken languages, whose function is to “categorize nouns by salient, per-
ceived characteristics of their referents” (Kantor 1980, 41). Recently, however, this
perceived parallel between spoken and sign classifiers has come under heavy scrutiny,
prompting multiple proposals of new terminology to replace the term “classifier” in
sign language research (although I will continue to use the term “classifier predicates”
in this section for the sake of simplicity). Recent studies are also reconsidering the
effect of iconicity on the acquisition of classifier constructions. Iconicity is a striking
characteristic of these constructions; to describe a car driving past a tree, adult signers
maneuver a dominant handshape representing a vehicle past a non-dominant hand-
shape representing a tree. The degree to which children perceive (and reproduce) such
representations as analogue or mimetic has been a topic of considerable debate.
The earliest reports on children’s production of classifier constructions, focusing on
‘complex verbs of motion’ such as the example above, noted a very protracted course
of acquisition and a pattern of errors suggesting that children approach these construc-
tions as morphological complexes. Newport and Supalla (1980) reported that ASL-
exposed deaf children under 3;0 failed to express manner of movement (e.g. producing
simple linear movement when the target called for linear+bouncing movement) and
often omitted secondary ground objects (e.g. the tree in the adult example given
above); similar omissions of ground objects have been noted crosslinguistically (e.g. by
Morgan et al. (2008) for BSL, and Tang/Sze/Lam (2007) for Hong Kong Sign Lan-
guage). Older children continued to omit aspects of target movement or produced
them sequentially rather than simultaneously (e.g. linear movement followed by boun-
cing movement), even as late as 5 years. Researchers interpreted these errors as evi-
dence for a morphemic analysis of classifier predicates, by which each element of the
described event (e.g. path, manner, entity undergoing movement) is composed of dis-
crete subunits drawn from a finite list of possible values (e.g. an upward movement in
an arc is composed of upward+arc). Furthermore, children’s error patterns were taken
as evidence for a universal bias towards componential analysis of language, indicating
once again that “the potential iconicity of ASL morphology does not assist in its acqui-
sition” (Newport/Meier 1985, 908).
Other early studies reported sequences of acquisition for the three subcategories of
classifiers traditionally distinguished in the literature, based on the type of information
encoded by the classifier handshape. Handle classifiers specify the shape of the human
hand when handling a particular object (e.g. the C-handshape used for holding a
glass or can). In the size and shape or SASS classifiers, the handshape encodes “visual
geometric features of the object” (Schick 2006), capable of showing not only the shape
of the referent, but also its depth (e.g. two flattened C-handshapes representing thin disk-like
objects like a plate, versus two C-handshapes for a deeper pot). Finally, in entity
classifiers, also known as class or semantic classifiers, the handshape represents a se-
mantic category (e.g. the 1-handshape representing upright entities).
In a picture elicitation task with 24 ASL-exposed deaf children aged 4;05 to 9;0,
Schick (1990) found that with respect to handshape, children were most accurate with
entity classifiers, followed by SASS, then handle classifiers. Schick attributed this pat-
tern to the morphological complexity of SASS and handle handshapes, which she ana-
lyzed as including morphemes for size, depth, and movement, in contrast to monomor-
phemic entity handshapes. Kantor (1980) suggested that classifier status in itself
seemed to add processing complexity; as mentioned in section 3.3.3, she noted that
some handshapes already controlled in lexical signs were produced with errors when
they appeared as entity classifiers. In contrast, with respect to location accuracy, chil-
dren demonstrated more accuracy for handle classifiers than for either entity or SASS
classifiers. Schick surmised that this was because it might be easier to use syntactic
space for verb inflection (of which she analyzed handle classifier predicates to be one
case) than to encode locative relationships (as in the case of entity and SASS classifier
predicates). She cited the late acquisition of the latter, despite their highly iconic repre-
sentation in ASL, as further evidence that “iconicity has little effect on the acquisition
of a morphological system despite the potential for such an analysis” (Schick 1990,
370).
More recently, however, Schick (2006) has argued that earlier studies were too sim-
plistic in their assessment of children’s sensitivity to iconicity. Effects were regarded as
all or nothing, when in fact it is more likely that iconicity affects some aspects of
acquisition more, and others less. For example, Schick (2006) points out that although
complete mastery of the classifier system occurs late in acquisition, parts of it are in
place from as early as 2;0 (Lindert 2001). Children between 2;0 and 3;0 recognize
contexts that call for classifiers and select semantically appropriate (if not formation-
ally accurate) forms (Schick 2006; Kantor 1980). They comprehend signed classifier
predicates depicting figure and ground despite frequent omissions of ground in their
own production (Lindert 2001). They do not resort to lexical strategies for encoding
spatial relations (e.g. prepositional signs like on or in), even when these are available
in their sign language, and despite a preference for lexical strategies in other domains
of sign acquisition (e.g. as an alternative to certain grammatical non-manual markers
(Reilly 2000), summarized in the next section).
Slobin et al. (2003) also argued for an effect of iconicity on classifier acquisition,
citing the early ability of their ASL- and NGT-exposed deaf subjects (as young as 2;0)
to meaningfully select handshapes bearing a visual relationship to the referent.
They rejected earlier proposals that classifier acquisition is difficult because object
categorization is difficult (Newport/Meier 1985), arguing that classifiers do not actually
categorize at all. Rather, they simply depict some property of the referent, a task well
within the abilities of a two-year-old. De Beuzeville (2006) expanded on this perspec-
tive, framing her investigation of classifier acquisition in Australian Sign Language
(Auslan) in terms of depicting verbs (Liddell 2003). Consistent with previous observa-
tions from ASL and NGT, she reported that handling depicting verbs were the earliest
to be controlled in Auslan. She also drew parallels in timing and stages between the
development of depiction and the development of visual representation (such as draw-
ing or gesture). Whereas early sign language studies automatically equated discreteness
with arbitrariness, de Beuzeville argued that both depicting verbs and visual represen-
tation combine elements that are analogue and iconic with elements that are dis-
crete and iconic.
In short, many now argue that the representation of iconic relations through a lin-
guistic system is not difficult for children and can be observed in their early production
(Schick 2006). These researchers propose that the protracted course of acquisition
observed for classifier constructions is due to the complex discourse functions that
children must control when they use these constructions, including establishment of
referents represented by classifier handshapes, coordination of the relation of figure
to ground, manipulation of focus or perspective, and so on (Slobin et al. 2003).

5.5. Non-manual markers

It is well known that sign language grammars involve not only a manual component,
but a non-manual component as well. Research on non-manual activity has tradition-
ally focused on the face and head, with more limited reference to positions of the rest
of the body (e.g. shoulder shrugs and body leans). One can broadly distinguish between
lexical, communicative (or affective), and grammatical non-manuals in sign languages
(see Pfau/Quer (2010) for an overview). The first refer to specific non-manual configu-
rations that are lexically specified for particular signs; these will not be discussed in
this chapter. Affective non-manuals convey emotional and affective information (e.g.
scowling during angry signing). They occur with great variability across structures and
signers and are considered to be communicative or paralinguistic. Only grammatical
non-manuals are considered to be fully within the domain of linguistic organization,
their appearance subject to grammatical constraints on form and scope (Reilly 2000).
This chapter takes as its point of departure the traditional literature on ASL, recogni-
zing distinct and obligatory non-manual markers for yes-no questions, wh-questions,
relative clauses, conditionals, topics, and various adverbial constructions (Liddell 1980).
Readers should be aware, however, that the degree to which these non-manuals are
obligatory varies across sign languages (cf. Zeshan 2004), and traditional assumptions
about the obligatory status of grammatical non-manuals in ASL are beginning to be
questioned.
On the surface, some communicative and grammatical non-manuals overlap consid-
erably in form, prompting speculation that the latter category developed via grammati-
calization of related communicative non-manuals (MacFarlane 1998; Pfau/Steinbach
2006; also see chapter 34, Lexicalization and Grammaticalization). For instance, the
same headshake that is used as a communicative non-manual among hearing popula-
tions is also used as a grammatical non-manual marker for negative constructions in
ASL. Similarly, as can be seen in Figure 28.3, the non-manual marker for wh-questions
bears a resemblance to the affective expression adopted when one is perplexed or
confused (both are characterized by a furrowing of the brow).

Fig. 28.3: The ASL wh-non-manual (left; © 2006, www.Lifeprint.com; used by permission) and
the communicative non-manual indicating that one is perplexed or confused (right)

Communicative non-manuals are acquired by hearing and deaf children alike within
the first year of life (Hiatt/Campos/Emde 1979; Nelson 1987; Reilly 2006). Given the
resemblance of some grammatical non-manuals to communicative non-manuals, an
interesting question is whether sign-exposed infants are able to transfer their early
control of the latter to serve grammatical purposes. This question has been extensively
investigated by Judy Reilly and her colleagues over a series of studies on early ASL
(summarized in Reilly 2006). The general pattern observed by these researchers is that
grammatical non-manuals are acquired much later than communicative non-manuals,
manifesting error patterns that reflect the complexity of coordinating the manual and
non-manual channels used in sign language.
The case of negatives serves as an illustrative example. Anderson and Reilly (1997)
reported that deaf children learning ASL begin to use communicative headshakes as
gestures around their first birthday, similar to their hearing, non-signing counterparts.
By 18⫺24 months, the first negative signs (no and don’t-want) emerged in their data,
followed over the next 26 months by none, can’t, don’t-like, not, don’t-know, and
not-yet. Crucially, each time a new negative sign emerged, it initially appeared without
the obligatory headshake. Anderson and Reilly interpreted this pattern as evidence
that children cannot recruit their ability with communicative non-manuals directly into
the domain of grammatical function. They must first analyze the manual and non-
manual components of ASL grammatical structures as distinct, independent elements
before they can combine them appropriately.
This process may take years, a fact that is most clearly demonstrated by the pro-
tracted pattern of errors characterizing the development of the ASL wh-non-manual.
Reilly, McIntire, and Bellugi (1991) reported that their ASL-exposed subjects produced
around 18 months what appeared to be well-formed combinations of simultaneous wh-
signs (e.g. what) and the ASL wh-non-manual (brow furrow). Reilly and her colleagues
argued that these were actually unanalyzed signCnon-manual amalgams, because sub-
sequently (around 30 months), children dropped the non-manual, signing wh-signs with
a blank face. Alternatively, they marked their wh-questions with an inappropriate non-
manual marker, brow raise (also attested in the mothers’ child-directed signing, partic-
ularly when the mother actually already knew the answer to the wh-question). Around
5 years of age, children combined wh-signs with brow raise, but restricted the scope of
the non-manual to just the wh-sign. Only around 6 or 7 years of age did children finally
manage proper coordination of wh-signs and the wh-non-manual with appropriate
scope (see also Lillo-Martin (2000), discussed in section 5.2.1).
The error patterns described above for negatives and wh-questions, as well as those
for other non-manuals investigated by Reilly and her colleagues, reveal an important
generalization about how children approach the acquisition of grammatical non-manuals:
until children are able to coordinate the manual and non-manual components of
structures that are normally (redundantly) marked by both, they systematically opt to
preserve the manual channel, sacrificing the non-manual channel in an apparent
“hands before faces” bias (Reilly 2006, 286). This bias is in contrast to the adult lan-
guage, in which negative and question signs may remain unexpressed because their
corresponding non-manual markers are sufficient to encode their illocutionary force.

Fig. 28.4: Wh-signs with blank face (left) and raised brows (right)
A second generalization noted by Reilly and her colleagues relates to how children
react when the same non-manual component serves as a grammatical non-manual
marker for multiple distinct syntactic structures. They noted that
children’s strategies in this situation followed from the principle of unifunctionality
(Slobin 1973), by which children initially assume a one-to-one mapping of grammatical
form to function, and resist marking multiple construction types with the same marker.
Brow raise is a salient feature of several grammatical non-manuals in ASL, including
conditionals, yes-no questions, and topics. Prior to the age of 3 years, subjects in the
Reilly studies began producing all three of these constructions, but only yes-no ques-
tions were correctly marked with brow raise (Reilly/McIntire/Bellugi 1991). Both topics
(or more accurately, preposed objects that were plausible candidates for topics) and
conditionals appeared without the obligatory brow raise. Instead, children marked
these two structures from within the manual channel. Topics were signaled by moving
the topic signs to the front of the clause (and possibly by marking them with prosodic
patterns as described earlier by Chen Pichler (2010)). Conditionals were signaled by
using lexical markers of conditionality (e.g. the signs suppose or #if) that are optional
in the adult system. Both of these strategies also lend further support to the “hands
before faces” bias mentioned earlier.
Finally, to the extent that some grammatical non-manuals resemble the communica-
tive non-manuals for related affective reactions (e.g. the ASL wh-non-manual vs. non-
manual indicating that one is puzzled or perplexed), it could be argued that these
grammatical non-manuals are iconically motivated. If sign-exposed children are sensi-
tive to the iconic link between communicative and grammatical facial non-manuals,
they might potentially use the former, which are reportedly acquired early, in the first
year of life (Hiatt/Campos/Emde 1979; Nelson 1987), to ‘break into’ the system of
grammatical non-manuals in the target sign language. This again raises the question of
whether this iconic link is recognized and exploited by the sign-exposed child, affecting
the acquisition timetable. As we have seen, this does not appear to be the case. ASL
grammatical non-manuals are acquired fairly late, indicating that the potential iconic
link between grammatical non-manuals and related affective non-manuals does not
facilitate the acquisition process.

6. Discourse development

As children progress into their fifth year of life, their signing demonstrates increasing
command of spatial syntax, classifiers, and non-manuals at the sentence level. How-
ever, children often require several more years before they are able to use these same
syntactic devices appropriately at the discourse level, where additional pragmatic con-
straints come into play. This lag is quite apparent in the narratives of young signing
children, which differ from those of adults in many respects. Research in this area has
largely focused on the development of referential shift (also known as role shift or
constructed action), a device commonly used in sign narratives (see chapter 17 for
discussion). In adult signing, referential shift allows the signer to switch between nar-
rating an event and showing a particular character’s point of view of that same event.
Shifts between the points of view of multiple characters within a single narrative are
also possible. In both cases, adult signing includes a variety of features to ensure that
referents can be properly distinguished and identified by the addressee. For example,
as discussed by Reilly (2006), an adult signer using referential shift to express a direct
quote by Baby Bear in a narrative about Goldilocks and the Three Bears would typi-
cally preface the shift by labeling the character whose perspective is about to be shown,
by pointing to the locus previously established for baby bear, and/or signing baby bear.
As the actual referential shift begins, often with a physical shift to one side of the head
and upper torso, the signer’s eye gaze and non-manual expression change to reflect
that of Baby Bear. Additionally, all instances of pronouns, verb agreement and other
spatial syntax produced during referential shift are interpreted from the point of view
of the character.
In a study of 28 Deaf, native ASL signers between ages 3;0 and 7;05, Reilly (2006)
found that referential shift for direct quotes occurred in elicited narratives of even
their youngest subjects, signaled by a disengagement of eye contact from the addressee.
From 3 to 4 years of age, children also assumed non-manual expressions associated
with the shifted character, but were inconsistent in their timing and scope until the age
of 6 or 7 years. A similar pattern was reported by Lillo-Martin and de Quadros (2011)
in a study of two Deaf children between 1;07 and 2;05, one acquiring ASL and the
other LSB. Almost all instances of referential shift produced by these children were
marked by changes in eye gaze and/or non-manual expression, with greater accuracy
in scope and timing than was reported for the children in the Reilly (2006) study. This
difference is likely a reflection of methodology: Lillo-Martin and de Quadros (2011)
examined instances of referential shift that occurred spontaneously in natural produc-
tion, a context imposing fewer cognitive demands than the elicited narratives of the
Reilly (2006) study.
Both Reilly (2006) and Lillo-Martin and de Quadros (2011) reported that their
youngest subjects were unreliable in labeling the character whose perspective was be-
ing represented by referential shift, often resulting in non-adult-like ambiguity. These
errors most likely stem from young children’s oft-noted lack of awareness that others
do not always share their knowledge and assumptions. Similarly, Morgan (2006) re-
ported that young BSL signers between 4 and 6 years old occasionally used referential
shifts to introduce new referents in their narratives, a function that is more appropri-
ately accomplished with a noun phrase label. However, the majority of the British
children’s referential shifts were used to maintain previously introduced referents, so
it seems that even very young children are aware of this important pragmatic function
of referential shift at the discourse level.
Well-formed sign narratives also require proper organization, including accurate
sequencing of episodes within the narrative and pragmatically appropriate reference
forms. Morgan (2006) elicited narratives from young BSL signers that included scenes
in which two events co-occur. The youngest children (ages 4⫺6) focused on only one
of the two events, and tended to overuse full noun phrases when referring to the same
subject in successive sentences. Older children (ages 7⫺10) produced narratives that
included both co-occurring events, but alternated between them without integrating
them into a single scene. Only the oldest children in the study (ages 11⫺13) were able
to convey simultaneous occurrence of both events, using entity classifiers in overlap-
ping space and multiple instances of referential shift. Narratives such as these illustrate
the complex integration of multiple syntactic devices that is required at the discourse
level. It is not surprising, then, that children display a protracted course of acquisition
for narratives that extends long after they have acquired sentence-level control of spa-
tial syntax, classifiers, and non-manuals.

7. Acquisition in other contexts


The length of the current chapter testifies to the considerable and rapidly expanding
literature that now exists regarding the acquisition of sign language as a first language.
The vast majority of this literature, however, is limited to deaf signers with early expo-
sure to the target sign language, only a tiny fraction of the total deaf population (about
5 % in the US, according to Mitchell/Karchmer 2004). As mentioned earlier, this bias
has been intentional, reflecting a desire to understand how sign acquisition proceeds
under the most optimal conditions, in order to have a baseline against which other
types of acquisition can be compared. While studies of late-exposed signers abound
(readers are referred to Mayberry/Locke (2003) and Emmorey (2002) as good starting
points on this literature), very few test late signers during childhood. Yet childhood
studies are crucial for revealing developmental divergences from the baseline. For ex-
ample, previous studies have reported that control of basic word order is spared in
adults who were exposed to ASL after age 12 (Newport 1990), suggesting that age of
acquisition has little effect on word order. However, Lillo-Martin and Berk (2003),
reporting on two late-exposed children in the two-sign stage, provided a more compre-
hensive developmental picture: these children not only relied mostly on canonical SVO
word order, but their production of non-canonical orders was lower and significantly
more prone to error than that of the native-signing children in Chen Pichler (2001).
Unfortunately, systematic comparisons between early and late L1 signers in childhood
of this type are still quite rare, despite the considerable baseline knowledge we have
accumulated so far.
Another aspect of sign acquisition that still awaits further study is bilingual acquisi-
tion, as manifested in both deaf and hearing children. Research on spoken language
bilinguals has shown us that these individuals display interesting inter-language effects,
particularly during the early phases of development. These effects can significantly
alter the course of acquisition, such that the early Croatian of a Croatian-Taiwanese
bilingual can be quite different from that of a monolingual Croatian speaker at the
same developmental stage. In addition to typical inter-language effects, sign-speech
bilingualism also displays effects unique to bilingualism across modalities. For instance,
van den Bogaerde and Baker (2005), Petitto et al. (2001), and Lillo-Martin et al. (2009)
describe patterns of simultaneous mixing of spoken Dutch/NGT, spoken French/LSQ,
spoken English/ASL, and spoken Portuguese/LSB, in the production of young bimodal
bilinguals. Some of the earliest studies of L1 sign acquisition were actually conducted
on hearing sign-speech bilinguals (e.g. Siedlecki/Bonvillian 1997), but researchers are
only recently beginning to focus on the interaction of these children’s developing gram-
mars in two modalities. Fewer still have investigated sign-sign bilingualism (but see
Pruss-Ramagosa 2001), an instantiation of bilingualism that is becoming increasingly
common as Deaf adults become more internationally mobile. Because of the potential
of these studies to uncover phenomena that do not occur in speech-only bilingualism,
they contribute uniquely to our understanding of how the human mind organizes multi-
ple languages, and how these develop and interact with one another (see also chapter 39
for discussion of bilingualism).
Finally, very little research has been conducted on the acquisition of sign language
as a second language. Most existing reports in this area focus on phonological aspects
of second language signing (Mirus/Rathmann/Meier 2001; Rosen 2004; Chen Pichler
2011; Ortega/Morgan 2010). Like the study of sign bilingualism, studies of second lan-
guage signing have great potential to uncover modality-specific effects that do not
occur in traditionally studied second language acquisition of a spoken or written lan-
guage. There are also likely to be differences between learners who already know a
sign language and those who do not. Some researchers adopt the term ‘M1/L2 signers’
for individuals who are learning a second sign language versus ‘M2/L2 (second modal-
ity second language) signers’ for those who are learning their first sign language. In
addition to standard L2 effects, this latter group might be subject to additional effects
of learning language in a new modality. The demand is growing for more research on
M2/L2 acquisition as sign language courses rise in popularity across the US and other
countries. Effective teaching methods for M2/L2 signers are also critical for hearing
parents of deaf infants, who must learn a sign language as quickly and accurately as
possible in order to provide accessible language exposure to their children.

8. Conclusions

The studies of sign language acquisition summarized in this chapter span just over four
decades and have clearly demonstrated their importance for developing balanced and
truly universal proposals about how children develop their first language. Acquisition
studies have always served as crucial tests for linguistic theory, and in this sense sign
acquisition studies are doubly useful, with the potential to inform us on issues of mo-
dality as well as learnability. For example, two of the studies mentioned in section 3
were explicitly designed to test current theoretical models of sign phonology: Karnopp
(2008) for Dependency Phonology approaches to sign phonology (van der Hulst 1995)
and Morgan, Barrett-Jones, and Stoneham (2007) for the Prosodic model proposed
by Brentari (1998). In the realm of syntax, Lillo-Martin and de Quadros (2005) used
acquisition data as a tool to judge between competing proposals on the direction of
wh-movement in ASL.
The sign acquisition studies summarized here also reveal striking parallels with the
acquisition of spoken languages, pointing to fundamental mechanisms that shape lan-
guage development in either modality. At the same time, we are discovering important
modality-specific phenomena that are equally important to linguistic inquiry, as they
broaden our understanding of the possible ways in which the human brain perceives,
acquires, and organizes language. The next task facing sign acquisition research is to
replicate past findings with larger sample sizes and more diverse populations, including
bilingual learners and learners with delayed exposure to signed input. More crosslin-
guistic studies will also remain important. Whereas ASL is disproportionately repre-
sented in this chapter, reflecting past research trends, there is now a rich and growing
body of comparative work from other sign language communities, including some that
have only recently been discovered. Propelled by such a burst of linguistic diversity,
the next decades of sign acquisition research promise to be just as fruitful and thought-
provoking as the first have been.

9. Literature
Acredolo, Linda/Goodwyn, Susan/Abrams, Douglas
2002 Baby Signs: How to Talk with Your Baby Before Your Baby Can Talk. New York:
McGraw-Hill.
Anderson, Diane/Reilly, Judy
1997 The Puzzle of Negation: How Children Move from Communicative to Grammatical
Negation in ASL. In: Applied Psycholinguistics 18, 411⫺429.
Anderson, Diane/Reilly, Judy
2002 The MacArthur Communicative Development Inventory: Normative Data for Ameri-
can Sign Language. In: Journal of Deaf Studies and Deaf Education 7, 83⫺106.
Battison, Robin
1978 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Bellugi, Ursula/Hoek, Karin van/Lillo-Martin, Diane/O’Grady, Lucinda
1988 The Acquisition of Syntax and Space in Young Deaf Signers. In: Bishop, Dorothy/
Mogford, Karen (eds.), Language Development in Exceptional Circumstances. Edin-
burgh: Churchill Livingstone, 132⫺149.
Beuzeville, Louise de
2006 Visual and Linguistic Representation in the Acquisition of Depicting Verbs: A Study of
Native Signing Deaf Children of Auslan. PhD Dissertation, University of Sydney.
Blades, Mark/Cooke, Zana
1994 Young Children’s Ability to Understand a Model as a Spatial Representation. In: Jour-
nal of Genetic Psychology 155, 201⫺218.
Bloom, Lois
1973 One Word at a Time: The Use of Single-word Utterances Before Syntax. The Hague:
Mouton.
Bogaerde, Beppie van den/Baker, Anne
2005 Code Mixing in Mother-Child Interaction in Deaf Families. In: Sign Language and
Linguistics 8, 153⫺176.
Bonvillian, John/Orlansky, Michael/Folven, Raymond
1990 Early Sign Language Acquisition: Implications for Theories of Language Acquisition.
In: Volterra, Virginia/Erting, Carol (eds.), From Gesture to Language in Hearing and
Deaf Children. Washington, DC: Gallaudet University Press, 219⫺232.
Bonvillian, John/Orlansky, Michael/Novack, Lesley
1983 Developmental Milestones: Sign Language Acquisition and Motor Development. In:
Child Development 54, 1435⫺1445.
Bonvillian, John/Siedlecki, Theodore
1996 Young Children’s Acquisition of the Location Aspect of American Sign Language
Signs: Parental Report Findings. In: Journal of Communication Disorders 29, 13⫺35.
Bos, Heleen
1995 Pronoun Copy in Sign Language of the Netherlands. In: Bos, Heleen/Schermer, Trude
(eds.), Sign Language Research 1994: Proceedings of the 4th European Congress on Sign
Language Research. Hamburg: Signum, 121⫺148.
Boyes Braem, Penny
1973 Acquisition of the Handshape in American Sign Language. Manuscript, University of
California, Berkeley.
Boyes Braem, Penny
1990 Acquisition of the Handshape in American Sign Language: A Preliminary Analysis. In:
Volterra, Virginia/Erting, Carol (eds.), From Gesture to Language in Hearing and Deaf
Children. Washington, DC: Gallaudet University Press, 107⫺127.
Braze, Forrest David
2004 Aspectual Inflection, Verb Raising, and Object Fronting in American Sign Language.
In: Lingua 114, 29⫺58.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Capirci, Olga/Iverson, Jana/Montanari, Sandro/Volterra, Virginia
2002 Gestural, Signed and Spoken Modalities in Early Language Development: The Role of
Linguistic Input. In: Bilingualism: Language and Cognition 5, 25⫺37.
Casey, Shannon
2003 ‘Agreement’ in Gestures and Signed Languages: the Use of Directionality to Indicate
Referents Involved in Actions. PhD Dissertation, University of California, San Diego.
Cat, Cécile de
2003 Syntactic Manifestations of Very Early Pragmatic Competence. In: Beachley, Barbara/
Brown, Amanda/Conlin, Frances (eds.), Proceedings of the 27th Annual Boston Univer-
sity Conference on Language Development. Somerville, MA: Cascadilla Press, 209⫺219.
Charney, Rosalind
1978 The Development of Personal Pronouns. PhD Dissertation, University of Chicago.
Cheek, Adrianne/Cormier, Kearsy/Repp, Ann/Meier, Richard
2001 Prelinguistic Gesture Predicts Mastery and Error in the Production of First Signs. In:
Language 77, 292⫺323.
Chen Pichler, Deborah
2001 Word Order Variability and Acquisition in American Sign Language. PhD Dissertation,
University of Connecticut.
Chen Pichler, Deborah
2008 Views on Word Order in Early ASL: Then and Now. In: Quer, Josep (ed.), Signs of the
Time. Selected Papers from TISLR8. Seedorf: Signum, 293⫺315.
Chen Pichler, Deborah
2009 Development of Sign Language. In: Bot, Kees de/Makoni, Sinfree/Schrauf, Robert
(eds.), Language Development Over the Life-span. Mahwah, NJ: Lawrence Erlbaum,
217⫺241.
Chen Pichler, Deborah
2010 Using Early ASL Word Order to Shed Light on Word Order Variability in Sign Lan-
guage. In: Anderssen, Merete/Bentzen, Kristine/Westergaard, Marit (eds.), Variation in
the Input: Studies in the Acquisition of Word Order; Studies in Psycholinguistics, Vol. 39.
Dordrecht: Springer, 157⫺177.
Chen Pichler, Deborah
2011 Sources of Handshape Error in First-time Signers of ASL. In: Napoli, Donna Jo/Mat-
hur, Gaurav (eds.), Deaf Around the World. The Impact of Language. Oxford: Oxford
University Press, 96⫺121.
Clibbens, John/Harris, Margaret
1993 Phonological Processes and Sign Language Development. In: Messer, David/Turner,
Geoffrey (eds.), Critical Aspects of Language Acquisition and Development. London:
Macmillan/St. Martin’s Press, 197⫺208.
Coerts, Jane
2000 Early Sign Combinations in the Acquisition of Sign Language of the Netherlands: Evi-
dence for Language-specific Features. In: Chamberlain, Charlene/Morford, Jill/May-
berry, Rachel (eds.), Language Acquisition by Eye. Mahwah, NJ: Lawrence Erlbaum,
91⫺109.
Coerts, Jane/Mills, Anne
1994 Early Sign Combinations of Deaf Children in Sign Language of the Netherlands. In:
Ahlgren, Ingrid/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language
Structure: Papers from the Fifth International Symposium on Sign Language Research.
Durham, UK: ISLA, 69⫺88.
Conlin, Kim/Mirus, Gene/Mauk, Claude/Meier, Richard
2000 The Acquisition of First Signs: Place, Handshape, and Movement. In: Chamberlain,
Charlene/Morford, Jill/Mayberry, Rachel (eds.), Language Acquisition by Eye. Mah-
wah, NJ: Lawrence Erlbaum, 51⫺69.
Emmorey, Karen
2002 Language, Cognition and the Brain: Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum.
Fagard, Jacqueline
1994 Manual Strategies and Interlimb Coordination During Reaching, Grasping, and Manip-
ulating Throughout the First Year of Life. In: Swinnen, Stephan/Heuer, Herbert/Mas-
sion, Jean/Caesar, Paul (eds.), Interlimb Coordination: Neural, Dynamical, and Cognitive
Constraints. San Diego: Academic Press, 439⫺460.
Fenson, Larry/Dale, Philip/Reznick, J. Stephen/Bates, Elizabeth/Thal, Donna/Pethick, Stephen
1994 Variability in Early Communicative Development. In: Monographs of the Society for
Research in Child Development 59(5), 1⫺189.
Fischer, Susan/Janis, Wynne
1992 License to Derive: Resolving Conflicts Between Syntax and Morphology in ASL.
Manuscript.
Garcia, Joseph
1999 Sign with Your Baby: How to Communicate with Infants Before They Can Speak. Se-
attle: Northlight Communications.
Goldin-Meadow, Susan
2003 The Resilience of Language: What Gesture Creation in Deaf Children Can Tell Us About
How All Children Learn Language. New York: Psychology Press.
Hänel, Barbara
2005 The Acquisition of Agreement in DGS: Early Steps Into a Spatially Expressed Syntax.
In: Leuninger, Helen/Happ, Daniela (eds.), Gebärdensprachen: Struktur, Erwerb, Ver-
wendung (Linguistische Berichte, Special Issue 15). Hamburg: Buske, 201⫺232.
Hiatt, Susan/Campos, Joseph/Emde, Robert
1979 Facial Patterning and Infant Emotional Expression: Happiness, Surprise, and Fear. In:
Child Development 50, 1020⫺1035.
Hoffmeister, Robert
1978 Word Order in the Acquisition of ASL. Manuscript, Boston University.
Hoiting, Nini
2006 Deaf Children Are Verb Attenders: Early Sign Language Acquisition in Dutch Tod-
dlers. In: Schick, Brenda/Marschark, Marc/Spencer, Patricia Elizabeth (eds.), Advances
in Sign Language Development by Deaf Children. New York: Oxford University Press,
161⫺188.
Hoiting, Nini
2009 The Myth of Simplicity: Sign Language Acquisition by Dutch Deaf Toddlers. PhD Dis-
sertation, University of Groningen.
Holzrichter, Amanda/Meier, Richard
2000 Child-directed Signing in American Sign Language. In: Chamberlain, Charlene/Mor-
ford, Jill/Mayberry, Rachel (eds.), Language Acquisition by Eye. Mahwah, NJ: Law-
rence Erlbaum, 25⫺40.
Hulst, Harry van der
1995 Dependency Relations in the Phonological Representation of Signs. In: Bos, Heleen/
Schermer, Trude (eds.), Sign Language Research 1994: Proceedings of the 4th European
Congress on Sign Language Research. Hamburg: Signum, 11⫺38.
Jackson, Caroline
2006 Verbs in Space: On the Relationship Between the Development of Spatial Cognition and
the Acquisition of Morphosyntactic Uses of Space in Children Learning Sign Languages.
B.A. Honors Thesis, Harvard University.
Johnson, Robert E./Liddell, Scott K.
2010 Toward a Phonetic Representation of Signs: Sequentiality and Contrast. In: Sign Lan-
guage Studies 11, 241⫺274.
Juncos, Onesimo/Caamaño, Andres/Justo, M. Jose/López, Elvira/Rivas, Rosa M./Sola, M. Teresa
1997 Primeras Palabras en la Lengua de Signos Española (LSE). Estructura Formal, Semán-
tica y Contextual. In: Revista de Logopedia, Foniatría y Audiología 17, 170⫺181.
Kantor, Rebecca
1980 The Acquisition of Classifiers in American Sign Language. In: Sign Language Studies
28, 193⫺208.
Karnopp, Lodenir Becker
2002 Phonology Acquisition in Brazilian Sign Language. In: Morgan, Gary/Woll, Bencie
(eds.), Directions in Sign Language Acquisition. Amsterdam: Benjamins, 29⫺53.
Karnopp, Lodenir Becker
2008 Sign Phonology Acquisition in Brazilian Sign Language. In: Quadros, Ronice M. de
(ed.), Sign Languages: Spinning and Unraveling the Past, Present, and Future. 45 Papers
and 3 Posters from TISLR9. Petropolis, Brazil: Editora Arara Azul, 204⫺218.
Klima, Edward/Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Launer, Patricia B.
1982 “A Plane” is Not “To Fly”: Acquiring the Distinction Between Related Nouns and Verbs
in American Sign Language. PhD Dissertation, City University of New York.
Lavoie, Charlen/Villeneuve, Suzanne
1999 Acquisition du Lieu d’Articulation en Langue des Signes Québécoise: Étude de Cas.
In: Variations: Le Langage en Théorie et en Pratique. Actes du Colloque: Le Colloque
des Étudiants et Étudiantes en Sciences du Langage. Montreal: University of Quebec
at Montreal.
28. Acquisition 681

Liberman, Alvin M./Mattingly, Ignatius G.
1989 A Specialization for Speech Perception. In: Science 243, 489⫺494.
Liddell, Scott
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Lillo-Martin, Diane
1999 Modality Effects and Modularity in Language Acquisition: The Acquisition of Ameri-
can Sign Language. In: Bhatia, Tej/Ritchie, William (eds.), Handbook of Child Lan-
guage Acquisition. New York: Academic Press, 531⫺567.
Lillo-Martin, Diane
2000 Aspects of the Syntax and Acquisition of WH-Questions in American Sign Language.
In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An
Anthology in Honor of Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence
Erlbaum, 401⫺414.
Lillo-Martin, Diane/Bellugi, Ursula/Struxness, Lucinda/O’Grady, Maureen
1985 The Acquisition of Spatially Organized Syntax. In: Papers and Reports on Child Lan-
guage Development 24, 70⫺78.
Lillo-Martin, Diane/Berk, Stephanie
2003 Acquisition of Constituent Order Under Delayed Linguistic Exposure. In: Beachley,
Barbara/Brown, Amanda/Conlin, Frances (eds.), Proceedings of the 27th Annual Boston
University Conference on Language Development. Somerville, MA: Cascadilla Press,
484⫺495.
Lillo-Martin, Diane/Chen Pichler, Deborah
2006 Acquisition of Syntax in Signed Languages. In: Schick, Brenda/Marschark, Marc/Spen-
cer, Patricia Elizabeth (eds.), Advances in Sign Language Development by Deaf Chil-
dren. New York: Oxford University Press, 231⫺261.
Lillo-Martin, Diane/Quadros, Ronice M. de
2005 The Acquisition of Focus Constructions in American Sign Language and Língua de
Sinais Brasileira. In: Brugos, Alejna/Clark-Cotton, Manuella R./Ha, Seungwan (eds.),
Proceedings of the 29th Boston University Conference on Language Development. Som-
erville, MA: Cascadilla Press, 365⫺375.
Lillo-Martin, Diane/Quadros, Ronice M. de
2006 The Position of Early WH-Elements in American Sign Language and Brazilian Sign
Language. In: Deen, Kamil Ud/Nomura, Jun/Schulz, Barbara/Schwartz, Bonnie (eds.),
The Proceedings of the Inaugural Conference on Generative Approaches to Language
Acquisition (University of Connecticut Occasional Papers in Linguistics 4), 195⫺203.
Lillo-Martin, Diane/Quadros, Ronice M. de
2008 Focus Constructions in American Sign Language and Língua de Sinais Brasileira. In:
Quer, Josep (ed.), Signs of the Time: Selected Papers from TISLR 2004. Seedorf: Sig-
num, 161⫺176.
Lillo-Martin, Diane/Quadros, Ronice M. de
2011 Acquisition of the Syntax-Discourse Interface: The Expression of Point of View. In:
Lingua 121(4), 623⫺636.
Lillo-Martin, Diane/Quadros, Ronice M. de/Koulidobrova, Helen/Chen Pichler, Deborah
2009 Bimodal Bilingual Cross-language Influence in Unexpected Domains. In: Proceedings
of Generative Approaches to Language Acquisition (GALA) 2009. Newcastle upon
Tyne: Cambridge Scholars Publishing.
Lindert, Reyna
2001 Hearing Families with Deaf Children: Linguistic and Communicative Aspects of Ameri-
can Sign Language Development. PhD Dissertation, University of California, Berkeley.
Loew, Ruth
1984 Roles and Reference in American Sign Language: A Developmental Perspective. PhD
Dissertation, University of Minnesota.
MacFarlane, James
1998 From Affect to Grammar: Ritualization of Facial Affect in Signed Languages. Paper
Presented at the 6th International Conference on Theoretical Issues in Sign Language
Research, Washington, DC.
MacNeilage, Peter F./Davis, Barbara L.
1990 Acquisition of Speech Production: Achievement of Segmental Independence. In: Hard-
castle, William J./Marchal, Alain (eds.), Speech Production and Speech Modeling. Dor-
drecht: Kluwer, 55⫺68.
Maestas y Moores, Julia
1980 Early Linguistic Environment: Interaction with Deaf Parents and Their Infants. In: Sign
Language Studies 25, 1⫺13.
Mann, Wolfgang/Marshall, Chloe/Mason, Kathryn/Morgan, Gary
2010 The Acquisition of Sign Language: The Impact of Phonetic Complexity on Phonology.
In: Language Learning and Development 6, 60⫺86.
Marentette, Paula/Mayberry, Rachel
2000 Principles for an Emerging Phonological System: A Case Study of Acquisition of
American Sign Language. In: Chamberlain, Charlene/Morford, Jill/Mayberry, Rachel
(eds.), Language Acquisition by Eye. Mahwah, NJ: Lawrence Erlbaum, 51⫺69.
Marschark, Marc/Schick, Brenda/Spencer, Patricia Elizabeth
2006 Understanding Sign Language Development. In: Schick, Brenda/Marschark, Marc/
Spencer, Patricia Elizabeth (eds.), Advances in Sign Language Development by Deaf
Children. New York: Oxford University Press, 3⫺19.
Masataka, Nobuo
2000 The Role of Modality and Input in the Earliest Stage of Language Acquisition: Studies
of Japanese Sign Language. In: Chamberlain, Charlene/Morford, Jill/Mayberry, Rachel
(eds.), Language Acquisition by Eye. Mahwah, NJ: Lawrence Erlbaum, 3⫺24.
Masataka, Nobuo
2003 The Onset of Language. Cambridge: Cambridge University Press.
Matsuoka, Kazumi
1997 Verb Raising in American Sign Language. In: Lingua 103, 127⫺149.
Mayberry, Rachel/Lock, Elizabeth
2003 Age Constraints on First Versus Second Language Acquisition: Evidence for Linguistic
Plasticity and Epigenesis. In: Brain and Language 87, 369⫺383.
McIntire, Marina
1977 The Acquisition of American Sign Language Hand Configurations. In: Sign Language
Studies 16, 247⫺266.
McIntire, Marina/Reilly, Judy
1988 Nonmanual Behaviors in L1 & L2 Learners of American Sign Language. In: Sign Lan-
guage Studies 61, 351⫺375.
Meier, Richard
1982 Icons, Analogues, and Morphemes: The Acquisition of Verb Agreement in American
Sign Language. PhD Dissertation, University of California, San Diego.
Meier, Richard
2006 The Form of Early Signs: Explaining Signing Children’s Articulatory Development. In:
Schick, Brenda/Marschark, Marc/Spencer, Patricia Elizabeth (eds.), Advances in Sign
Language Development by Deaf Children. New York: Oxford University Press, 202⫺
230.
Meier, Richard/Mauk, Claude/Cheek, Adrianne/Moreland, Christopher
2008 The Form of Children’s Early Signs: Iconic or Motoric Determinants? In: Language
Learning and Development 4, 393⫺405.
Meier, Richard/Newport, Elissa
1990 Out of the Hands of Babes: On a Possible Sign Advantage in Language Acquisition.
In: Language 66, 1⫺23.
Meier, Richard/Willerman, Raquel
1995 Prelinguistic Gesture in Deaf and Hearing Infants. In: Emmorey, Karen/Reilly, Judy
(eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 391⫺409.
Mirus, Gene/Rathmann, Christian/Meier, Richard
2001 Proximalization and Distalization of Sign Movement in Adult Learners. In: Dively,
Valerie/Metzger, Melanie/Taub, Sarah/Baer, Anne Marie (eds.), Signed Languages: Dis-
coveries from International Research. Washington, DC: Gallaudet University Press,
103⫺119.
Mitchell, Ross/Karchmer, Michael
2004 Chasing the Mythical 10 %: Parental Hearing Status of Deaf and Hard of Hearing
Students in the United States. In: Sign Language Studies 4, 138⫺163.
Morgan, Gary
2006 The Development of Narrative Skills in British Sign Language. In: Schick, Brenda/
Marschark, Marc/Spencer, Patricia Elizabeth (eds.), Advances in Sign Language Devel-
opment by Deaf Children. New York: Oxford University Press, 314⫺343.
Morgan, Gary/Barrett-Jones, Sarah/Stoneham, Helen
2007 The First Signs of Language: Phonological Development in British Sign Language. In:
Applied Psycholinguistics 28, 3⫺22.
Morgan, Gary/Barrière, Isabelle/Woll, Bencie
2006 The Influence of Typology and Modality in the Acquisition of Verb Agreement in
British Sign Language. In: First Language 26, 19⫺44.
Morgan, Gary/Herman, Rosalind/Barrière, Isabelle/Woll, Bencie
2008 The Onset and Mastery of Spatial Language in Children Acquiring British Sign Lan-
guage. In: Cognitive Development 23, 1⫺19.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G.
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Nelson, Charles A.
1987 The Recognition of Facial Expression in the First Two Years of Life: Mechanisms of
Development. In: Child Development 58, 890⫺909.
Newport, Elissa
1990 Maturational Constraints on Language Learning. In: Cognitive Science 14, 11⫺28.
Newport, Elissa/Meier, Richard
1985 The Acquisition of American Sign Language. In: Slobin, Dan (ed.), The Crosslinguistic
Study of Language Acquisition. Mahwah, NJ: Lawrence Erlbaum, 881⫺938.
Newport, Elissa/Supalla, Ted
1980 The Structuring of Language: Clues from the Acquisition of Signed and Spoken Lan-
guage. In: Bellugi, Ursula/Studdert-Kennedy, Michael (eds.), Signed and Spoken Lan-
guage: Biological Constraints on Linguistic Form. Weinheim: Verlag Chemie, 187⫺212.
Oller, D. Kimbrough/Eilers, Rebecca E.
1988 The Role of Audition in Infant Babbling. In: Child Development 59, 441⫺466.
Ortega, Gerardo/Morgan, Gary
2010 Comparing Child and Adult Development of a Visual Phonological System. In: Lan-
guage, Interaction and Acquisition 1, 67⫺81.
Padden, Carol
1988 Interaction of Morphology and Syntax in American Sign Language. New York: Garland.
Padden, Carol/Perlmutter, David
1987 American Sign Language and the Architecture of Phonological Theory. In: Natural
Language and Linguistic Theory 5, 335⫺375.
Petitto, Laura Ann
1987 On the Autonomy of Language and Gesture: Evidence from the Acquisition of Per-
sonal Pronouns in American Sign Language. In: Cognition 27, 1⫺52.
Petitto, Laura Ann
2000 On the Biological Foundations of Human Language. In: Emmorey, Karen/Lane, Harlan
(eds.), The Signs of Language Revisited: An Anthology in Honor of Ursula Bellugi and
Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 447⫺471.
Petitto, Laura Ann/Katerelos, Marina/Levy, Bronna G./Gauna, Kristine/Tétrault, Karine/Fer-
raro, Vittoria
2001 Bilingual Signed and Spoken Language Acquisition from Birth: Implications for the
Mechanisms Underlying Early Bilingual Language Acquisition. In: Journal of Child
Language 28, 453⫺496.
Petitto, Laura Ann/Marentette, Paula
1991 Babbling in the Manual Mode: Evidence for the Ontogeny of Language. In: Science 251,
1493⫺1496.
Petronio, Karen/Lillo-Martin, Diane
1997 Wh-movement and the Position of Spec CP: Evidence from American Sign Language.
In: Language 73, 18⫺57.
Pfau, Roland/Quer, Josep
2010 Nonmanuals: Their Grammatical and Prosodic Roles. In: Brentari, Diane (ed.), Sign
Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press,
381⫺402.
Pfau, Roland/Steinbach, Markus
2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 3⫺98.
Pizzio, Aline Lemos
2006 Variability in Word Order in the Acquisition of Brazilian Sign Language: Constructions
with Topic and Focus. MA Thesis, Federal University of Santa Catarina (UFSC), Flori-
anópolis, Brazil.
Pizzuto, Elena
2002 The Development of Italian Sign Language (LIS) in Deaf Preschoolers. In: Morgan,
Gary/Woll, Bencie (eds.), Directions in Sign Language Acquisition. Amsterdam: Benja-
mins, 77⫺114.
Prinz, Peter M./Prinz, Elizabeth A.
1979 Simultaneous Acquisition of ASL and Spoken English in a Hearing Child of Deaf
Mother and Hearing Father: Phase I ⫺ Early Lexical Development. In: Sign Language
Studies 25, 283⫺296.
Pruss-Ramagosa, Eva
2001 EI Is Not huevo: Bilingualism in Two Sign Languages. Manuscript, University of Ham-
burg.
Quadros, Ronice M. de/Lillo-Martin, Diane
2007 Gesture and the Acquisition of Verb Agreement in Sign Languages. In: Caunt-Nulton,
Heather/Kulatilake, Samantha/Woo, I-hao (eds.), Proceedings of the 31st Annual Boston
University Conference on Language Development. Somerville, MA: Cascadilla Press,
520⫺531.
Reilly, Judy
2006 Development of Nonmanual Morphology. In: Schick, Brenda/Marschark, Marc/Spen-
cer, Patricia Elizabeth (eds.), Advances in Sign Language Development by Deaf Chil-
dren. New York: Oxford University Press, 262⫺290.
Reilly, Judy/McIntire, Marina/Bellugi, Ursula
1991 BABYFACE: A New Perspective on Universals of Language Acquisition. In: Siple,
Patricia/Fischer, Susan (eds.), Theoretical Issues in Sign Language Research: Psycholin-
guistics. Chicago: University of Chicago Press, 9⫺23.
Rosen, Russell S.
2004 Beginning L2 Production Errors in ASL Lexical Phonology. In: Sign Language Studies
7, 31⫺61.
Rosenstein, Ofra
2001 Israeli Sign Language: A Topic-prominent Language. MA Thesis, University of Haifa,
Israel.
Schick, Brenda
1990 The Effects of Morphosyntactic Structure on the Acquisition of Classifier Predicates in
ASL. In: Lucas, Ceil (ed.), Sign Language Research: Theoretical Issues. Washington,
DC: Gallaudet University Press, 358⫺374.
Schick, Brenda
2002 The Expression of Grammatical Relations by Deaf Toddlers Learning ASL. In: Mor-
gan, Gary/Woll, Bencie (eds.), Directions in Sign Language Acquisition. Amsterdam:
Benjamins, 143⫺158.
Schick, Brenda
2006 Acquiring a Visually Motivated Language. In: Schick, Brenda/Marschark, Marc/Spen-
cer, Patricia Elizabeth (eds.), Advances in the Sign Language Development of Deaf
Children. Oxford: Oxford University Press, 102⫺134.
Schick, Brenda/Gale, Elaine
1996 The Development of Syntax in Deaf Toddlers Learning ASL. Paper Presented at the
5th Conference on Theoretical Issues in Sign Language Research, Montreal, Canada.
Schick, Brenda/Marschark, Marc/Spencer, Patricia Elizabeth (eds.)
2006 Advances in the Sign Language Development of Deaf Children. Oxford: Oxford Univer-
sity Press.
Schlesinger, Hilde S./Meadow, Kathryn P.
1972 Sound and Sign: Childhood Deafness and Mental Health. Berkeley: University of Cali-
fornia Press.
Siedlecki, Theodore Jr./Bonvillian, John
1993 Location, Handshape and Movement: Young Children’s Acquisition of the Formational
Aspects of American Sign Language. In: Sign Language Studies 78, 31⫺52.
Siedlecki, Theodore Jr./Bonvillian, John
1997 Young Children’s Acquisition of the Handshape Aspect of American Sign Language:
Parental Report Findings. In: Applied Psycholinguistics 18, 17⫺31.
Slobin, Dan
1973 Cognitive Prerequisites for the Development of Grammar. In: Ferguson, Charles/
Slobin, Dan (eds.), Studies of Child Language Development. New York: Holt, Rine-
hart & Winston, 175⫺208.
Slobin, Dan
1982 Universal and Particular in the Acquisition of Language. In: Wanner, Eric/Gleitman,
Lila R. (eds.), Language Acquisition: The State of the Art. New York: Cambridge Uni-
versity Press, 128⫺172.
Slobin, Dan/Hoiting, Nini/Kuntze, Marlon/Lindert, Reyna/Weinberg, Amy/Pyers, Jennie/Anthony,
Michelle/Biederman, Yael/Thumann, Helen
2003 A Cognitive/Functional Perspective on the Acquisition of Classifiers. In: Emmorey,
Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ:
Lawrence Erlbaum, 271⫺296.
Stelt, Jeannette M. van der/Koopmans-van Beinum, Florien J.
1986 The Onset of Babbling Related to Gross Motor Development. In: Lindblom, Björn/
Zetterstrom, Rolf (eds.), Precursors of Early Speech. New York: Stockton Press, 163⫺
173.
Stokoe, William Jr./Casterline, Dorothy/Croneberg, Carl
1965 A Dictionary of American Sign Language. Washington, DC: Gallaudet College Press.
Takkinen, Ritva
2003 Variations of Handshape Features in the Acquisition Process. In: Baker, Anne/Bo-
gaerde, Beppie van den/Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign
Language Research: Selected Papers from TISLR 2000. Hamburg: Signum, 81⫺91.
Tang, Gladys/Sze, Felix/Lam, Scholastica
2007 Acquisition of Simultaneous Constructions by Deaf Children of Hong Kong Sign Lan-
guage. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simulta-
neity in Signed Languages: Form and Function. Amsterdam: Benjamins, 283⫺316.
Tetzchner, Stephen von
1984 First Signs Acquired by a Norwegian Deaf Child with Hearing Parents. In: Sign Lan-
guage Studies 44, 225⫺257.
Thelen, Esther
1981 Rhythmical Behavior in Infancy: An Ethological Perspective. In: Developmental Psy-
chology 17, 237⫺257.
Torigoe, Takashi
1994 Resumptive X Structures in Japanese Sign Language. In: Ahlgren, Ingrid/Bergman,
Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure: Papers from the
Fifth International Symposium in Sign Language Research. Durham, UK: ISLA, 187⫺
198.
Torigoe, Takashi/Takei, Wataru
2001 A Descriptive Analysis of Early Word Combinations in Deaf Children’s Signed Utter-
ance. In: Japanese Psychological Research 43, 249⫺250.
Trauner, Doris/Wulfeck, Beverly/Tallal, Paula/Hesselink, John
2000 Neurologic and MRI Profiles of Language Impaired Children. In: Developmental Medi-
cine and Child Neurology 42, 470⫺475.
Vihman, Marilyn/Macken, Marlys/Miller, Ruth/Simmons, Hazel/Miller, James
1985 From Babbling to Speech: A Reassessment of the Continuity Issue. In: Language 61,
397⫺445.
Villiers, Jill G. de/Villiers, Peter A. de
1978 Language Acquisition. Cambridge, MA: Harvard University Press.
Volterra, Virginia/Iverson, Jana
1995 When Do Modality Factors Affect the Course of Language Acquisition? In: Emmorey,
Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erl-
baum, 371⫺390.
Woll, Bencie
2010 Early Spoken and Sign Language Acquisition. Paper Presented at the 10th International
Conference on Theoretical Issues in Sign Language Research, West Lafayette, IN.
Woolfe, Tyron/Herman, Rosalind/Roy, Penny/Woll, Bencie
2010 Early Lexical Development in Native Signers: A BSL Adaptation of the MacArthur-
Bates CDI. In: Journal of Child Psychology and Psychiatry 51, 322⫺331.
Zeshan, Ulrike
2004 Interrogative Constructions in Signed Languages: Cross-linguistic Perspectives. In: Lan-
guage 80, 7⫺39.

Deborah Chen Pichler, Washington, DC (USA)


29. Processing
1. Introduction
2. Mental representation of sign languages
3. Working memory
4. Lexical access
5. Exploiting the visual modality
6. Conclusion
7. Literature

Abstract
This chapter aims to provide an introduction and summary of the key findings in the
field of sign language processing. A discussion of two main strands of research ⫺ exami-
nations of working memory processes that subserve language comprehension (and pre-
sumably production) and studies of lexical access ⫺ forms the core of the chapter. These
studies have in common the aim of determining how sign languages are represented in
the human language system, with lexical access studies also seeking to understand how
those representations are used to access long-term language representations. The first
part of this chapter will argue that combinations of location and movement properties
are the most psychologically salient aspects of the sign language signal, with working
memory representations being based upon a spatio-temporal code that exploits such a
signal. Drawing upon studies of lexical access and production in the sign language litera-
ture, the second part of this chapter will posit similarities and possible differences be-
tween spoken and sign language processing. In the third part of this chapter, two particu-
lar aspects will be explored and their implications for models of sign language
processing discussed.

1. Introduction
Language processing refers to the means by which a natural language is comprehended
and produced by language users. While much psychological research has been devoted
to understanding how we process spoken languages, there has been relatively little
research looking at how sign languages are understood and produced. In terms of sign
language comprehension, there have been two main strands of research ⫺ examina-
tions of working memory processes that subserve language comprehension (and pre-
sumably production) and studies of lexical access. These studies have in common the
aim of determining how sign languages are represented in the human language system,
with lexical access studies also seeking to understand how those representations are
used to access long-term language representations. Studies of sign language production
are much less frequent, and have centered in the main on production errors and what
those signal about how sign languages are processed (see chapter 30 for a detailed
discussion of production errors).
Like all natural languages, sign languages provide a means of transmitting informa-
tion from one language user to another. Thoughts, intentions, and desires can be turned
into externally observable linguistic utterances, which are then produced by the arms,
hands, face, and body of the signer ⫺ sign language production. These utterances are
then interpreted by the addressee, and the communicative intent and message of the
signer are inferred ⫺ sign language comprehension. Whereas sign language linguistics
is concerned with the linguistic structure of sign languages and the linguistic structures
that are used to convey meaning, sign language psycholinguistics is concerned with the
mental representations and processes that allow this to occur. That is, psycholinguists
seek to determine the mental representations and processes that subserve language
comprehension and production.
There is now increasing evidence that the formational parameters of sign languages
(handshape, location, movement, orientation) have a psychological reality in that they
can be used to explain how language is maintained and manipulated in working mem-
ory, and how those working memory representations interface with longer-term lexical
representations. The first part of this chapter will argue that combinations of location
and movement properties are the most psychologically salient aspects of the sign lan-
guage signal, with working memory representations being based upon a spatio-tempo-
ral code that exploits such a signal. Evidence from similarity judgment and lexical
access studies will also be put forward in support of this hypothesis, and studies where
the data seem to be at odds will be noted. Alongside an account of the representations
used for sign language processing, an account of the mechanisms that utilize those
representations in the service of language comprehension and production is required.
Drawing upon studies of lexical access and production in the sign language literature,
the second part of this chapter will posit similarities and possible differences between
spoken and sign language processing. Sign languages differ fundamentally from spoken
languages in the way that they exploit the visual medium. In the third part of this
chapter, two particular aspects will be explored and their implications for models of
sign language processing discussed. The first concerns iconicity and the ability of sign
languages to produce gestures that bear a relationship to real-world referents. This is
a controversial topic, and both sides of the debate will be explored. While conclusions
may be hard to draw, it will hopefully become clear that what is meant by iconicity is
fundamental, and the requirements for demonstrating its role in sign language process-
ing will be spelled out. The second aspect relates to the productive lexicon of sign
languages, and also touches upon issues of iconicity and real-world representation. We
will ask whether or not the productive lexicon of sign languages ⫺ the ability to gener-
ate novel linguistic forms ‘on line’ ⫺ requires a different kind of explanation from that
proposed for spoken languages. The chapter will conclude by summarizing what is
known and unknown about sign language processing, and by suggesting where compar-
isons to spoken language models are fruitful and where it may be necessary to go
beyond those models in order to further our understanding of sign languages and,
indeed, language processing in general.

2. Mental representation of sign languages


In order for a signer to understand a sign language utterance, the signs that they ob-
serve must somehow be encoded in a form that allows the signer to match the input
with a long-term representation of a sign and thus have access to its meaning. Signs
are formed from complex movements of the hands and arms, along with facial expres-
sions and positioning of the signer’s body. Different signs ⫺ each with their own mean-
ings ⫺ are created by different combinations of these movements. Furthermore, the
same sign produced by different signers ⫺ and by the same signer at different points in
time ⫺ will rarely take exactly the same form. The problem faced by the sign language
processing system is therefore to extract the relevant aspects of the sign input that are
required to access the intended meaning of the sign. The seminal work of William
Stokoe and colleagues (Stokoe/Casterline/Croneberg 1976) suggested that signs are
not pantomimic in nature, but are constituted of rule-governed combinations of basic
building blocks. In spoken languages, such building blocks are termed phonemes ⫺ in
sign languages, the term chereme proposed by Stokoe was originally used, but has now
been largely replaced by the term formational parameter. Stokoe’s major contribution
to sign language studies was to demonstrate that the words of a sign language were
conventionalized and consistent combinations of such formational parameters, notably
handshape, location, movement, and orientation. For example, the American Sign Lan-
guage (ASL) sign please consists of an open B-handshape that is oriented such that
the palm faces the body of the signer and then moves in a circular motion that is
located in front of the signer’s chest (see Figure 29.1). This sign may be articulated
slightly differently each time it is produced, but it will always contain these formational
parameters. Any change in these parameters will result in a sign with a different mean-
ing. For example, switching the B-handshape to an A-handshape, but keeping all of the
other parameters the same, results in the ASL sign sorry (see Figure 29.1). By looking
for such minimal pairs, Stokoe and colleagues were able to isolate the formational
parameters of ASL in much the same way that phonemes are identified in spoken lan-
guages.

Fig. 29.1: An example of a minimal pair in ASL. The signs please and sorry share the same
location and movement, but differ in terms of the handshape parameter. Changing the
B-handshape to an A-handshape results in a change in meaning, whereas the handshapes
themselves are meaningless in that they do not carry any semantic information.
A seminal set of studies reported by Klima and Bellugi (1979) sought to determine
whether or not these formational parameters played a role in the mental representation
of sign language utterances. That is, they asked whether or not the language processing
system extracted these formational parameters from the visual input and used them to
mentally represent the sign that had been observed. The motivation for doing so
stemmed from research into short-term memory in users of spoken languages. The
work of Baddeley (1966) had proposed that the words of spoken language
were not initially stored in terms of their meaning, but rather in terms of how they
sounded. These sound-based representations were subsequently used to access the
meaning of the spoken words (a process termed lexical access). This sound-based code
for spoken words was proposed due to an effect called the phonological similarity effect
(Baddeley/Lewis/Vallar 1984), the observation that lists of spoken words that sound
alike are harder to recall than lists of words that sound dissimilar. The effect is typically
observed in a serial, ordered recall task. In this task, individuals are presented with
lists of words that they must attend to, remember, and then recall in the same order
as they were presented. An example of a similar list in spoken English would be man-
cap-ran-can-rap-map. Such lists often result in poorer performance than dissimilar
sounding lists, with errors such as word transpositions (getting the words right, but in
the wrong order) and sound-based confusions (substituting incorrect phonemes). In
one of their studies, Klima and Bellugi (1979) created lists of signs for serial, ordered
recall that varied internally in terms of their formational similarity. They wanted to
know whether or not signers of ASL represented signs in terms of formational param-
eters (handshape, location, movement, orientation). Lists of signs presented for recall
contained either signs that shared many formational parameters (similar lists) or signs
that shared very few formational parameters (dissimilar lists). Examples of the sign
sets that they used to create the similar lists are given in Figure 29.2.
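
The scoring logic of such serial, ordered recall tasks can be sketched in a few lines of
Python. The English word lists and the error classification below are illustrative
assumptions, not Klima and Bellugi's actual materials or scoring procedure:

```python
def score_serial_recall(presented, recalled):
    """Strict positional scoring: an item counts as correct only if it
    is recalled in the same position in which it was presented."""
    return sum(p == r for p, r in zip(presented, recalled)) / len(presented)

def transpositions(presented, recalled):
    """Items that were recalled, but in the wrong serial position."""
    return [r for i, r in enumerate(recalled)
            if r in presented and presented[i] != r]

# Hypothetical similar-sounding English list (cf. man-cap-ran-can-rap-map).
presented = ["man", "cap", "ran", "can", "rap", "map"]
recalled = ["man", "can", "ran", "cap", "rap", "map"]  # two items swapped

print(score_serial_recall(presented, recalled))  # 0.666..., i.e. 4/6 correct
print(transpositions(presented, recalled))       # ['can', 'cap']
```

Under such strict positional scoring, transposition errors are costly, which is precisely
why lists of confusable items depress performance in serial, ordered recall.
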
When tested on their recall of these signs, deaf signers performed significantly worse
for lists of signs that shared formational parameters (set 1: 60.6 % vs. 45.7 %; set 2:
69.7 % vs. 53.3 %; set 3: 77.4 % vs. 57.2 %). Klima and Bellugi went on to conduct
another study in which the similarity of signs within a list was more specific. In order
to determine which formational parameter was most salient, they constructed lists that
were similar in terms of only handshape, only location, or only movement. Examples
taken from their study are given in Figure 29.3.
Klima and Bellugi analyzed their data by looking at the probability of recalling a
specific sign when it occurred in a formationally similar list compared to when that
same sign occurred in a formationally dissimilar list. Their data suggested that when a
sign appeared in a list of signs which shared the same handshape it significantly in-
creased the probability of recall (by 9 %). There was no effect of sign movement, but
when a sign appeared in the context of other signs sharing its location there was a
significant decrement in recall success (by 14 %). In a serial, ordered recall task, simi-
larity of representation is thought to be the leading factor in impairing performance.
As a result, Klima and Bellugi concluded that a sign’s production location was likely
the key parameter in the representation of signs in short-term memory, whereas shared
handshape across signs within a list may have provided a cue to aid recall or decide
between competing responses. To further support their conclusions, Klima and Bellugi
reported examples of the errors made by hearing and deaf participants in their studies.

Fig. 29.2: A reproduction of three formationally similar sign sets reported in Klima and Bellugi
(1979). The signs within a set (arranged in columns) overlap in terms of their forma-
tional parameters. Note: The lexical choices used to select signs here may result in some
differences from the sign forms utilized by Klima and Bellugi (1979).

Fig. 29.3: A reproduction of formationally similar sign sets reported in Klima and Bellugi (1979).
The signs within each column share a formational parameter: the M-handshape is the
common denominator in the first column; the right cheek is a common location across
signs in the second column; and a downward movement of the dominant hand is com-
mon to the signs in the third column. Note: The lexical choices used to select signs here
may result in some differences from the sign forms utilized by Klima and Bellugi (1979).

Whereas hearing speakers often made sound-based errors (recalling /vote/ as /boat/ for
example) this was not typical of the errors made by deaf signers, suggesting that they
were not systematically recoding ASL signs into a sound-based English code. Rather,
the deaf signers made errors related to the formational properties of the signs them-
selves, for example recalling vote as tea. As shown in Figure 29.4, the ASL signs vote
and tea have a high formational similarity. They are distinguishable by the manner of
the movement used to create the sign ⫺ vote uses a short, downward path motion
toward the base hand, whereas tea has a small, circling motion near to the base hand.
Fig. 29.4: Recall errors made by deaf signers commonly reflect the formational nature of signs.
For example, the sign vote may be recalled as tea. These signs are dissimilar in terms
of their meaning. However, they are highly similar formationally, differing only in terms
of the movement pattern of the dominant hand.

The work of Klima and Bellugi laid the groundwork for further psycholinguistic
studies of sign languages. They opened up the possibility that deaf people ‘think in
sign’ and were able to represent the world internally in terms of the formational prop-
erties of sign languages.

3. Working memory
More recent work by Karen Emmorey and colleagues built upon a model of short-
term memory for hearing speakers ⫺ Baddeley and Hitch’s (1974) working memory
model. In doing so, they moved beyond a consideration of the nature of the mental
representation of sign languages and sought to explain how those representations were
stored and utilized within the mental systems of signers.

3.1. A model of working memory

Baddeley and Hitch (1974) had proposed that working memory consisted of a central
executive linked to two ‘slave’ systems that held representations required by the central
executive for brief periods of time. For example, given the task of listening to a string
of digits and then reporting their sum, there would be a short-term representation of
the digits in a slave system that could be read off as needed by the central executive
as it performed the additions and calculated that sum. Baddeley and Hitch proposed
two such slave systems, one for verbal information (the phonological loop) and one
for visuo-spatial information (the visuo-spatial sketchpad). Emmorey and colleagues
sought to determine whether this model would hold for deaf signers as well as for
hearing speakers, and in a series of papers they sought to establish the same experimen-
tal effects with ASL materials that Baddeley and Hitch had obtained in support for
their model of the phonological loop with spoken English. These effects are termed
the phonological similarity effect, articulatory suppression effect, word length effect,
and irrelevant speech effect.
The working memory model as originally proposed (Baddeley/Hitch 1974) con-
tained a phonological loop responsible for the temporary maintenance and rehearsal
of verbal information. This loop had two sub-components ⫺ a short-term buffer and
an articulatory rehearsal routine. The model proposed that verbal information gained
automatic access to the short-term buffer where it was stored in a sound-based code.
Information in this buffer would decay over time, however, unless the articulatory
rehearsal routine (the ‘inner voice’) was used to covertly rehearse the information. This
articulatory rehearsal routine also served another function ⫺ it could recode visual
information into a sound-based code and allow it to be stored in the short-term buffer.
Presentation of visual digits (1 4 3 9 5) would result in the observer using their articula-
tory rehearsal routine to covertly sound out the digits, which would then be stored in
a sound-based code (/one/ /four/ /three/ /nine/ /five/). Evidence for this model of the
phonological loop came from the four experimental effects listed above. These effects
will be considered in relation to sign language in sections 3.2 to 3.6. In section 3.7,
the role of spatial coding in articulatory rehearsal will be addressed, and in section 3.8,
memory span will be discussed.
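
The division of labor between the short-term buffer and the articulatory rehearsal
routine can be made concrete with a toy simulation. The decay rate, refresh level, and
threshold below are arbitrary illustrative values, not parameters of Baddeley and
Hitch's model; setting rehearse=False corresponds loosely to the articulatory
suppression manipulation discussed in section 3.3:

```python
DECAY_PER_STEP = 0.2  # activation lost per time step without refresh (assumed)
REFRESH_LEVEL = 1.0   # activation restored by one cycle of covert rehearsal
THRESHOLD = 0.25      # items below this activation can no longer be recalled

def items_surviving(n_items, n_steps, rehearse=True):
    """Toy phonological loop: all items decay on each step; the 'inner
    voice' refreshes one item per step, cycling through the list."""
    activations = [1.0] * n_items
    for step in range(n_steps):
        activations = [a - DECAY_PER_STEP for a in activations]
        if rehearse:
            activations[step % n_items] = REFRESH_LEVEL
    return sum(a > THRESHOLD for a in activations)

print(items_surviving(4, 10, rehearse=True))   # 4: rehearsal keeps items alive
print(items_surviving(4, 10, rehearse=False))  # 0: unrehearsed items decay away
```
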

3.2. Phonological similarity effect


The phonological similarity effect refers to the observation that lists of similar-sound-
ing words are harder to recall than control lists of words that sound dissimilar to
one another (Baddeley 1966). This is proposed to reflect sound-based coding in one
component of the phonological loop ⫺ the short-term buffer. Representations stored
in this buffer are subject to decay over time, with the result that similar representations
will become confused and errors become more likely, especially in serial, ordered recall
tasks. This effect had already been demonstrated using ASL materials by Klima and
Bellugi (1979, see above), with a study by Poizner, Bellugi, and Tweney (1981) demon-
strating that semantic similarity does not produce the same effect. Wilson and Emmo-
rey (1997) replicated this effect using different materials and different participants, and
the effect has been reported by others for both ASL users and users of other sign
languages (see Kyle/Woll (1985) for British Sign Language (BSL)). There is now abun-
dant evidence that signers are capable of storing sign language information in a sign-
based code in their short-term memory. There is, however, a lack of consensus on
exactly how signs are to be coded in terms of their formational parameters and there-
fore what constitutes ‘similar’ in a sign language. This is an issue that will be returned
to later in the chapter.

3.3. Articulatory suppression effect


The articulatory suppression effect is the reduction in the number of items that can be
correctly recalled when the operation of the articulatory rehearsal routine is inter-
rupted. By asking participants to articulate a simple sound repeatedly, Baddeley pro-
posed that the articulatory rehearsal loop is occupied and unavailable to rehearse items
in the short-term buffer. As a result, representations decay and the number of items
that can be recalled without error decreases. This is indeed what resulted from asking
participants to engage in the articulatory suppression task (Baddeley/Lewis/Vallar
1984).
In addition to demonstrating a phonological similarity effect using their materials
and participants, Wilson and Emmorey (1997) showed that manual articulatory sup-
pression reduced recall for deaf signers also. They asked deaf signers to perform a
simple hand movement during presentation of stimulus materials (a repeated alterna-
tion between the ASL signs eight and world). This resulted in fewer signs being
recalled correctly and, importantly, the phonological similarity effect persisted. That is,
despite occupying the articulatory rehearsal routine with the manual suppression task,
it was still harder for deaf individuals to recall lists of formationally similar signs as
compared to lists of formationally dissimilar signs. In the context of the model of the
phonological loop, this is evidence that sign input has automatic access to the short-
term buffer (the locus of the phonological similarity effect) and does not need to be
recoded in any way by the articulatory rehearsal routine.

3.4. Interaction of phonological similarity and articulatory suppression

Baddeley, Lewis, and Vallar (1984) showed that the phonological similarity effect and
the articulatory suppression effect do interact when pictorial stimuli are used. Typically,
when presented with images that can be recoded into a sound-based form, this is pre-
cisely what people do and as a result phonological similarity effects can be observed.
Wilson and Emmorey (1997) presented deaf signers with sequences of pictures of ob-
jects that could be named straightforwardly in ASL. For some sequences, the corre-
sponding ASL signs were formationally similar, whereas for others, they were forma-
tionally dissimilar. In the absence of a manual suppression task (eight-world) this
resulted in a phonological similarity effect being observed. However, when the sup-
pression task was performed concurrently with stimulus presentation, the size of the
phonological similarity effect was diminished. This mirrored findings in the spoken
language literature (Salamé/Baddeley 1982), and is understood to reflect the need to
employ the articulatory rehearsal routine in order to recode pictorial information into
a sound-based (or sign-based) code.

3.5. Word length effect

The word length effect is thought to reflect the capacity of the articulatory rehearsal
routine. Simply stated, the longer it takes to articulate a list of words, the fewer the
number of words successfully recalled. Longer words occupy the rehearsal routine to
a greater extent, and thus the limits of the rehearsal system are reached with fewer
stimuli. Baddeley, Thomson, and Buchanan (1975) demonstrated that this was the case
for spoken word lists where lists contained the same number of words but differed in
the time required to articulate them. Furthermore, the word length effect was attenu-
ated by using a concurrent articulatory suppression task regardless of whether materi-
als were presented in speech or visually (Baddeley/Lewis/Vallar 1984). This suggests
that the effect reflects a limitation imposed by the articulatory rehearsal routine itself.
Wilson and Emmorey (1998) presented participants with lists of signs that had either
long path movements (piano, bicycle) or short hand-internal movements (typewriter,
milk). The signs with long path movements took more time to articulate than those
with short, hand-internal movements. In the absence of a suppression task, more signs
were correctly recalled from the ‘short’ lists than from the ‘long’ lists (approx. 60 %
vs. 50 % of items recalled correctly). However, when participants performed the sup-
pression task during stimulus presentation, overall performance decreased and no dif-
ferences in recall performance were observed between the two list types (approx. 40 %
vs. 45 %). Again, as for speech, this suggests that the articulatory rehearsal routine has
a limited capacity determined by the time required to articulate material stored in the
short-term buffer. For spoken language, Baddeley, Thomson, and Buchanan (1975)
demonstrated that this reflected articulation time and not number of syllables, a claim
substantiated for sign language by Kyle (1986) in BSL using the syllabification ap-
proach for sign languages proposed by Liddell and Johnson (1989).
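
The capacity logic behind the word length effect lends itself to back-of-the-envelope
arithmetic. The roughly two-second rehearsal window and the per-item articulation
times below are assumptions made for illustration, not measurements from Wilson and
Emmorey (1998):

```python
REHEARSAL_WINDOW = 2.0  # seconds of material the loop can keep refreshed (assumed)

def predicted_span(seconds_per_item):
    """Predicted span falls as articulation time per item rises."""
    return int(REHEARSAL_WINDOW / seconds_per_item)

print(predicted_span(0.33))  # short, hand-internal movements -> span of 6
print(predicted_span(0.50))  # long path movements -> span of 4
```

On this view, articulatory suppression abolishes the list-length difference because
neither list type is being refreshed at all.
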

3.6. Irrelevant speech effect

It was mentioned previously that speech is given automatic access to the short-term
store within the Baddeley and Hitch working memory model. Evidence for this comes
from the irrelevant speech effect. Salamé and Baddeley (1982) showed that listening
to irrelevant speech while rehearsing a to-be-remembered list of words results in im-
paired recall. The irrelevant speech automatically enters the short-term buffer and
irrelevant items become confused with relevant list items. Concurrent presentation of
irrelevant signs is problematic as it causes an overt attentional confound ⫺ does the
participant look at the to-be-remembered signs or at the ‘irrelevant’ signs? Wilson and
Emmorey (2003) got around this confound by presenting deaf signers with lists of to-
be-remembered signs, which had to be retained for a 12-second retention period. Dur-
ing this retention period, participants either rehearsed the signs covertly (baseline),
viewed rotating shapes, or viewed a deaf person signing pseudo-signs (signs that do
not exist in the lexicon of ASL but are structurally legal in that they contain valid ASL
formational parameters combined in ways that conform to the sign formation rules of
ASL). Baseline performance was 61.3 % recall accuracy, with both shapes (54.5 %)
and pseudo-signs (49.3 %) resulting in impaired recall when presented during the list
retention period. Thus it appears that visual information ⫺ and particularly visual
linguistic (sign) information ⫺ has privileged and automatic access to the short-term
buffer in deaf signers.
Taken together, these data suggest that the same effects used to support the Badde-
ley and Hitch working memory model obtained in spoken language studies are also
evident in studies of sign language processing. Deaf people are able to represent sign
language in a visual, sign-based code, and rehearse that information using a covert
articulatory process. However, this is not the whole story, with there being some impor-
tant differences between working memory for speech and sign.
3.7. Spatial coding

An interesting study by Wilson and Emmorey (2001) looked at the spatial nature of
articulatory rehearsal of signs. The experimenters created lists of signs that differed in
whether or not they were articulated in the ‘neutral space’ in front of a signer’s body
or required articulation upon a body part. Take, for example, the signs clown and eye
shown in Figure 29.5. These two signs share the same handshape and movement pat-
tern but differ in the location at which they are articulated. If both signs are displaced
to a neutral location in front of the signer then they become indistinguishable. Wilson
and Emmorey reasoned that if signers displaced to-be-remembered signs into distinct
spatial locations in order to aid serial, ordered recall, then lists containing such signs
would be harder to recall. The motivation for this hypothesis was that while speech is
auditory and suited to serial representation and thus recall, sign is inherently spatial
and contains significant amounts of simultaneity. In other words, sign languages are
not suited to serial, ordered recall tasks, and signers must find additional mechanisms
that will allow them to perform such tasks. Displacing signs into serial, ordered loca-
tions in front of the signer’s body is one way in which this could be achieved. The data
obtained from deaf signers suggested that this type of spatial coding may indeed have
been exploited. Recall for lists of signs which required articulation on the signer’s body
was worse than for those that were normally articulated in neutral space. Displacement
of body-located signs to neutral space would have stripped them of their location speci-
fication during covert rehearsal, resulting in several potential ‘matches’ for each re-
hearsed sign at the time of recall.

Fig. 29.5: On the left are the BSL signs clown and eye. These signs share the same handshape
and movement, differing only in terms of their location on the body. Thus, when dis-
placed to neutral signing space in front of the signer (see image on right), the two signs
are indistinguishable. Wilson and Emmorey (2001) used the fact that this affects signs
with location parameters on the body, but not signs articulated in neutral signing space,
to investigate spatial coding in deaf signers of ASL.
3.8. Memory span

Another way in which memory for signs has been shown to differ from speech is in
terms of ‘memory span’. Span is a term used to refer to the maximum number of items
that can be encoded, rehearsed, and then recalled in correct serial order. A common
way of measuring short-term memory span is the ‘digit span’ procedure. In this task,
participants are presented with lists of digits that increase in length from trial to trial.
In this way, the maximum list length that an individual can recall in correct serial order
can be measured. This ‘span’ measurement is taken to be an index of the (limited)
capacity of the short-term memory system. A seminal study by Miller (1956) suggested
that the capacity of short-term memory for speakers of English is 7±2 items (often
termed Miller’s Magical Number 7).
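
The procedure itself is straightforward to state as runnable pseudocode. The respond
callback below is a hypothetical stand-in for a participant's recall on each trial:

```python
import random

def digit_span(respond, max_len=12, seed=0):
    """Present digit lists of increasing length and return the longest
    list recalled in correct serial order. `respond` is a hypothetical
    callback mapping a presented list to the participant's recall."""
    rng = random.Random(seed)
    span = 0
    for length in range(2, max_len + 1):
        trial = [rng.randint(0, 9) for _ in range(length)]
        if respond(trial) == trial:  # strict serial-order criterion
            span = length
        else:
            break
    return span

# A simulated participant who holds at most seven items in order.
print(digit_span(lambda lst: lst if len(lst) <= 7 else []))  # -> 7
```
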
There are now several studies to suggest that the digit span in ASL for deaf individ-
uals is significantly less than this, typically around 5±1 items (Bavelier et al. 2006,
2008; Boutla et al. 2004). These studies have typically compared spoken English digit
span with signed ASL letter span, because the digit signs of ASL have very high phono-
logical similarity (see above). Importantly, studies with hearing ASL-English bilinguals
have shown that their span for ASL letters is lower than their span for English digits
(Boutla et al. 2004) ruling out the possibility that the difference is due to deafness per
se. Furthermore, Boutla et al. (2004) ruled out explanations in terms of articulation
time ⫺ the recall rate of ASL items was equivalent to that of spoken English items,
and speeded naming of ASL letters (in sign) and English digits (in speech) was also
reported to be equivalent. Emmorey and Wilson (2004) suggested one explanation
may be that digits are somehow special, and processed in a different way to letters, thus
making the comparison of letter spans in ASL with digit spans in English problematic.
However, follow-up studies by Bavelier and colleagues (2006, 2008) compared English
letter span with ASL letter span using stimuli carefully selected to minimize phonologi-
cal similarity. They reported that the span difference between ASL and English per-
sisted (for studies involving Swedish Sign Language, see Rönnberg/Rudner/Ingvar
(2004) and Rudner/Rönnberg (2008); for studies on Italian Sign Language, see Geraci
et al. (2008) and Gozzi et al. (2010)).
It is important to note that this difference is observed in a serial, ordered recall
task. Bavelier et al. (2008) have suggested that this may be due to a bias towards serial
coding of information in the auditory domain (spoken English) which is replaced by a
spatial bias in the visual domain (signed ASL). This brings us back to the issue of
spatial coding discussed above. Bavelier et al. presented hearing ASL-English bilin-
guals with supra-span lists of letters that could be recalled in any order (free recall, as
opposed to ordered recall). Here they observed that the number of items recalled in
ASL was indistinguishable from that recalled in English (around ten items). However,
the maintenance of the original serial order varied as a function of language of presen-
tation within the same set of bilingual participants. ASL-English bilinguals were more
likely to preserve the serial order of the presentation list in their free recall for spoken
English lists than for signed ASL lists. They also demonstrated that, contrary to the
results reported by Wilson et al. (1997) in 10-year-old deaf children, deaf adults were
not at an advantage when asked to recall items in reverse serial order. In such a task,
a presented sequence such as m-k-g-l should be recalled as l-g-k-m. An advantage for
deaf children had originally been attributed to the ability to make use of spatial coding
to maximize backward recall. Bavelier et al. (2008) pointed out that this task still
requires processing of the serial, temporal sequence of the items, and that the data from
adults were consistent with this being problematic in ASL relative to spoken English.
These studies of digit and letter spans are important, as such tasks are often em-
ployed in neuropsychological assessments and instruments (such as the Wechsler Adult
Intelligence Scale). However, their importance for understanding sign language proc-
essing in everyday life may be substantially less. Boutla et al. (2004) reported data
from a ‘working memory span’ task that is proposed to better approximate the capacity
of short-term memory as it relates to actual language processing (Daneman/Merikle
1996). In this task, deaf signers and hearing speakers were given lists of words that
were to be recalled (in any order). At the time of recall, however, participants were
required to embed each list word in a separate sentence. Thus, given the sequence
bird-library-car, a correct response would be “A bird flew around the building”,
“I drove my car to the movie theater”, “Books can be read for free at the local library”.
Despite differences in digit span, deaf and hearing individuals performed equally well
on this ‘working memory span’ task (around two to three items correct on average).
These data suggest that when it comes to using working memory in the service of lan-
guage comprehension and production, there may be few if any capacity differences
between ASL signers and English speakers.

4. Lexical access

The studies reported above suggest that the formational properties of sign languages
have psychological validity ⫺ not only do they provide linguists with a way of under-
standing the formal structures of sign languages, they also appear to play a role in the
mental representation of language and the short-term memory capabilities of signers.
Researchers have also asked whether or not these formational parameters are involved
in lexical access. Lexical access is the mental process by which an observed sign is
mapped onto long-term representations of the relevant language and the meaning of
the sign is thereby accessed (along with other explicit information such as grammatical
category). Models of lexical access for spoken languages, such as cohort theory (Mar-
slen-Wilson/Welsh 1978) and the neighborhood activation model (Goldinger/Luce/Pis-
oni 1989), are intended to explain how a phonetic input is recognized as a token of a
lexical item (a specific word will have different phonetic productions at different times
and by different speakers) and how stored information about that item is retrieved.
Although contemporary models differ in certain respects, in others they are in broad
agreement. They all conceive of lexical recognition as the summation of evidence for
the hypothesis that a given phonetic input is a token of a certain lexical item. This
evidence is usually instantiated as activation of a node representing a lexical item, with
that activation being derived from bottom-up (phonetic) and top-down (contextual)
information. The initial stage of lexical access is feature extraction, with phonetic fea-
tures in the input stream being extracted. If a lexical node is associated with the input
features, then it receives activation; if not, then its activation is attenuated. Once a node
reaches some critical ‘threshold’ level of activation, then a long-term representation
corresponding to the phonetic input has been ‘accessed’. It is helpful to consider lexical
access as a two-stage process. The first stage, as indicated above, corresponds to detec-
tion of the formational parameters in the sign being observed. This can be thought of
as a pre-lexical stage of processing. At this stage, the system is not concerned with the
lexical status of the sign itself, but focuses upon the formational parameters of which
the input is constituted. For example, given the BSL sign afternoon, the formational
parameters for the R-handshape (index and middle finger extended and parallel), the
chin location, and the short forward path movement would be extracted by the visual
system and pre-lexical representations of these formational parameters would become
activated. At the second, lexical, stage of processing, these pre-lexical representations
would in turn activate representations of the signs that possess those parameters. Pre-
sumably, the representation of the sign afternoon would receive the most activation
and become the ‘winner’ of this processing stage, and the sign would be recognized.
The process of ‘winning’ this competition can be thought of as a product of facilitation
of activation by pre-lexical representations, as well as inhibition of competitors at the
lexical level. That is, as the lexical representation of afternoon receives increasing
amounts of activation from pre-lexical representations, it starts to inhibit the activation
of other lexical representations in what is termed a ‘winner-takes-all’ process.
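
This two-stage, competitive account can be caricatured in code. The miniature lexicon,
the parameter labels (drawn loosely from the afternoon example above), and the
excitation and inhibition weights are all invented for illustration; the update rule is a
generic interactive-activation scheme rather than a model proposed in the sign
language literature:

```python
# Invented miniature lexicon: each sign is a set of formational parameters.
LEXICON = {
    "afternoon": {"loc:chin", "hs:index-middle", "mov:short-forward"},
    "sign-a":    {"loc:chin", "hs:fist", "mov:circle"},
    "sign-b":    {"loc:chest", "hs:index-middle", "mov:short-forward"},
}
EXCITE, INHIBIT, THRESHOLD = 0.3, 0.1, 1.2  # invented weights

def recognize(input_params, max_steps=20):
    """Pre-lexical parameter nodes excite matching lexical nodes; the
    current leader inhibits its rivals (winner-takes-all)."""
    act = {sign: 0.0 for sign in LEXICON}
    for _ in range(max_steps):
        for sign, params in LEXICON.items():
            act[sign] += EXCITE * len(params & input_params)  # bottom-up support
        leader = max(act, key=act.get)
        for sign in act:                                      # lateral inhibition
            if sign != leader:
                act[sign] = max(0.0, act[sign] - INHIBIT * act[leader])
        if act[leader] >= THRESHOLD:
            return leader
    return None

print(recognize({"loc:chin", "hs:index-middle", "mov:short-forward"}))  # afternoon
```
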

4.1. Gating studies


A study by Emmorey and Corina (1990) used a ‘gating’ procedure to determine which
formational parameters of signs were extracted by signers, and in what order. In a gating
study, a stimulus (here an individual sign) is split up into several segments. Emmorey and
Corina divided each of their signs into a series of such segments. Participants are shown the initial seg-
ment and asked to guess what the stimulus is. The second segment is then added to that
initial segment, and the stimulus is presented again. Once again, the participant makes
a guess as to what the stimulus might be. This procedure is repeated until either (a)
the entire stimulus has been presented, or (b) the participant correctly determines the
identity of the stimulus. By looking at the pattern of guesses as the stimulus unfolds,
it is possible to draw conclusions about which parameters are extracted and in which
order. On the basis of such patterning, Emmorey and Corina concluded that location
was the first parameter isolated, as evidenced by initial incorrect guesses sharing this
parameter with the target sign. This was followed by handshape, restricting the range
of guesses to those sharing location and handshape with the target, and finally the
identification of the sign’s movement leading to correct identification of the target sign.
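
The cohort-narrowing logic that the gating paradigm probes can be illustrated directly.
The miniature lexicon below is invented (only the parameters given for afternoon
above are taken from the text); the point is simply that each successive parameter,
arriving in the order location-handshape-movement, prunes the set of viable candidates:

```python
# Invented miniature lexicon: sign -> (location, handshape, movement).
LEXICON = {
    "afternoon": ("chin", "index-middle", "short-forward"),
    "sign-a":    ("chin", "index-middle", "twist"),  # hypothetical rivals
    "sign-b":    ("chin", "fist", "brush"),
    "sign-c":    ("chest", "spread", "tremble"),
}

def gate(observed):
    """Prune the cohort as each parameter becomes visible, in the order
    in which Emmorey and Corina found them to be extracted."""
    cohort = set(LEXICON)
    for i, parameter in enumerate(observed):  # location, handshape, movement
        cohort = {s for s in cohort if LEXICON[s][i] == parameter}
        print(f"after parameter {i + 1}: {sorted(cohort)}")
    return cohort

gate(("chin", "index-middle", "short-forward"))
# after parameter 1: ['afternoon', 'sign-a', 'sign-b']
# after parameter 2: ['afternoon', 'sign-a']
# after parameter 3: ['afternoon']
```
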
This study suggests that the formational parameters of signs are extracted online as
they become available to the visual perceptual system. When a signer produces a sign,
the final handshape is not usually fully articulated until the signer’s hand has arrived
at the initial location of the sign, meaning that location information is reliably available
prior to accurate information concerning handshape. It is only when the handshape
has been fully articulated at the initial location that the sign’s movement is executed,
meaning that the sign movement is the last parameter to become available to the
addressee.
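
The gating procedure itself can be summarized in a short sketch. Everything here is invented for illustration: the segments, the sign glosses, and the scripted participant whose early guesses share location with the target before movement settles its identity.

```python
# Sketch of the gating procedure: ever-longer fragments of one sign are
# shown, and the participant guesses after each gate. Segments, sign
# glosses, and the scripted participant are invented for illustration.

def run_gating_trial(segments, guess_fn, target):
    shown = []
    for gate, segment in enumerate(segments, start=1):
        shown.append(segment)          # the next segment extends the fragment
        if guess_fn(shown) == target:  # participant guesses from what was shown
            return gate                # identified before the whole sign played
    return len(segments)               # entire stimulus was presented

# Scripted guesses: early errors share location with the target, as in the
# pattern Emmorey and Corina (1990) report, before movement settles it.
scripted = iter(["NAME", "NAME", "AFTERNOON"])
gate = run_gating_trial(["seg1", "seg2", "seg3", "seg4"],
                        lambda shown: next(scripted), "AFTERNOON")
print(gate)  # -> 3
```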

4.2. Priming studies


As the formational parameters become available to the visual system in the order
location-handshape-movement, it is tempting to think of these as analogues to onset-
coda structures that are known to influence lexical access in spoken language (Marslen-
Wilson/Welsh 1978). As the perceptually initial parameter, location could be construed
as the sign’s ‘onset’ with handshape and/or movement constituting the sign’s ‘rhyme’.
If this were the case, then current models of lexical access in spoken language would
predict what are termed ‘priming effects’ as a result of the location parameter. In
priming studies, one examines the influence of an initial sign (the prime) on the proc-
essing of a subsequent sign (the target). The basic idea is that if pre-lexical representa-
tions of location are used to access the long-term store of lexical representations, then
all representations sharing that parameter will receive some activation. As more evi-
dence becomes available as the sign unfolds over time, the range of possible candidates
narrows until one representation receives enough activation for it to be declared the
‘winner’ and sign recognition (or sometimes misrecognition, see below) has taken
place. However, the rival lexical candidates who were knocked out of the race have
had their activation levels attenuated as a result of inhibition from the ‘winner’. As a
result, if a subsequent sign is one of those candidates that shared location with the
target, then it will take longer than normal to reach threshold and be declared a ‘win-
ner’. In other words, the first sign has primed recognition of the second sign, resulting
in an inhibitory slowing of sign recognition.
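
In reaction-time terms, such inhibitory priming is simply a positive difference between the shared-parameter and unrelated prime conditions, as the following toy sketch (with invented reaction times) makes explicit.

```python
# Toy sketch of the directional logic of a priming effect: inhibition shows
# up as slower lexical decisions after a prime sharing the critical
# parameter. All reaction times (in ms) are invented.

from statistics import mean

rt_shared_location = [712, 698, 745, 730]  # prime and target share location
rt_unrelated       = [655, 662, 640, 671]  # prime and target share nothing

effect = mean(rt_shared_location) - mean(rt_unrelated)
print(f"{effect:+.1f} ms")  # positive = inhibition; negative = facilitation
```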
The first study to look at this type of priming in a sign language was reported by
Corina and Emmorey (1993). They presented sequences of signs to participants in
prime-target pairs, where prime and target shared one parameter (either handshape,
location, or movement). Some of the targets were real ASL signs whereas others were
nonsense signs that were legal given the sign formation rules of ASL but did not occur
in the language. The participants’ task was to indicate whether or not the target signs
were real or nonsense signs (termed a ‘lexical decision task’). The idea behind the task
is that if lexical access takes place, then participants can reliably indicate that they have
seen a ‘real’ sign. If no lexical access occurs, then a ‘nonsense’ response is initiated. If
a formational parameter is being used to define a set of potential candidates, then
‘real’ sign decisions for target signs should be slower when the prime and target share
that parameter. Corina and Emmorey reported that while shared handshape had no
effect on lexical decision speed, shared location resulted in a slower decision and
shared movement resulted in a faster decision. This is in line with the findings of
Emmorey and Corina (1990), who suggested that location is the first formational pa-
rameter to be extracted and thus presumably the parameter that defines the initial
cohort of possible lexical items.
Dye and Shih (2006) took this approach a stage further, based upon a study of
sign similarity judgments by Hildebrandt and Corina (2002) which suggested that a
combination of location and movement might be a salient parameter bundle for deaf
signers. Dye and Shih (2006) extended the Corina and Emmorey (1993) study by (i)
using another sign language ⫺ BSL, (ii) having nonsense signs as both primes and
targets, and (iii) factorially combining handshape, location, and movement parameters
to give prime-target pairs separated by 50 milliseconds (msec) that could share any
combination of one, two, or three parameters (the latter being the same sign repeated
as both prime and target). Their data indicated that when a prime and target shared
location and movement, a faster lexical decision was observed. Furthermore, this facili-
tation was only evident for sign-sign pairs. While Dye and Shih argued that their data
suggests a role for location-movement parameter bundles in lexical access, it is possible
that the long reaction times observed in their study suggest that their effect reflects a
later stage of processing. The finding that nonsense sign primes did not influence lexical
decision times, despite sharing location and movement with targets, suggests that the
effect is not pre-lexical, as facilitation at this stage of processing would be predicted
for nonsense sign primes as well as for real sign primes. The finding of facilitation, as
opposed to inhibition, contradicts the findings of Corina and Emmorey (1993), who
reported that shared location inhibited lexical decisions about a target sign. In this
regard, one important limitation of the Dye and Shih (2006) study should be noted.
Typically in priming studies, a target sign appears more than once and is preceded by
primes that differ in their relationship to that target sign. An example of such a set of
prime-target pairs is given in Figure 29.6. Keeping the target sign the same allows one
to consider the effects of different primes on the processing of the same target. In the
Dye and Shih study, however, different targets were used in the different prime condi-
tions.

Fig. 29.6: Each row represents a possible prime-target pair in a formational priming study. Each
pair uses the same sign as a target (apple), but the prime varies (here, candy = shared
location and movement; onion = shared handshape and movement; book = no param-
eters shared). In this way, the nature of the formational similarity between prime and
target can be systematically manipulated to determine the role of formational param-
eters in lexical access, controlling for differences in lexical access times for different
target signs.
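
The factorial manipulation used by Dye and Shih can be enumerated directly: every subset of the three parameters defines one prime-target sharing condition. The sketch below is illustrative and is not the authors' actual stimulus-construction code.

```python
# Enumerating the factorial design: every subset of the three parameters is
# one prime-target sharing condition. Illustrative only.

from itertools import combinations

parameters = ("handshape", "location", "movement")
conditions = [set(c) for r in range(len(parameters) + 1)
              for c in combinations(parameters, r)]

for shared in conditions:
    print(sorted(shared) if shared else "unrelated (no parameters shared)")
# 8 conditions: 1 unrelated, 3 sharing one parameter, 3 sharing two, and
# 1 sharing all three (prime and target are the same sign)
```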
A priming study conducted using ASL by Corina and Hildebrandt (2002) that con-
sidered the effects of different primes on the same target sign looked at pairs that were
unrelated, shared location, or shared movement. Prime-target pairs were presented
with either a 100 msec or a 500 msec gap between signs in a lexical decision task. With
an inter-stimulus interval (ISI) of 500 msec, no priming effects were obtained. How-
ever, with an ISI of 100 msec, there was some evidence (albeit statistically non-signifi-
cant) that sharing location or movement resulted in inhibition of lexical decision for
the target sign. The most recent study of formational priming comes from a group
studying yet another sign language ⫺ Spanish Sign Language (LSE). Carreiras et al.
(2008, Experiment 3) used an ISI of 100 msec, and also examined the influence of
different prime signs on the same target signs. In their study, they looked at the effects
of shared location and shared handshape on lexical decision times. They reported sig-
nificant inhibitory effects when a prime and target shared location, with the effect
confined to judgments about real signs (‘yes’ responses) rather than nonsense signs
(‘no’ responses). This finding of inhibition of lexical decision for real signs only is
strongly suggestive of a lexical effect, with a sign’s location defining the initial cohort
of possible signs. The presentation of the prime sign results in the lexical representation
of that sign ‘winning’ the lexical access competition, and in the process inhibiting the
activation of signs in the same location-defined cohort. When one of these signs subse-
quently appears as a target sign, lexical decision takes longer as a result. For hand-
shape, Carreiras et al. reported that nonsense target signs were rejected more rapidly
if they shared the same handshape as the prime. While it is tempting to interpret this
as a pre-lexical effect caused by priming of the formational parameter detectors for
handshapes, it is problematic that the effect did not emerge when the targets were
actual LSE signs.

4.3. Sign spotting

Studies using a lexical decision task and sign language stimuli are
few in number, and it is difficult to draw firm conclusions based upon published data.
The current hypothesis is that a sign’s location parameter is the first to be extracted
by the visual system, and that this parameter serves as the basis for initial access of
the mental lexicon. One criticism of lexical decision tasks is that they do not reflect
normal language processing. It is highly unusual for speakers or signers to make deci-
sions about lexical status during natural language comprehension. Indeed, from a
psychological perspective, the lexical decision task adds an extra stage of processing
(the lexical decision itself) that needs to be accounted for when interpreting data. An
interesting study by Orfanidou et al. (2009) used a sign-spotting task, where partici-
pants viewed pairs of signs that either consisted of a nonsense sign and a BSL sign, or
two nonsense signs. The participant had to indicate when they saw a real BSL sign
rather than a nonsense sign and after indicating that a BSL sign had been observed,
the participant then had to produce the sign that they thought they had seen. This task
therefore requires lexical access in order for a sign to be recognized as a BSL sign, but
further requires the participant to produce a BSL sign based upon the result of that
process rather than make a yes/no decision about lexicality. By focusing upon misper-
ceptions (where participants falsely reported seeing a BSL sign) and examining the
properties of the misperceived sign and the participant's response, Orfanidou et al.
sought to make inferences about the sign recognition process. Figure 29.7 shows an
example of a sign misperception taken from the Orfanidou et al. study. In this example,
the nonsense sign stimulus has been misperceived and reported as being the BSL sign
follow. There were two key findings reported by Orfanidou et al. Firstly, their partici-
pants tended to 'regularize' nonsense signs on the basis of phonotactic constraints in
BSL sign formation. For example, the nonsense sign is produced by two moving hands
that have different handshapes ⫺ this sign was misperceived as having the same hand-
shape on each hand and thus conforming to the BSL sign follow. However, Orfanidou
et al. point out that not all of the misperception errors they obtained can be attributed
to phonotactic violations. The second key finding was that the formational parameters
differed in their susceptibility to misperception. Most of the misperceptions that they
recorded involved changes in handshape or movement. Misperceptions of a sign's loca-
tion were relatively rare. Orfanidou et al. suggest that this may be attributable to the
early extraction of this parameter and its primacy in lexical access (see above) or to
the fact that location is the easiest of a sign's parameters to perceive (also see
Orfanidou et al. (2010); for studies on sign recognition involving signs from Sign Lan-
guage of the Netherlands, see Arendsen/van Doorn/de Ridder (2009) and Holt et al.
(2009)).

Fig. 29.7: A reproduction of a participant's response error (follow) to a phonotactically incorrect
target sign (*follow) from a sign-spotting task used by Orfanidou et al. (2009). The
sign *follow is unallowable in BSL because the language requires signs with two mov-
ing hands to use only one handshape. After viewing this sign, the deaf signer misper-
ceived it as follow, which represents a correction of the phonotactic violation.
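
The phonotactic constraint invoked here, and the way a regularizing misperception repairs a violation of it, can be stated compactly. The feature representation and the handshape labels below are simplified assumptions, not the coding used by Orfanidou et al.

```python
# Sketch of the constraint that a sign in which both hands move must use a
# single handshape, and of a regularizing misperception that repairs a
# violation. Representation and handshape labels are simplified assumptions.

def violates_two_hand_constraint(sign):
    """True if both hands move but carry different handshapes."""
    return (sign["h1_moves"] and sign["h2_moves"]
            and sign["h1_shape"] != sign["h2_shape"])

nonsense_stimulus = {"h1_moves": True, "h2_moves": True,
                     "h1_shape": "A", "h2_shape": "B"}   # ill-formed in BSL
reported_follow = dict(nonsense_stimulus, h2_shape="A")  # what signers reported

print(violates_two_hand_constraint(nonsense_stimulus))  # True: phonotactic violation
print(violates_two_hand_constraint(reported_follow))    # False: violation repaired
```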
The number of studies of sign language processing is small, and much work needs
to be done. As for studies of working memory, the focus has been on understanding
sign recognition in terms of the models and accounts that have been developed for
spoken languages. At one level this makes good sense. There has been a lot of work
on developing these models in the spoken language literature, and there is no reason
a priori to suspect that the processing of sign languages ⫺ at least at the level of the
individual sign ⫺ will be highly influenced by the difference in modality. For now, it
seems clear that signs can be represented internally in terms of their formational pa-
rameters and that these representations play a role not only in memory but also in the
comprehension of individual signs. Of course, understanding sign language requires
much more than the comprehension of individual signs. The ways in which those signs
are combined to form sentence-like or phrase-like blocks of meaning is also important,
as is the way in which these blocks of meaning combine to provide an understanding
at the level of a whole discourse. Studies of such higher-level sign processing are few
(Morgan 2002, 2006) and represent a clear need for future study.

5. Exploiting the visual modality

While significant progress has been made by treating sign languages as just ‘languages’
and applying what has been learned from the study of spoken languages, there are
ways in which sign languages differ from spoken languages that may have implications
for how they are processed and understood by language users.
Perhaps the most obvious of these differences is the way in which sign languages
exploit the visual modality by using some degree of iconicity. Iconicity refers to the
resemblance between an object or action and the word or sign used to represent that
object or action (see chapter 18 for discussion). Klima and Bellugi (1979) provide an
excellent discussion of what is meant by a sign being ‘iconic’ and go to great lengths
to point out that (a) many signs in ASL (and other sign languages) are non-iconic, and
(b) that iconic signs vary from one sign language to another, and are thus to some
extent conventionalized forms. Poizner, Bellugi, and Tweney (1981) showed that highly
iconic signs are not more easily remembered than signs that are highly opaque; Atkin-
son et al. (2005) reported that signers with word-finding difficulties following stroke
found iconic signs no easier to retrieve than non-iconic signs; and the work of Richard
Meier (1991) has suggested that iconicity is not a factor in the early sign language
acquisition of deaf children. Thus, although some individual signs may appear to be
similar in form to their referents, it seems this has little or no impact on sign language
processing at the level of individual signs. However, some recent treatments of ASL
at the narrative or discourse level have suggested that in order to understand ASL,
the addressee must process 'surrogates' (Liddell 2003) or 'depictions' (Dudis 2004) that
are being produced by the signer. These authors suggest that sign languages, at a higher
level of processing, are produced and understood in ways that are very different to
spoken languages. They argue that the signer creates a visual scene and ‘paints a pic-
ture’ for the addressee, utilizing the visual medium and the signing space to convey
meaning in ways that are difficult, if not impossible, in spoken languages. Once psycho-
linguistic studies of sign languages move past a focus on individual signs and start to
look at how meaning is constructed at the level of discourse or whole texts, the work
of linguists such as Liddell and Dudis will lead to interesting hypotheses about sign
language processing at these higher levels.
A related point concerns the productive lexicon of sign languages (Wallin 1990).
Whereas the spontaneous production of neologisms is rare in many spoken languages,
sign language users often create linguistic utterances that are novel and will not have
been seen previously by the addressee. In most cases, the addressee will understand
the utterance clearly and without effort. Take, for example, the signs in Figure 29.8.
These are individual signs ⫺ each with a handshape, location, and movement ⫺ that
would not be considered lexicalized signs in ASL. That is, one would not expect that
a lexical entry exists that can be activated through the regular lexical access process as
described above. Rather than being meaningless parameters that are combined in a
rule-governed manner to create a sign, the handshape, location, and movement of
these 'polymorphemic' signs (Wallin 1990) each convey meaning in their own right.
Take the sign bird-on-a-wire: the hooked two-finger handshape denotes a two-legged
creature, the non-dominant handshape acting as the sign's location represents a long,
thin cylinder, and the movement denotes the former coming to rest upon the latter. In
the polymorphemic sign pile-of-papers, the signer uses a flat handshape to denote a
two-dimensional surface (given the context, a piece of paper) and then moves the
handshape upwards to denote quantity (a pile of papers). Clearly, when produced in
isolation, there may be alternative interpretations of this sign. However, when it occurs
in context ⫺ embedded within a discourse ⫺ then interpretation is usually unambigu-
ous. The context allows the formational parameters (be they handshape, location, or
movement) to function as morphemes, blended together by the signer to create a single
sign that is rich in meaning. The ways in which signers create these signs, and the
processes by which signers understand them, present a challenge to theories of sign
language processing that rely upon the accessing of lexical representations to generate
meaning. Future research will need to address whether or not similar processes are
used to access the meaning of lexical and polymorphemic signs.

Fig. 29.8: Polymorphemic signs are complex signs where the formational parameters function as
morphemes. In the sign bird-on-a-wire, the dominant handshape represents a two-
legged animal entity, with the movement denoting that the entity is sitting on a thin
cylindrical object (denoted by the non-dominant handshape). In pile-of-papers, the
dominant handshape denotes a thin, flat object, with the movement being indicative of
the quantity of that object. These sign constructions are common in many, if not all,
sign languages and can be created online, often resulting in never-seen-before signs.
Given an appropriate context, however, they are easily understood. For current models
of lexical access to explain how these signs are understood, they would need to have
lexical entries for such constructions, and the processing system would need to be able
to distinguish formational parameters that are being used contrastively from those that
are being used morphemically.
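
The processing challenge just described can be sketched as follows: a recognizer that only looks whole signs up fails on a novel polymorphemic sign, whereas a compositional route reads each parameter as a morpheme. The mini-lexicon and the morpheme glosses are invented, loosely modeled on Figure 29.8.

```python
# Sketch contrasting whole-sign lookup with a compositional route in which
# parameters function as morphemes. Lexicon and glosses are invented.

LEXICON = {("flat", "neutral-space", "tap"): "TABLE"}  # invented frozen entry

MORPHEMES = {  # parameter -> gloss mappings, loosely modeled on Figure 29.8
    "handshape": {"hooked-two-finger": "two-legged entity", "flat": "flat object"},
    "location":  {"non-dominant-hand": "thin cylindrical object"},
    "movement":  {"come-to-rest": "comes to rest on", "move-up": "in quantity"},
}

def understand(handshape, location, movement):
    lexical = LEXICON.get((handshape, location, movement))
    if lexical is not None:
        return lexical  # ordinary whole-sign lexical access
    # compositional route: parameters are interpreted as morphemes
    return " ".join([MORPHEMES["handshape"].get(handshape, "?"),
                     MORPHEMES["movement"].get(movement, "?"),
                     MORPHEMES["location"].get(location, "?")])

print(understand("hooked-two-finger", "non-dominant-hand", "come-to-rest"))
# -> "two-legged entity comes to rest on thin cylindrical object"
```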
Again, related to both of the issues delineated above, models of sign language proc-
essing will need to address issues of simultaneity in sign languages. While spoken lan-
guages unfold information over time, this is perhaps less so
for sign languages. Clearly, there is some degree of temporal order in sign languages,
both in terms of how the signal unfolds over time (see the gating study of Emmorey
and Corina (1990)) and in how signs are ordered within an utterance (Neidle et al.
2000). However, it is also possible for a signer to produce several pieces of meaning
simultaneously. Perhaps most interesting in this regard, especially as it is often explic-
itly ignored in psycholinguistic studies of sign language processing, is the use of facial
expressions and other ‘non-manual’ features of sign languages (see Pfau/Quer (2010)
for an overview). Non-manual features are used for a variety of reasons in sign lan-
guages, including to convey emotion (McCullough/Emmorey 2009), modify the mean-
ing of verbs (Liddell 1980), denote question type (Zeshan 2004), indicate perspective
(Emmorey/Tversky/Taylor 2000), and to indicate verb agreement (Thompson/Emmo-
rey/Kluender 2006). If one watches a videotape of a signer telling a story, and covers
the head of the signer, the extent to which comprehension is impaired is remarkable.
Indeed, it could be argued that non-manual information in a signed discourse is more
important than handshape information, although this has yet to be tested empirically.
Models of sign language processing will, therefore, need to give an account of how
non-manual and manual features of signed utterances are integrated in order to extract
the meaning of a signed utterance. It remains to be seen whether or not processing
models of tone or melody in spoken languages will be useful in this regard.

6. Conclusion

Much progress has been made over the last 40 years in understanding how sign lan-
guages are processed and understood. There is now substantial evidence that the for-
mational parameters of sign languages are used by signers to represent their language
internally, and are also used to access the meaning of signs. To date, substantial efforts
have been made to draw parallels between spoken and sign language processing. This
perhaps originally stemmed from the concern that sign languages needed to be shown
to be real, natural languages. Much progress has been made with this approach, and
arguably it does not make sense to throw out all that we have learned from studies of
spoken languages. In many ways, it seems speech and sign are processed similarly.
However, future research is now free to ignore this constraint if it wishes. The identity
of sign languages as full, natural languages is now well established. Therefore, future
work can also start to look at ways in which sign languages are processed differently,
grounded in how sign differs from speech and how it exploits the visual modality in
ways that speech cannot. This will inform not only our understanding of sign language
processing, but also our understanding of language as a whole.
This chapter aimed to provide an introduction and summary of the key findings in
the field of sign language processing that is accessible to many readers who may not
have a background in psychology and/or linguistics. Readers who are interested in
methodological details and want to read about these topics in greater depth are re-
ferred to excellent reviews by Corina and Hildebrandt (2002) and Emmorey and Wil-
son (2004).

7. Literature
Arendsen, Jeroen/Doorn, Andrea J. van/Ridder, Huib de
2009 When Do People Start to Recognize Signs? In: Gesture 9(2), 207⫺236.
Atkinson, Jo R./Marshall, Jane/Woll, Bencie/Thacker, Alice
2005 Testing Comprehension Abilities in Users of British Sign Language Following CVA.
In: Brain and Language 94(2), 233⫺248.
Baddeley, Alan D.
1966 Short-term Memory for Word Sequences as a Function of Acoustic, Semantic, and
Formal Similarity. In: Quarterly Journal of Experimental Psychology 18(4), 362⫺365.
Baddeley, Alan D./Hitch, Graham J.
1974 Working Memory. In: Bower, Gordon A. (ed.), The Psychology of Learning and Moti-
vation, Vol 8. New York, NY: Academic Press, 47⫺90.
Baddeley, Alan D./Lewis, Vivien J./Vallar, Giuseppe
1984 Exploring the Articulatory Loop. In: Quarterly Journal of Experimental Psychology 36,
233⫺252.
Baddeley, Alan D./Thomson, Neil/Buchanan, Mark
1975 Word Length and the Structure of Short-term Memory. In: Journal of Verbal Learning
and Verbal Behavior 14, 575⫺589.
Bavelier, Daphne/Newport, Elissa L./Hall, Matt/Supalla, Ted/Boutla, Mrim
2006 Persistent Difference in Short-term Memory Span Between Sign and Speech: Implica-
tions for Cross-linguistic Comparisons. In: Psychological Science 17(12), 1090⫺1092.
Bavelier, Daphne/Newport, Elissa L./Hall, Matt/Supalla, Ted/Boutla, Mrim
2008 Ordered Short-term Memory Differs in Signers and Speakers: Implications for Models
of Short-term Memory. In: Cognition 107(2), 433⫺459.
Boutla, Mrim/Supalla, Ted/Newport, Elissa L./Bavelier, Daphne
2004 Short-term Memory Span: Insights from Sign Language. In: Nature Neuroscience 7(9),
997⫺1002.
Carreiras, Manuel/Gutierrez-Sigut, Eva/Baquero, Silvia/Corina, David
2008 Lexical Processing in Spanish Sign Language (LSE). In: Journal of Memory and Lan-
guage 58(1), 100⫺122.
Corina, David P./Emmorey, Karen
1993 Lexical Priming in American Sign Language. Poster Presented at the 34th Annual Meet-
ing of the Psychonomic Society, Washington, DC, November 1993.
Corina, David P./Hildebrandt, Ursula
2002 Psycholinguistic Investigations of Phonological Structure in American Sign Language. In:
Meier, Richard P./Cormier, Kearsy A./Quinto-Pozos, David G. (eds.), Modality and Struc-
ture in Signed and Spoken Languages. Cambridge: Cambridge University Press, 88⫺111.
Daneman, Meredyth/Merikle, Philip M.
1996 Working Memory and Language Comprehension: A Meta-analysis. In: Psychonomic
Bulletin and Review 3, 422⫺433.
Dudis, Paul G.
2004 Depiction of Events in ASL: Conceptual Integration of Temporal Components. PhD
Dissertation, University of California at Berkeley.
Dye, Matt W. G./Shih, Shui-I
2006 Phonological Priming in British Sign Language. In: Goldstein, Louis M./Whalen, Doug-
las H./Best, Catherine T. (eds.), Laboratory Phonology 8: Varieties of Phonological
Competence. Berlin: Mouton de Gruyter, 243⫺263.
Emmorey, Karen/Corina, David P.
1990 Lexical Recognition in Sign Language: Effects of Phonetic Structure and Morphology.
In: Perceptual and Motor Skills 71, 1227⫺1252.
Emmorey, Karen/Tversky, Barbara/Taylor, Holly
2000 Using Space to Describe Space: Perspective in Speech, Sign, and Gesture. In: Spatial
Cognition and Computation 2, 157⫺180.
Emmorey, Karen/Wilson, Margaret
2004 The Puzzle of Working Memory for Sign Language. In: Trends in Cognitive Sciences
8(12), 521⫺523.
Geraci, Carlo/Gozzi, Marta/Papagno, Costanza/Cecchetto, Carlo
2008 How Grammar Can Cope with Limited Short-term Memory: Simultaneity and Seriality
in Sign Languages. In: Cognition 106, 780⫺804.
Goldinger, Stephen D./Luce, Paul A./Pisoni, David B.
1989 Priming Lexical Neighbors of Spoken Words: Effects of Competition and Inhibition.
In: Journal of Memory and Language 28, 501⫺518.
Gozzi, Marta/Geraci, Carlo/Cecchetto, Carlo/Perugini, Marco/Papagno, Costanza
2010 Looking for an Explanation for the Low Sign Span. Is Order Involved? In: Journal of
Deaf Studies and Deaf Education 16(1), 101⫺107.
Hildebrandt, Ursula/Corina, David
2002 Phonological Similarity in American Sign Language. In: Language and Cognitive Proc-
esses 17(6), 593⫺612.
Holt, Gineke ten/Doorn, Andrea J. van/Ridder, Huib de/Reinders, Marcel J. T./Hendriks, Emile A.
2009 Which Fragments of a Sign Enable Its Recognition? In: Sign Language Studies 9(2),
211⫺239.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kyle, Jim G.
1986 Sign Processes in Deaf People in Working Memory (MRC Report No. G 83⫺25637-N).
University of Bristol: School of Education Research Unit.
Kyle, Jim G./Woll, Bencie
1985 Sign Language: The Study of Deaf People and Their Language. Cambridge: Cambridge
University Press.
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Liddell, Scott K./Johnson, Robert E.
1989 American Sign Language: The Phonological Base. In: Sign Language Studies 64, 195⫺
278.
Marslen-Wilson, William D./Welsh, Alan
1978 Processing Interactions and Lexical Access During Word Recognition in Continuous
Speech. In: Cognitive Psychology 10, 26⫺63.
McCullough, Stephen/Emmorey, Karen
2009 Categorical Perception of Affective and Linguistic Facial Expressions. In: Cognition
110(2), 208⫺221.
Meier, Richard P.
1991 Language Acquisition by Deaf Children. In: American Scientist 79, 60⫺70.
Miller, George A.
1956 The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for
Processing Information. In: Psychological Review 63(2), 81⫺97.
Morgan, Gary
2002 The Encoding of Simultaneity in Children’s BSL Narratives. In: Sign Language & Lin-
guistics 5(2), 131⫺165.
Morgan, Gary
2006 The Development of Narrative Skills in British Sign Language. In: Schick, Brenda/
Marschark, Mark/Spencer, Patricia (eds.), Advances in Sign Language Development in
Deaf Children. Oxford: Oxford University Press, 314⫺343.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert G.
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Struc-
ture. Cambridge, MA: MIT Press.
Orfanidou, Eleni/Adam, Robert/McQueen, James M./Morgan, Gary
2009 Making Sense of Nonsense in British Sign Language (BSL): The Contribution of Differ-
ent Phonological Parameters to Sign Recognition. In: Memory and Cognition 37(3),
302⫺315.
Orfanidou, Eleni/Adam, Robert/Morgan, Gary/McQueen, James M.
2010 Recognition of Signed and Spoken Language: Different Sensory Inputs, the Same Seg-
mentation Procedure. In: Journal of Memory and Language 62, 272⫺283.
Pfau, Roland/Quer, Josep
2010 Nonmanuals: Their Grammatical and Prosodic Roles. In: Brentari, Diane (ed.), Sign
Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press,
381⫺402.
Poizner, Howard/Bellugi, Ursula/Tweney, Ryan
1981 Processing of Formational, Semantic and Iconic Information in American Sign Lan-
guage. In: Journal of Experimental Psychology: Human Perception and Performance 7,
1146⫺1159.
Rönnberg, Jerker/Rudner, Mary/Ingvar, Martin
2004 Neural Correlates of Working Memory for Sign Language. In: Cognitive Brain Research
20(2), 165⫺182.
Rudner, Mary/Rönnberg, Jerker
2008 Explicit Processing Demands Reveal Language Modality-specific Organization of
Working Memory. In: Journal of Deaf Studies and Deaf Education 13(4), 466⫺484.
Salamé, Pierre/Baddeley, Alan
1982 Disruption of Short-term Memory by Unattended Speech: Implications for the Struc-
ture of Working Memory. In: Journal of Verbal Learning and Verbal Behaviour 21,
150⫺164.
Stokoe, William C./Casterline, Dorothy C./Croneberg, Carl G.
1976 A Dictionary of American Sign Language on Linguistic Principles. Silver Spring, MD:
Linstok Press.
Thompson, Robin/Emmorey, Karen/Kluender, Robert
2006 The Relationship Between Eye Gaze and Verb Agreement in American Sign Language:
An Eye-tracking Study. In: Natural Language and Linguistic Theory 24, 571⫺604.
Wallin, Lars
1990 Polymorphemic Predicates in Swedish Sign Language. In: Lucas, Ceil (ed.), Sign Language
Research: Theoretical Issues. Washington, DC: Gallaudet University Press, 133⫺148.
Wilson, Margaret/Emmorey, Karen
1997 A Visuospatial “Phonological Loop” in Working Memory: Evidence from American
Sign Language. In: Memory and Cognition 25(3), 313⫺320.
Wilson, Margaret/Emmorey, Karen
1998 A “Word Length Effect” for Sign Language: Further Evidence for the Role of Lan-
guage in Structuring Working Memory. In: Memory and Cognition 26(3), 584⫺590.
Wilson, Margaret/Emmorey, Karen
2001 Functional Consequences of Modality: Spatial Coding in Working Memory for Signs.
In: Dively, Valerie/Metzger, Melanie/Taub, Sarah/Baer, Anne Marie (eds.), Signed Lan-
guages: Discoveries from International Research. Washington, DC: Gallaudet University
Press, 91⫺99.
Wilson, Margaret/Emmorey, Karen
2003 The Effect of Irrelevant Visual Input on Working Memory for Sign Language. In: Jour-
nal of Deaf Studies and Deaf Education 8(2), 97⫺103.
Wilson, Margaret/Bettger, Jeffrey/Niculae, Isabela/Klima, Edward
1997 Modality of Language Shapes Working Memory: Evidence from Digit Span and Spatial
Span in ASL Signers. In: Journal of Deaf Studies and Deaf Education 2(3), 150⫺160.
Zeshan, Ulrike
2004 Interrogative Constructions in Signed Languages: Cross-linguistic Perspectives. In: Lan-
guage 80(1), 7⫺39.

Matthew W.G. Dye, Champaign, Illinois (USA)

30. Production
1. Introduction
2. Models of language production
3. The mental lexicon
4. Language production errors: slips of the hand compared to slips of the tongue
5. Monitoring
6. Interface conditions and modality
7. Relation of sign language production studies to other psycholinguistic and neurolinguistic
research
8. Literature

Abstract
This chapter is concerned with language production in spoken and sign languages, espe-
cially in German Sign Language. Besides results from tip-of-the-finger, gating, and prim-
ing studies, we discuss another relevant data class, namely slips of the tongue and slips
of the hand. On the basis of an extensive slip corpus, we show that the attested error
types are similar in both language modalities whereas the distribution of affected units,
in particular, is different. An investigation of monitoring reveals further interesting differ-
ences. The observed asymmetries are taken to result from the characteristic nature of
sign language phonology and morphology, viz. simultaneity and phonological reduction
in compounding processes. Given our basic assumption that the language processor is
amodal, the language production model has to be revised in two ways: firstly, sign lan-
guage has to be incorporated, and secondly, the external loop for error detection in
spoken languages loses importance when sign language production and monitoring is
considered.
1. Introduction
Since the very beginnings of modern psycholinguistic research, language production
errors, mostly speech errors, have played a major role as a rich source of evidence for
processes and models of language production. Numerous researchers have appreciated
the epistemological value of production errors as a “window to the mind” (Fromkin
1973) and have taken regularities found in the error patterns as evidence for the nor-
mal functioning of the language processor. The first speech-error corpus was compiled
by Rudolf Meringer (Meringer/Mayer 1895). Many of the corpora that followed were
of the same kind, that is, so-called ‘pen-and-paper corpora’. Speech errors have long
been a privileged data class, and the first models of language production were almost
exclusively based on them (Garrett 1975; Fromkin 1973, 1980; Dell/Reich 1981; Stem-
berger 1985; Levelt 1989). Later, methodical and methodological advances in psycho-
linguistics have led to a plurality of research methods, comprising “competing-plans
techniques” for inducing predictable slips in speech (Baars 1992), for instance, the
SLIP-technique (Spoonerisms of Laboratory-Induced Predispositions), reaction-time
measurements such as the word-picture interference paradigm (Levelt/Roelofs/Meyer
1999), and neurolinguistic methods such as fMRI, PET, and ERP.
Research on sign language production followed a similar pattern of development.
The first data to yield evidence for the sign production process came from studies of
spontaneous slips of the hand in American Sign Language (ASL) (Klima/Bellugi 1979;
Newkirk et al. 1980). In addition to this psycholinguistic reasoning, slips of the hand
were presented for more political reasons, namely in order to prove that sign languages,
too, had sub-lexical structure. Klima and Bellugi’s first collection of slips of the hand
(n = 131) was particularly rich in phonological errors in which only a single phonologi-
cal parameter (most frequently handshape) was affected. Such errors were considered
unambiguous proof that ASL ⫺ and, by implication, all other sign languages ⫺ has a
compositional structure that includes a level of phonology. Elements on this level could
be separately affected while others were spared, exactly as in phonological slips of
the tongue.
After Stokoe’s (1960) ground-breaking study on the (sub-lexical) structure of ASL,
Klima and Bellugi’s psycholinguistic proof was a further cornerstone on which modern
sign language research could base the by now widely accepted claim that sign languages
are truly natural languages. This acceptance had a most welcome impact on the theo-
retical debate on modularity, autonomy, and universality of language. If language is an
autonomous cognitive module (Fodor 1983), which itself has a modular structure,
the same structure should be found in sign languages as well, irrespective of the stri-
king modality difference. Sign language research, including psycholinguistic studies,
yielded increasing unambiguous evidence for common abstract notions and processes
despite superficial modality differences (Meier/Cormier/Quinto-Pozos 2002; Sandler/
Lillo-Martin 2006), thereby providing strong support for the universality claim (Hohen-
berger 2007, 2008). As for general theories of language, sign languages can now be
used for testing the modularity and universality claims of models of language produc-
tion that were previously based exclusively on evidence from spoken languages (Ho-
henberger/Happ/Leuninger 2002; Leuninger et al. 2004).
This chapter is organized as follows: in section 2, we will briefly sketch various
models of language production and discuss the degree to which they are also viable
models for sign language production. In section 3, we will outline the structure of the
mental lexicon, how it is accessed in language production, and what kinds of evidence
have been adduced for two-stage lexical access. Ample evidence for all stages of the
sign language planning process, mostly stemming from slip of the hand data, will be
provided in section 4. We will show that qualitative slip categories and units previously
proposed for slips of the tongue likewise apply to slips of the hand, but that their
quantitative distribution differs. Section 5 is devoted to monitoring of sign language
production, that is, how signers supervise their sign production and repair erroneous
utterances. In section 6, we will address issues of modality, typology, and interface
conditions. Finally, in section 7, we will conclude with a summary of the relevance of
this topic to psycholinguistic and neurolinguistic research on sign languages.

2. Models of language production

Contemporary models of language production fall broadly into three classes: (i) serial-
modular, (ii) interactive spreading-activation, and (iii) cascading models. These models
make different assumptions about the nature of the language planning process, namely
(i) whether in lexical access grammatical and phonological information becomes avail-
able in a strictly serial order or at the same time, and (ii) whether there is feedback
between adjacent levels of planning. We will describe the three models and particularly
focus on the serial-modular one, since it is the one most widely used.
The discrete serial-modular model of lexical access proposed by Levelt and col-
leagues is the most comprehensive model of language production; it is depicted in
Figure 30.1 (Levelt 1989; Levelt/Roelofs/Meyer 1999).
According to the model in Figure 30.1, language is produced as follows: first, prever-
bal concepts are prepared which ideally already correspond to lexical concepts. The
output of this planning stage is lexical concepts. In the next step, the lemmas corre-
sponding to these lexical concepts are selected. Lemmas are lexical entries that are
specified only for grammatical features, for example, a verb’s argument structure and
its inflectional features (tense, aspect), or a noun’s φ-features (person, number, gen-
der), and case. With lemma selection, the mental lexicon is accessed for the first time.
Next, morphological information contained in the parts of the word is specified. The
morphemes eventually become encoded phonologically and the word is syllabified.
The output of phonological encoding is the phonological word. With the phonological
word, the core linguistic planning process is completed. In Figure 30.1, we have indi-
cated these core processes in grey. What follows is the phonetic encoding. In spoken
languages with a relatively small number of distinct syllables, these can be directly
drawn from a repository of syllables, the ‘syllabary’. Syllables contain information
about the phonetic gestural forms for the articulation of the word. In the model of
Levelt and colleagues, the final output of the language production model is 'sound
waves’. We deliberately added ‘light waves’ to accommodate the model for sign lan-
guages.
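
The strictly serial, feed-forward character of this architecture can be caricatured in a few lines of code: each stage runs to completion and hands its complete output to the next, with no feedback. The stage contents below are toy placeholders, not the representations assumed by Levelt and colleagues.

```python
# Toy sketch of a discrete serial-modular pipeline: each stage finishes
# before the next begins, and nothing flows back to an earlier stage.
# All stage contents are placeholders.

def conceptual_preparation(message):
    return {"lexical_concept": message}

def lemma_selection(stage_in):
    # first access to the mental lexicon: grammatical features only
    return {"lemma": stage_in["lexical_concept"],
            "features": {"category": "V", "tense": "past"}}

def morphophonological_encoding(stage_in):
    # morphemes are spelled out and the form is syllabified
    return {"phonological_word": stage_in["lemma"].lower() + "-ed"}

def phonetic_encoding(stage_in):
    return {"articulatory_plan": stage_in["phonological_word"]}

output = "WALK"
for stage in (conceptual_preparation, lemma_selection,
              morphophonological_encoding, phonetic_encoding):
    output = stage(output)  # strictly feed-forward, stage by stage
print(output)  # {'articulatory_plan': 'walk-ed'}
```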
Although Levelt et al. did not have sign language in mind when designing their
language production model, we will argue below that findings from sign language pro-
duction research, mainly from slips of the hand, confirm the basic architecture of the
Levelt model and the processes of sign language production.

Fig. 30.1: The model of lexical access (adapted from Levelt/Roelofs/Meyer (1999, 3)). The grey
boxes and circle indicate core language production processes (grammatical encoding
and lexical access). The hand icon indicates that the internal part of the self-monitoring
process is more important for sign language; the mouth icon indicates that the external
part of the self-monitoring process is more important for spoken language. More
detailed information is provided in the text (hand icon taken from http://www.fewo-
brodowin.de/images/hand.png; mouth icon taken from http://www.clipartguide.com ⫺
both pictures were accessed on 2 July 2011).

In section 6, we will argue
that the crucial difference lies not in the model itself but in the modality-related and
therefore different phonetic interfaces of spoken and sign languages. As can be seen
in Figure 30.1, the monitor feeds back information derived from planning stages as
early as the phonological word, that is, while still being in the phase of ‘covert speech’.
At later stages, when speech is articulated, the monitor feeds back ‘overt speech’ via
the speech perception system. As indicated by the small hand and the mouth icons,
which we added to the model, the internal part of the monitoring loop ⫺ the stage of
‘covert sign’ ⫺ is more important for sign language whereas for spoken language, the
external part of the monitoring loop ⫺ the stage of ‘overt speech’ ⫺ is more impor-
tant. This is not to imply, of course, that each language modality would make use of
only one type of feedback loop. Rather, it means that both modalities have specific
preferences concerning covert and overt monitoring processes. In section 5, we will
discuss various models of language monitoring, focusing on the ways in which the sign
language monitor operates similarly to and differently from the spoken language mon-
itor.
Interactive or spreading-activation models assume a network-like structure for the
language production system (Dell/Reich 1981; Stemberger 1985; Dell 1986; among
many others). Such models, like serial-modular models, assume stages of language
planning, such as conceptual, grammatical, and phonological encoding. However, they
do not assume a modular organization with a strictly serial process of input into a
module, encapsulated processing within the module, and subsequent output from that
module to the next. Thus, interactive models assume that a lexical entry’s word form
(morphological and phonological information) can be accessed simultaneously with its
lemma (grammatical information) and that the word form can influence the selection
of the lemma through a feedback mechanism. In contrast, in the Levelt model, the
selection of the lexical entry's lemma temporally precedes the subsequent activation
of its word form and cannot be influenced by the word form; hence, there is no feed-
back from later planning stages to earlier ones.
If the serial-modular model is taken as the ‘thesis’ and the interactive spreading-
activation model as its ‘antithesis’, the ‘cascading model’ can be considered as their
‘synthesis’. This hybrid model (Peterson/Savoy 1998) shares assumptions with both the
serial-modular and the interactive models. It shares with the serial-modular model the
assumption that there is no feedback between neighboring levels of processing, e.g.,
between the lemma and the word form level. It shares with the interactive model the
assumption that there is parallel activation on all levels of processing, that is, activation
“cascades” from one level to the other without having to await the final output of the
previous level of processing. In a critical review of the time course of activation of
syntactic, semantic, and word form information, Pechmann and Zerbst (2004) show
that despite a clear serial order between the three types of information ⫺ with syntax
preceding semantics and word form ⫺ there is also considerable temporal overlap
between the three. It seems then that the cascading model of language production can
account for most of the empirical evidence. A good overview of the various models of
language production is given in Jescheniak (1999), and a critical evaluation of the
evidence with respect to the three models can be found in Pechmann and Zerbst
(2004).
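
The cascading idea can be illustrated with a toy activation trace: the form level starts receiving activation from the still-incomplete lemma level on every update, yet nothing flows back upward. The update rule and all values are invented.

```python
# Toy activation trace of cascading processing: the form level is fed
# continuously by the (still incomplete) lemma level, with no feedback in
# the other direction. Dynamics and values are invented.

def cascade(steps=5, gain=0.3):
    lemma = form = 0.0
    for t in range(1, steps + 1):
        lemma = min(1.0, lemma + gain)        # lemma level builds up
        form = min(1.0, form + 0.5 * lemma)   # cascades down immediately
        print(f"t={t}: lemma={lemma:.2f} form={form:.2f}")

cascade()
# form activation is already rising at t=1, long before the lemma level
# has reached its ceiling, unlike in a discrete serial model
```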
Although it seems as if interactive spreading-activation models could accommodate
the simultaneous nature of sign languages more naturally than serial models, simultane-
ity in the planning process and simultaneity in the overt expression of language are
not necessarily related. Therefore, the choice of one production model over the other
does not depend on the type of language ⫺ spoken or sign language.
The Levelt model of language production has been extended to accommodate
speech-gesture co-production (Krauss/Chen/Gottesman 2000). This, however, does not
mean that we should look at the gestural side of this extended model as a production
model for sign language. Gestures are not signs ⫺ they only share the same modality.
Rather, for the sake of parallelism, a full model of sign language production would
have to include an oral co-production system as well, where oral gestures would play
a similar role as manual gestures do in spoken language production (see chapter 27,
Gesture, for details).
An account of language production errors informed by grammar theory ⫺ Distrib-
uted Morphology ⫺ has recently been put forward by Pfau (2009). His approach is
noteworthy insofar as it does not use psycholinguistic data as external evidence for
motivating a theory of grammar but, on the contrary, uses grammar theory as a psycho-
logically realistic processing model.

3. The mental lexicon

During sentence production, the signer has to retrieve items from the mental lexicon.
Lexical retrieval has to be, and indeed is, very fast, efficient, and failure-proof. For
example, assuming that an educated adult speaker of English has an active vocabulary
of approximately 30,000 words and produces some 150 words per minute, the speaker
makes the right choice among these alternatives two to five times per second (Levelt
1989).
As Garrett (1975) and Levelt (1989) show, language processing data support the
assumption of a two-level mental lexicon: a so-called lemma lexicon, in which semantic
(thematic roles) as well as syntactic information (e.g. subcategorization frames and
syntactic features, such as gender) are represented, and a form (lexeme) lexicon, which
stores morphological and phonological structures. Evidence for this bipartite structure
comes from Tip-of-the-Tongue (ToT) phenomena, slips of the tongue (semantic vs.
formal substitutions, blends), lexical frequency effects, and anomia (Jescheniak 1999).
Furthermore, evidence from lexical processing of morphologically complex words in
spoken languages suggests that regularly inflected words are retrieved by applying the
respective rules whereas irregular forms are stored in an associative network and are
not decomposed. From this, many researchers conclude that the brain is equipped with
two mechanisms, and a so-called Dual-Route-model (Pinker/Prince 1992) has been
postulated. Recently, however, this assumption has been challenged by Yang (2002),
who argues that both types of lexical forms are acquired and processed by rules.
Thus, with respect to sign language lexicons, the following questions have to be
answered: Is the mental lexicon of sign languages organized in an analogous way, that
is, as a semantic-conceptual and formal mental lexicon? How are lexical entries struc-
tured, specifically, how are morphologically complex words stored? How does lexical
access work? In the following four subsections, we will discuss relevant data from Tip-
of-the-Finger (ToF) studies, iconicity, and gating and priming experiments concerning
the status of phonological vs. morpho-syntactic movement, and slips of the hand. Prim-
ing studies are discussed in more detail in chapter 29 on sign language processing.

3.1. The Tip-of-the-Finger (ToF) phenomenon

For spoken English, the first experimental study testing the ToT phenomenon was
conducted by Brown and McNeill (1966). ToT is that state of a speaker in which he is
sure he knows a certain word and its meaning, but cannot retrieve it from his mental
lexicon. In aphasia, word-finding difficulties also occur, often called ‘anomia’. It has
been observed that aphasic speakers often offer a circumlocution or make comments
such as “I know what it is, but I don’t remember the word”. Interestingly, in languages
with a gender system, aphasics are able to access the (abstract) gender of a noun whose
word form they cannot retrieve. In Badecker, Miozzo, and Zanuttini’s (1995) study,
their aphasic “Dante” scored around 95 % correct in gender retrieval for regular as
well as irregular Italian nouns. This dissociation between preserved semantic and syn-
tactic knowledge, including gender, on the one hand, and inability to access the phono-
logical word form, on the other hand, provides evidence for a two-stage access to the
mental lexicon, in terms of lemma and lexeme.
ToT states probably occur in all spoken languages, and in sign languages, too, as
the experiments conducted by Thompson, Emmorey, and Gollan (2005) show. In this
study, deaf subjects with competence in ASL were tested retrieving fingerspelled and
lexical proper names. Proper names were used as they often cause ToT states, and
therefore it was hypothesized, also ToF states. Thompson et al. found that when finger-
spelled items were to be retrieved, the ToF states parallelled ToT states, with initial
handshapes retrieved first, just as initial phonemes are retrieved first in the ToT state.
However, ToF states and ToT states differ because of the high degree of simultaneity
in the phonological make-up of signs as opposed to the highly serial character of spo-
ken language phonology (see chapter 3, Phonology). The results did not show a specific
superiority of handshape retrieval for lexical signs but rather recall of as many as three of the
four phonological features. As soon as the fourth feature, viz. movement, is identified,
the sign is retrieved. These ToF states are evidence for the lexical storage of signs as
a set of arbitrary phonological features, just as in spoken languages.
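
The two-stage entry and the ToF state can be rendered schematically: the lemma level is reached, but access to the form level fails, leaving only partial phonological information (here, three of the four parameters, with movement missing). The entry itself is invented.

```python
# Sketch of a two-stage lexical entry and a ToF state: the lemma (meaning
# and grammar) is retrieved, but form access fails, yielding only partial
# phonological information. All entry contents are invented.

entry = {
    "lemma": {"meaning": "proper name", "category": "N"},
    "form":  {"handshape": "V", "location": "temple",
              "orientation": "palm-in", "movement": "twist"},
}

def retrieve(entry, form_access_succeeds):
    if form_access_succeeds:
        return entry["lemma"], entry["form"]
    # ToF: the lemma is available, but only fragments of the form come
    # through (three of the four parameters, with movement missing)
    partial_form = {k: v for k, v in entry["form"].items() if k != "movement"}
    return entry["lemma"], partial_form

print(retrieve(entry, form_access_succeeds=False))
```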

3.2. Iconicity

Iconicity of signs is one of the prominent features of sign languages, with some signs
exhibiting a non-arbitrary relationship to their referent and hence the semantic content
(see chapter 18, Iconicity and Metaphor, for details). Iconicity is an important test case
for claims about the bipartition of the mental lexicon. Despite the iconicity of some of
the name signs used in the Thompson et al. (2005) study discussed in the previous
section (e.g. switzerland, where the hand draws a cross, as on the Swiss flag), iconicity
played no role in lexical retrieval. Iconic name signs were not more likely to be re-
trieved than non-iconic signs. This is clear counter-evidence to the proposal by Stokoe
(1991) that sign languages exhibit a ‘semantic phonology’ because of the iconicity of
signs, which should prevent ToF states from occurring. In addition, a number of studies
(cf. Emmorey/Mehta/Grabowski 2007) found no differences between the processing of
iconic and non-iconic signs in functional imaging studies. Thus, it can be concluded
that not only non-iconic, but also iconic signs are processed in the mental lexicon (see
also chapter 29, Processing).

3.3. Gating and priming studies

The bipartition of the mental lexicon is also corroborated by psycholinguistic investiga-
tions using the gating paradigm (Grosjean 1980; Emmorey/Corina 1990). Although
these studies are designed to test the perceptual relevance of phonological and mor-
phological structure of lexical items (words and signs), they also shed light on lexical
processing in general. In this paradigm, items are presented in fragments (two-thirds
at one temporal frame and the last third in two temporal frames). Two measures of
sign identification are calculated: (i) the isolation point of a sign, defined as the gate,
at which a subject guesses the correct sign and does not subsequently change his mind;
(ii) the recognition point defined as the point at which the subject guesses the correct
sign with 80 % confidence.
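
Both measures are easily computed from a participant's gate-by-gate responses, as in the following sketch; the trial data are invented for illustration.

```python
# Sketch of the two identification measures defined above, computed from
# one participant's gate-by-gate responses (guess plus confidence rating).
# The trial data are invented.

def isolation_point(trials, target):
    """Earliest gate with a correct guess that is never changed afterwards."""
    point = None
    for gate, (guess, _confidence) in enumerate(trials, start=1):
        if guess == target:
            if point is None:
                point = gate
        else:
            point = None  # a later change of mind resets the isolation point
    return point

def recognition_point(trials, target, criterion=0.8):
    """Earliest gate with a correct guess held with at least 80% confidence."""
    for gate, (guess, confidence) in enumerate(trials, start=1):
        if guess == target and confidence >= criterion:
            return gate
    return None

trials = [("NAME", 0.4), ("AFTERNOON", 0.6), ("AFTERNOON", 0.9)]
print(isolation_point(trials, "AFTERNOON"))    # -> 2
print(recognition_point(trials, "AFTERNOON"))  # -> 3
```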
In Grosjean’s study, the isolation point for English spoken words was about 330
milliseconds (ms) whereas in Emmorey and Corina’s experiment, isolation of ASL
signs required only 240 ms. In contrast to spoken words where 83 % of phonological
information was needed to isolate the item, only 34 % of a sign had to be seen. Emmo-
rey and Corina argue that this asymmetry is due to the specific phonology of sign
languages, as more than one phonological feature appears to be identified simulta-
neously. Isolation of more than one phonological feature at a time narrows down the
lexical alternatives, which explains the higher speed of isolation in sign language.
As is well known from sign language research, movement features are phonologi-
cally distinctive. But movement also fulfills morphosyntactic functions as it is involved
in agreement inflection and aspectual marking of the verb (lines, arcs, and circles). In
these processes, movement is a bound morpheme and adding it to a stem results in a
polymorphemic sign (for details, see chapters 7 and 9 on agreement and aspectual
marking, respectively). The representation of these signs is to a high degree multilay-
ered, that is, various morphological structures are represented simultaneously. Poly-
morphemic signs, however, are not irregular: they are formed by rules attaching bound
morphemes (e.g. classifier morphemes, the aspectual morphemes mentioned above,
bound morphemes involved in derivational processes, etc.) to lexical base forms.
An interesting case with respect to the status of signs as monomorphemic vs. poly-
morphemic is the following: besides storing base forms and morphological rules, the
mental lexicon also contains ‘established’ or ‘frozen’ complex forms, that is, forms that
originate from classifier predicates, but which are no longer productive. For example,
the handshape in funeral in ASL derives from a classifier handshape with the
meaning “people walking forward two by two”. But, contrary to the productive classi-
fier that can be changed to indicate three or more human beings, the handshape in
funeral cannot be altered (cf. Emmorey 2002). Such frozen forms are stored as single
signs and are not composed during retrieval. Although frozen forms can be decom-
posed (for instance, in language games (cf. Sandler/Lillo-Martin 2006, 96 f.) or in slips
of the hand), the difference between morphological composition by rules and retrieval
of frozen forms can be explained by the assumption that there are two storage systems:
a rule driven (symbolic) system and an associative network which stores irregular and
frozen forms. As remarked above, such a model was proposed by Pinker and Prince
(1992) and has been confirmed by psycholinguistic studies on various spoken lan-
guages. The bipartition into decomposable polymorphemic and frozen forms in sign
language supports the idea that the Dual-Route-model is modality-independent (for de-
tails see chapter 8, Classifiers, and chapter 34, Lexicalization and Grammaticalization).
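
The dual-route idea can be sketched as a retrieval function with an associative store of frozen forms alongside a rule-driven compositional route; all entries below are invented placeholders.

```python
# Minimal sketch of the dual-route idea: frozen forms are retrieved whole
# from an associative store, while productive forms are composed by rule
# from a base plus bound morphemes. All entries are invented placeholders.

FROZEN_STORE = {"FUNERAL": "FUNERAL"}  # stored as a single, unanalyzed sign

def compose_by_rule(base, bound_morphemes):
    # the productive, symbolic route: attach bound morphemes to the base
    return base + "".join("+" + m for m in bound_morphemes)

def retrieve(base, bound_morphemes=()):
    if not bound_morphemes and base in FROZEN_STORE:
        return FROZEN_STORE[base]                   # associative whole-form route
    return compose_by_rule(base, bound_morphemes)   # rule-driven route

print(retrieve("FUNERAL"))                     # -> FUNERAL (frozen form)
print(retrieve("VERB", ("habitual-aspect",)))  # -> VERB+habitual-aspect
```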
Gating studies shed light on the question of how polymorphemic signs are repre-
sented in the mental lexicon. In Emmorey and Corina’s (1990) study on ASL, the base
of inflected forms (temporal adverbials formed by changing the movement of the verb)
was isolated earlier than monomorphemic signs involving phonological movement.
This provides evidence for the assumption that the mental lexicon stores base forms
separately and contains productive morphological rules for inflection.
Comparable results were obtained in a repetition priming experiment by Emmorey
(1991). In this study, prime-target pairs were presented to deaf subjects for lexical
decision. Primes were inflected for agreement (dual, reciprocal, multiple) on the one
hand, and for aspect (habitual, continuous) on the other hand. The results revealed
stronger facilitation for aspect than for agreement inflection. This asymmetry in priming
strength reflects a difference in morphological productivity: almost any verb in ASL can be
inflected for aspect, whereas agreement inflection is more restricted. This restriction
lies in the token frequency of verbs allowing agreement morphology, not in the princi-
pled application of agreement processes to verbs (as can be seen when new verbs enter
a sign language; the recently introduced German Sign Language (DGS) verb e-mail,
for instance, is an agreement verb).
As mentioned above, frequency effects are another data class confirming two-stage
access to the mental lexicon (Jescheniak 1999). In contrast to spoken languages, fre-
quency counts for sign languages are not based on large, mostly written corpora, but
on ratings from native signers. Emmorey (2002) reports that high-frequency ASL signs
(e.g. fine, like) are recognized faster than low-frequency signs (e.g. drapes, dye), a
difference commonly attributed to higher resting activation levels and lower recognition
thresholds for frequent items.
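
On a simple threshold view of lexical access, this frequency effect falls out directly; the sketch below is illustrative only, with invented numbers:

# Frequency as resting activation: activation accumulates at a constant
# rate and recognition occurs once it crosses a threshold, so items with
# higher resting levels (high-frequency signs) are recognized sooner.
def steps_to_recognition(resting_activation, threshold=1.0, rate=0.01):
    return (threshold - resting_activation) / rate

print(steps_to_recognition(0.6))  # high-frequency item: 40.0 steps
print(steps_to_recognition(0.2))  # low-frequency item: 80.0 steps
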

3.4. Slips of the hand

Slips of the hand clearly demonstrate the bipartition of the mental lexicon into a lemma
and lexeme component. Two error types show the role of meaning relations in the
mental lexicon: blends and semantic substitutions. In the phrasal blend from DGS
given in (1) (and depicted in Figure 30.2), two collocations which (in this context) are
semantically similar but do not share phonological properties are fused into one. The
two phrases are (ich)-habe-mich-getäuscht (‘I was wrong’) and (das)-stimmt-nicht
(‘That’s not right’).

(1) (ich)-habe-mich-getäuscht-(das)-stimmt-nicht [DGS]


comparable to: ‘I was wroght.’ → ‘I was wrong.’ / ‘That’s not right.’

Fig. 30.2: DGS blend: (ich)-habe-mich-getäuscht-(das)-stimmt-nicht. (a) onset (b) end of the
blend (Leuninger/Hohenberger/Waleschkowski 2007, 328)

The first phrase involved in the blend in (1), (ich)-habe-mich-getäuscht, is a proposition
composed of a single two-handed sign articulated with two M-handshapes. The
palms of the hands face the interlocutor so that upper and lower arm form approxi-
mately a 90° angle, and a symmetrical movement is carried out that brings both arms
into the horizontal plane in front of the body. The second phrase, (das)-stimmt-nicht,
is a single one-handed sign. The handshape is [W], the hand orientation is towards the
interlocutor, and the movement is a so-called α-movement, that is, the movement of
the hands traces the shape of the Greek letter α. This movement expresses negation.
In the blend in (1), features of both signs are blended (see Leuninger/Hohenberger/
Waleschkowski (2007) for a detailed analysis of blends in sign language). Figure 30.2
(a) shows the beginning of the blend, where the signer performs the inward movement
of the one-handed α-movement of the second phrase (das)-stimmt-nicht, however,
with the handshape and place of articulation of the first phrase (ich)-habe-mich-ge-
täuscht. Figure 30.2 (b) shows the end of the blend after the signer has carried out the
movement of the first phrase (ich)-habe-mich-getäuscht. In other words, the whole
utterance consists of a single one-handed sign, the handshape (M) and movement of
which come from (ich)-habe-mich-getäuscht, but combine with the hand rotation of
α-negation of (das)-stimmt-nicht.
Analogous considerations hold for the semantic substitution in (2), illustrated in
Figure 30.3.

(2) va(ter) – to(chter) – bub [DGS]


‘fa- dau- boy’

Here, the signer starts with the handshape and place of articulation of the first syllable
of vater (‘father’), corrects himself to handshape and place of articulation of sohn
(‘son’) (which, because of the compelling downward movement, happens to surface as
tochter (‘daughter’)), before finally articulating the intended sign bub (‘boy’). As all
phonological features of the signs differ from each other, the slip and the repairs can
only be motivated by semantic factors. Such step-wise repairs are known as “conduite
d’approche” in aphasiology. For a detailed discussion of semantic substitutions, see
Hohenberger et al. (2002) and Leuninger et al. (2004). In the following section, we will
add to the picture other types of slips of the hand.

Fig. 30.3: DGS semantic substitution: va(ter) – to(chter) – bub (‘father’ – ‘daughter’ – ‘boy’) (Hohenberger/Happ/Leuninger 2002, 122)

4. Language production errors: slips of the hand compared to slips of the tongue
As mentioned in section 1, spoken language production models have strongly relied
on evidence from slips of the tongue. In particular, the bipartition of the mental lexicon
and the two-stage theory of lexical access have drawn on evidence from language slips
(section 3.4). Therefore, it is natural that studies of sign language production have
relied on evidence from slips of the hand. In this section, we will compare language
production errors across the two modalities against the background of the existing
literature.
The first corpus of slips of the hand comprised spontaneous slips in ASL
(n = 131) and was compiled by Newkirk et al. (1980; see also Klima/Bellugi 1979). They were
able to show, for the first time, that the phonological building blocks of signs (hand-
shape, place of articulation, and movement) could be separately affected in a slip of
the hand in a way similar to phonological errors in slips of the tongue. Besides phono-
logical errors, their data set also comprised morphological and lexical errors. Further-
more, Whittemore (1987) collected “hand twisters” (analogous to ‘tongue twisters’).
This latter slip corpus comprised only phonological errors. Our own research
dates back to a comparative research project (1997⫺2003), where we compiled two
slip corpora, one for slips of the tongue (German) and one for slips of the hand (DGS).
Slips were elicited from speakers and signers who were asked to re-tell picture stories
under various stress conditions such as cumulative repetitions, unordered pictures, or
fast production (for details, see Hohenberger et al. 2002). Slip sequences were tran-
scribed, analyzed according to type of slip and affected unit (among other criteria), and fed
into a multimedia database, along with a video of the original slip. Since only our
corpus comprises the whole range of slips of the hand (all slip types, all affected units),
we will draw from this rich data source in the following.

4.1. Slips of the hand: an example

Consider the DGS slip of the hand in (3), which comprises all relevant aspects of a
production error: the error itself, the realization of the error, an interruption, various
verbal and non-verbal editing expressions, and a complete repair (in this example, we
provide separate gloss lines for the right hand (rh) and the left hand (lh); [circ] =
circular movement; [stat] = static; [path] = path movement). The error is illustrated in
Figure 30.4.
                hh
(3) rh: mädchen fahrrad[circ] // realized, smiling // vergebärdler //  [DGS]
    lh:         fahrrad[stat]                         vergebärdler
        girl    bicycle                               slip of the hand

                hh
    rh: mädchen fahrrad[circ] oder roller[path] schieb[path]
    lh:         fahrrad[circ]      roller[stat] schieb[path]
        girl    bicycle       or   scooter      push
    ‘Does the girl push a bicycle or a scooter?’

The signer’s intention is to produce the yes-no question “Does the girl push a bicycle
or a scooter?” The slip occurs in the sign fahrrad (‘bicycle’). The DGS sign fahrrad
is a two-handed sign of Battison’s (1987) type 1, in which both hands have the same
handshape and execute the same (alternating) movement. The error (see Figure 30.4b)
consists of the non-dominant hand not moving but remaining static, while the dominant
hand correctly performs the circular movement. What is missing is the spreading of
the movement’s feature specification [circular] from the dominant to the non-dominant
hand. More formally, in feature-geometric terms, the association line between the two
hands for the movement feature is missing. Where does this error come from? The
source is the subsequent sign roller (‘scooter’), which serves as the prosodic template
for the error. roller is a two-handed sign of Battison’s type 2, in which both hands
have the same handshape but only the dominant hand moves while the non-dominant
hand remains static (see Figure 30.4i). In sum, the error is a syntagmatic-contextual,
anticipatory, phonological slip of the hand: syntagmatic-contextual because the source
of the error can be found in its syntagmatic context; anticipatory because the error antici-
pates a feature of an upcoming sign; phonological because a sub-lexical feature of the
sign ⫺ the movement specification of the non-dominant hand ⫺ is affected.
The signer realizes the error after completion of the erroneous sign and interrupts
her utterance, laughing (Figure 30.4c). Laughter is a non-verbal editing expression
which sometimes accompanies a production error. The interruption phase continues,
the signer bows her head and slaps her thighs in amused amazement (Figure 30.4d).
After this, she resumes her upright posture, now shaking her head, still laughing (Fig-
ure 30.4e). She then explicitly states that she has just produced a slip of the hand by
Fig. 30.4: Slip sequence ‘Does the girl push a bicycle or a scooter?’; abbreviations: [circ] = circular movement; [stat] = static; [path] = path
movement; note that the nonmanual marking of the yes-no question has been omitted here, but see example (3).


signing vergebärdler (‘slip of the hand’, Figure 30.4f). Then, she starts all over again,
this time signing each word correctly: mädchen (‘girl’, Figure 30.4g), fahrrad (‘bicy-
cle’, Figure 30.4h), oder (‘or’, omitted in Figure 30.4), roller (‘scooter’, Figure 30.4i),
and schieb (‘push’, Figure 30.4j).

4.2. Quantitative and qualitative aspects of the DGS slips of the hand corpus

Language production errors can be classified according to the category of the slip and
the affected unit. Slip categories fall into two main classes: paradigmatic slips (semantic
and formal substitutions and blends) and syntagmatic, contextual slips (anticipations,
perseverations, exchanges, and fusions). These two types of slips result from the wrong
selection of a unit and wrong serialization, respectively. Slip units include phrase, word,
morpheme, and phonological unit (segment, phonological feature). In the following sec-
tion, we will look at the distribution of slip categories and units in the DGS slips of
the hand corpus (n = 640) and then present a sample of slips, describing them qualita-
tively, and pinpointing at what processing stage in the Levelt model (Figure 30.1) they
can be located. The examples have been chosen such that all major slip categories and
all major planning units are included. All examples are cross-classified for category
and unit so that the sample gives evidence for both major categories and units.
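
The cross-classification can be pictured as one annotation record per slip token; the following schematic coding (the field and value names are ours, not the actual coding scheme of the corpus) shows the two dimensions:

from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    # paradigmatic slips (wrong selection)
    SEMANTIC_SUBSTITUTION = "semantic substitution"
    FORMAL_SUBSTITUTION = "formal substitution"
    BLEND = "blend"
    # syntagmatic, contextual slips (wrong serialization)
    ANTICIPATION = "anticipation"
    PERSEVERATION = "perseveration"
    EXCHANGE = "exchange"
    FUSION = "fusion"

class Unit(Enum):
    PHRASE = "phrase"
    WORD = "word"
    MORPHEME = "morpheme"
    PHONOLOGICAL_FEATURE = "phonological feature"

@dataclass
class Slip:
    category: Category
    unit: Unit
    repaired: bool

# Example (3) above: a phonological anticipation affecting a movement feature.
slip_3 = Slip(Category.ANTICIPATION, Unit.PHONOLOGICAL_FEATURE, repaired=True)
print(slip_3)
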

4.2.1. Slip categories

Paradigmatic slips
Paradigmatic slips happen within a linguistic paradigm, that is, in a class of similar
words that can fill the same slot in an utterance. Among paradigmatic slips, semantic
word substitutions are most frequent. They make up around 17 % of all slips in the
DGS corpus. In a semantic substitution, a sign is substituted for another semantically
similar sign. An example has already been given in (2). Similarly, in example (4), zug
(‘train’) substitutes for lkw (‘van’); the error and its repair are illustrated in
Figure 30.5.

(4) semantic substitution, word:


zug // (laughs) lkw komm [DGS]
train // (laughs) van come
‘The van comes.’

A semantic substitution happens during the stage of lexical selection, when the mental
lexicon is accessed for the first time and the lemma of the word is retrieved (see
Figure 30.1 above). Semantic substitutions are evidence that our mental lexicon is orga-
nized semantically (see section 3), with words with similar meaning organized in a
common semantic field and competing with each other during lexical access. If a com-
petitor receives higher activation than the target word, it will be erroneously selected

Fig. 30.5: A semantic word substitution in DGS: zug (‘train’) // (laughing) repair: lkw (‘van’) (from Leuninger et al. 2004, 225)

and a semantic substitution occurs. Since only those candidates that fit into the same
syntagmatic slot within the utterance compete, the syntactic category of the error word
is preserved: nouns substitute for nouns, verbs for verbs, and adjectives for adjectives.
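
This competition account can be made concrete with a toy simulation (our construction; the activation values and the noise level are arbitrary):

# Noisy competitive selection within a semantic field: the momentarily
# most activated candidate is selected, so noise occasionally lets a
# competitor win -- a semantic substitution, like zug for lkw in (4) above.
import random

field = {"lkw": 1.0, "zug": 0.9, "bus": 0.85}  # target 'lkw' has highest activation

def select(activations, noise=0.05):
    noisy = {w: a + random.gauss(0, noise) for w, a in activations.items()}
    return max(noisy, key=noisy.get)

random.seed(1)
outcomes = [select(field) for _ in range(10000)]
print(outcomes.count("lkw") / len(outcomes))  # the target usually wins
print(outcomes.count("zug") / len(outcomes))  # occasionally a competitor substitutes
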
In a semantic substitution, the whole target word is substituted. In contrast, in a word
blend, the substitution is only partial and parts of the two competing words, the target
and the error, fuse in a compromise form, the blend. Word blends in DGS are quite
common (10 % of the DGS corpus).
In (5), a word blend is exemplified. The signer wants to sign ‘wedding couple’. In
DGS, two quasi-synonyms, heirat (‘marriage’, see Figure 30.6b) and hochzeit (‘wed-
ding’, see Figure 30.6c), can equally well express the respective lexical concept. Both
signs are two-handed. The signer erroneously blends the sign hochzeit (on the domi-
nant hand) with the sign heirat (on the non-dominant hand), yielding the compromise
form depicted in Figure 30.6a (from Hohenberger et al. 2002, 126).

(5) semantic word blend:


heirat-hochzeit // heirat paar [DGS]
marriage-wedding // marriage couple
‘wedding couple’

Word blends are two-step errors. In the first step, at the level of lexical selection, two
word candidates (lemmas) are activated (this makes the error ‘paradigmatic’). The two
lemmas are not just taken from the same semantic field, but, furthermore, they are
equally appropriate for expressing a given lexical concept. Unlike substitutions, both
lemma candidates are selected and both word forms are activated simultaneously. In
the second step, the two word forms are fused (this makes the error ‘syntagmatic’). In
spoken word blends, a serialization of the two word forms has to be achieved; for
instance, if an error similar to (5) had happened in English, the result might have been
‘warriage’ or ‘medding’. In example (5), however, parts of both signs are distributed
over the two hands, which results in a truly simultaneous blend. Word blends, and
sign language blends like (5) in particular, support the ‘cascading’ model of language
production (section 2). They show that there is parallel processing, indeed, not only of
lemmas but also of word forms.

Fig. 30.6: A word blend in DGS: a blend of the signs (b) heirat (‘marriage’, correct) and (c) hochzeit (‘wedding’, correct) results in sign (a), the blend heirat-hochzeit (‘marriage-wedding’)

Contextual/syntagmatic slips
Contextual, or syntagmatic, slips happen when a planning unit occurs in the wrong
place in the utterance. They are called ‘contextual’ or ‘syntagmatic’ because the error
source is somewhere in the syntagmatic context of the target unit. Defective serializa-
tion is a bigger threat to the language production system than wrong selection. There-
fore, in slip corpora, syntagmatic slips are usually more frequent than paradigmatic
slips. In the DGS corpus, around 62 % of all slips are contextual. Example (3) is such
a contextual slip, namely a phonological anticipation. In (6), we present another type
of contextual error, a word perseveration. In a perseveration, a previously produced
unit is re-used at a later point in the utterance, instead of the target unit. Here, the
second occurrence of the sign ball (‘ball’) perseverates from the first, correct occur-
rence (Leuninger et al. 2004, 227).

(6) contextual error: perseveration (context: A boy is looking for his lost shoe)
    puppe, kasperl, ball, bär klein, alles-reinwerfen.  [DGS]
    doll   clown    ball  bear little everything-throw.into
       hh          hh
    b(all) //      schuh, nichts.
    ball           shoe   nothing
    ‘The doll, the clown, the ball, the little bear, everything has been thrown into
    (the box). But the ball// the shoe? Nothing!’

Figure 30.7 illustrates the relevant part of example (6). Note that the perseverated sign
ball (‘ball’) is not fully articulated. Only the onset of the syllable ⫺ the static hand
configuration ⫺ is visible, but no movement occurs (left picture). Also note that the
non-manual facial expression that conveys the sentence type, that is, raised eyebrows
marking a yes/no-question, is fully expressed. After a moment of hesitation, during
which the sign is frozen and the signer realizes his lexical error, he corrects it to schuh
(‘shoe’) (right picture).
Contextual word errors like this perseveration occur when phonologically specified
words are aligned with empty slots in the syntagmatic string that has been built up

Fig. 30.7: A word perseveration in DGS (Leuninger et al. 2004, 227)

concurrently from the retrieved lemmas. Verbs play a decisive role here since they
project slots for their nominal arguments. Occasionally, word forms are aligned with
wrong slots.
Morphological contextual errors are rare in the DGS corpus (3.5 %). Example (7)
illustrates the perseveration of a bound handling classifier on a verb (adapted from
Leuninger et al. 2004, 243); erroneous and target verb are illustrated in Figure 30.8.

(7) perseveration of a handling classifier [DGS]


schüssel aufschlagen-clEi milch stellen-clEi → stellen-clFlasche
bowl crack-clegg milk put-clegg put-clbottle
‘(The woman) cracks an egg into the bowl and places the milk bottle next to it.’

Fig. 30.8: Perseveration of a handling classifier in DGS (Leuninger et al. 2004, 243)

Following the sign milch (‘milk’), the signer should have used the handling classifier
for ‘bottle’ (:-hand) with the verb stellen (‘to put’; right picture in Figure 30.8).
However, he perseverates the handling classifier for ‘egg’ (clawed X-hand; left picture
in Figure 30.8). Since handling morphemes are expressed by handshapes, the slip is
potentially ambiguous between a phonological and a morphological interpretation. It
has been classified as morphemic because it creates the contrast between stellen-clEi
and stellen-clFlasche. Contextual morpheme errors like this perseveration take place
at the level of morphological encoding (as represented in Figure 30.1). The scarcity of
such errors in DGS may be due to the fact that they result in an ungrammatical utter-
ance, with the classifier failing to agree with the antecedent noun. Such ungrammatical
utterances are probably monitored at a pre-articulatory stage and thus are prevented
from surfacing. Apart from handling classifiers, other bound morphemes such as mouth
gestures and facial expressions, and also free morphemes such as size and shape specifi-
ers can be affected in slips of the hand (cf. Leuninger et al. 2004).
Contextual errors that affect adjacent signs are fusions. In a fusion, parts of both
signs are fused into a single word frame, that is, some phonological specifications stem
from the first, some from the second sign. Remember that a fusion is also the second
step in a word blend, as in example (5) above, or a phrasal blend, as in example (1)
above. Fusions presumably happen at a late planning stage, at the level of phonetic
encoding in Figure 30.1, when the syllabic structure is accessed. Fusions comprise 8 %
of the DGS corpus. For a detailed analysis of fusions in sign language, see Leuninger,
Hohenberger, and Waleschkowski (2007).

4.2.2. Slip units

Planning units of various sizes can be affected by a slip: phrase, word, morpheme,
segment, phonological feature. The most commonly affected units are phonological
units. Whereas the most common phonological errors in spoken language concern sin-
gle segments (phonemes), in sign language, phonological slips mostly concern phono-
logical features (that is, members of a phonological feature class: handshape, orienta-
tion, movement, and place of articulation). In fact, in the DGS corpus, phonological
features are involved in 41 % of all slips. Most errors are contextual (with an equal
number of anticipations and perseverations) and affect handshape. In contrast, mor-
pheme errors are rare (6 %). The largest category of errors is constituted by word
errors (50 %), with phrasal errors being very rare (1 %).
The frequency of two kinds of slip units, namely lexical and phonological units, has
also been described for the two ASL corpora mentioned above. Due to methodological
differences between the three corpora, only phonological slips can be compared (Kel-
ler/Hohenberger/Leuninger 2003). The most significant finding of this comparison is
that handshape is the phonological parameter most frequently affected (40⫺65 % of
slips in all corpora) (Knapp/Corina 2006). Place of articulation, movement, hand orien-
tation, hand configuration, and various other units are also affected but less frequently
(7⫺32 %). Three interrelated reasons may explain this fact. Firstly, handshapes are
discrete and cannot be underspecified in the way the other phonological parameters can. Planning
handshapes requires fully specified, discrete, phonological representations which may
therefore be more error-prone than less specified and less discrete ones. The discrete-
ness of handshapes (in contrast to the gradedness of location) has been demonstrated
independently in empirical studies of categorical perception (Emmorey 2002). Sec-
ondly, the handshape inventory is quite large (DGS, for example, has 32 distinctive
handshapes) which increases the likelihood of a mis-selection. Thirdly, the motor repre-
sentations for handshapes underlying their phonological representation must also be
quite specific and therefore competition may be high. In sum, the available cross-
linguistic comparisons for phonological slips indicate a uniform production process at
the level of phonology, with handshape as the most prominent and most error-prone
feature class.

4.3. Comparison of slips of the hand and tongue

Concurrently with the corpus of slips of the hand (n = 640), a corpus of elicited slips
of the tongue in spoken German was created (n = 944) (Leuninger et al. 2004). This
allowed us to compare the frequency distributions for slip categories and affected units
of the two corpora. Slip categories reveal processes of language production (selection,
serialization), whereas slip units reveal the information packaging, that is, which chunks
of linguistic information are processed.
Few differences were found between slip categories in German and DGS. Qualita-
tively, all slip categories identified for spoken language were also attested in the sign
language corpus. Quantitatively, the frequency distributions for most categories were
strikingly similar, too. Only two categories differed significantly: blends and fusions.
There were many more blends in German than in DGS (20 % vs. 10 %). This asymme-
try is the result of the striking absence of phrasal blends in DGS as compared to
German (1 % vs. 16 %). There were also a number of fusions in DGS while there was
not a single instance in German (8 % vs. 0 %). This difference can be related to the
overall fusional-simultaneous character of sign languages. Here, ‘fusional’ refers to the
tendency of adjacent linguistic units, like morphemes and words, to be blended into a
single prosodic frame, e.g. a phonological word; ‘simultaneous’ refers to the fact that
members of the different phonological feature classes ⫺ handshape, place of articula-
tion, and movement ⫺ temporally co-occur. Fusion is a general and ubiquitous phono-
logical process in sign language that can be observed in the entire morpho-phonological
domain (especially in compounding; see chapter 5 on word formation). As argued
above, fusion (in regular grammatical as well as in erroneous slip processes) is a phe-
nomenon that occurs towards the end of the linguistic encoding process. Fusions are,
therefore, not directly relevant for the core planning processes. Yet, they are highly
informative with respect to the different output constraints on the production process.
In this sense, monosyllabicity (Brentari 1998), i.e. the tendency of sign languages to
form one major prosodic chunk, the phonological word, strongly constrains the output
of sign language production (Hohenberger 2008).
In contrast to slip categories, a comparison of slip units in German and DGS revealed
many differences, both qualitative and quantitative. Qualitatively, the phono-
logical units that correspond to segments (phonemes) in spoken languages are phono-
logical features in sign languages. Therefore, rather than segmental errors, featural
errors are found. Quantitatively, phrasal errors (almost exclusively found in phrasal
blends) rarely occur in DGS but are frequent in spoken language (see above). Simi-
larly, morphological errors were much more frequent in German than in DGS (18 %
vs. 6 %). The reason for both the absence of phrasal errors and the low number of
morpheme errors lies in the simultaneous, non-concatenative character of sign lan-
guage. Manual and non-manual articulators can convey information simultaneously
to a much higher degree than in spoken language, and on all levels: phonological,
morphological, and syntactic. In spoken language, morphemes and words are predomi-
nantly concatenated into larger units. This difference will be subject to further discus-
sion in section 6 on modality differences.

5. Monitoring

Monitoring is an integral part of the language production system. The monitor super-
vises the language production process and, if necessary, can interrupt an utterance that
is found to be inappropriate or faulty. In the Levelt model, monitoring mainly proceeds
via the speech comprehension system (not shown in the current model, but see Levelt
(1989)). This type of monitor is called the ‘perceptual loop monitor’ (see Postma (2000)
for a comprehensive review of language monitoring models). However, even before
articulation, some of the internal processing outputs can be monitored. As can be seen
in Figure 30.1, the monitor has access to products of the speech planning process start-
ing from the phonological word (Levelt/Roelofs/Meyer 1999). At the stage of the pho-
nological word, speech is still internal and monitoring proceeds via an ‘internal feed-
back loop’. At the stage of articulation, speech is overt and monitoring proceeds via
an ‘external feedback loop’.
In the case of a language production error, various steps in the monitoring process
can be distinguished. Example (3) illustrates a full monitoring cycle. As has been dis-
cussed in section 4.1, in this example, the signer realizes (Figure 30.4c) that she has
produced an error (Figure 30.4b). The utterance is interrupted as soon as possible
(Figure 30.4d), followed by linguistic and non-linguistic editing expressions comment-
ing on the error (Figures 30.4d⫺f). Non-linguistic editing expressions include laughter,
head-shake, limb and body movements, etc.; linguistic editing expressions directly or
indirectly comment on the defective utterance. In this example, the signer explicitly
diagnoses the error as a vergebärdler (‘slip of the hand’) (Figure 30.4f). A repair is
initiated (Figure 30.4g), and the utterance is conveyed correctly (Figures 30.4g⫺j).
Various aspects of monitoring can be measured, among them the overall repair rate
and the ‘locus of repair’. In the DGS corpus, 48 % of all slips of the hand were repaired,
in the German corpus 51 % (see Table 30.1 below). Both values lie within the
range characteristic of such corpora (Leuninger et al. 2004). It is interesting to explore
the distribution of repairs according to the locus of repair, that is, the point at which
the utterance is interrupted after the error has occurred (Levelt 1989). We distin-
guished five categories with respect to the main unit, the word: (i) before word, (ii)
within word, (iii) after word, (iv) delayed, and (v) other. Category (i) was specifically
introduced for DGS since a substantial number of repairs were observed before the
error had actually been produced. The possibility of observing such early repairs is a
feature specific to sign language production, since the articulators are fully visible, in
contrast to the mostly hidden vocal tract in spoken language. As can be seen in Ta-
ble 30.1, this modality-specific characteristic affects the distribution of the repair locus.
As is evident from Table 30.1, most repairs in both corpora occur within or right
after the error word. However, the distribution is biased towards early detection in
DGS and towards late detection in German, given the structural reference point of the

Tab. 30.1: Locus of repair for language production errors in German and DGS (cf. Leuninger et
al. 2004, 260)

Locus of repair           Spoken German        DGS
                          N        %           N        %
Before word               –        –           40       12.9
Within word               227      47           122      39.4
After word                144      29.9        125      40.3
Delayed                   93       19.3        23       7.4
Other                     18       3.7         –        –
Σ slip repairs            482      100         310      100
Ratio of repairs/slips    482/944  51          310/640  48.4

word. In the German corpus, no repairs occur before the word is articulated, and
repairs are more often delayed, occurring some time after articulation of the error. As
for the sign language monitor, the average length of the erroneous sign (478 ms) was shorter
than that of the average sign (572 ms), while the repair sign was longer (611 ms) (Leu-
ninger et al. 2004). This confirms the result of the structural analysis above ⫺ that sign
language monitoring and repair takes place before the end of the erroneous sign.
Data from sign language monitoring offers a unique opportunity to test whether
monitoring is structure-sensitive or -insensitive, that is, if it respects lexical, morpholog-
ical, or prosodic boundaries. Based on speech monitoring, this is hard to tell since the
prosodic units ⫺ syllables ⫺ are very short (about 250 ms) and the monitor is likely
to encounter a syllabic boundary at the cut-off point. Therefore, it may be a coinci-
dence that a structural boundary is preserved. Sign language with its comparably long
monosyllabic words provides a unique opportunity to test these alternatives. If the
monitor is truly structure-sensitive, then it should await the completion of the sign
language syllable/word, even if it takes longer to complete it than in spoken language.
If, however, monitoring is structure-insensitive, no such consideration is necessary and
the monitor will just cut off the sign as soon as possible. Our results clearly confirm
this latter alternative (see, for instance, example (6), in which the incorrect sign ball
(‘ball’) is cut off even before the movement starts). If monitoring is universal, the same
process can also be assumed for spoken language. In this way, the study of sign lan-
guage production may contribute important insights to the general monitoring process.
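
Stated as a decision rule, the two hypotheses differ only in where the cut-off is placed; the sketch below simply restates this contrast with the mean durations reported above:

# Cut-off placement under the two monitoring hypotheses: a structure-
# sensitive monitor waits for the syllable/sign boundary, whereas a
# structure-insensitive monitor interrupts as soon as the error is detected.
def cutoff_ms(error_detected_at, sign_duration, structure_sensitive):
    return sign_duration if structure_sensitive else error_detected_at

# DGS finding: erroneous signs are cut off at 478 ms on average, i.e.
# before the 572 ms mean sign duration -- the structure-insensitive outcome.
print(cutoff_ms(478, 572, structure_sensitive=False))  # 478
print(cutoff_ms(478, 572, structure_sensitive=True))   # 572
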
Furthermore, the results cast doubt on a role for an external feedback loop in sign
language monitoring. Note that the signer in example (3) did not look at her hands at
all while she signed the error but rather straight into the camera (Figure 30.4b), seeing
her hands only in the periphery of her visual field, if at all. Nevertheless, she recognizes
the slip and repairs it (Figures 30.4c⫺j) without monitoring her hands. We therefore
argue that the internal feedback loop is of greater importance in sign language, allow-
ing for quick internal monitoring (Leuninger et al. 2004), as indicated by the hand icon
inserted into the internal branch of the monitor loop in Figure 30.1. Emmorey (2005)
suggests that internal monitoring has a role even after signs have been overtly articu-
lated: signers generally monitor internal representations and not perceptual feedback.
Furthermore, she conjectures that the mechanisms through which self-monitoring pro-
ceeds may not universally coincide with the mechanism of perceiving the language
production of others. While in spoken language they coincide ⫺ speakers hear the
speech of others and themselves ⫺ in sign language, they diverge: signers predomi-
nantly monitor their own signing internally but perceive the signing of others visually.
Internal monitoring assesses internal linguistic representations at the phonological/
phonetic level in the Levelt model (Figure 30.1). Apart from visual feedback, what
other monitors may exist in sign language production? Postma (2000) discusses various
other monitors that are good candidates for use in sign language production as well:
(i) a buffer-articulation timing monitor that keeps track of the time course of the en-
coding of stored units from an articulatory output buffer into efferent motor com-
mands; (ii) an efference monitor that checks the efference copy of the articulation
against the efferent command; (iii) a proprioception monitor that informs the signer
about the spatial position of the articulators; and (iv) a taction monitor that feeds back
tactile information about contact between the articulators.
In summary, sign language monitoring is an important area of research that yields
evidence for modality-independent as well as modality-dependent aspects. In particu-
lar, it is argued that (i) the monitor in sign as well as in spoken language is structure-
insensitive, (ii) the relation between the perception of one’s own production and the
perception of the production of others may be different for sign and spoken language
in that (iii) signers predominantly rely on an internal (non-visual) feedback loop
whereas speakers rely more on the external (auditory) feedback loop. Differences be-
tween sign and spoken language monitoring in terms of interface conditions are dis-
cussed in the following section.

6. Interface conditions and modality

According to generative approaches to language theory, the output of syntactic and
morphological computation is delivered to the two language interfaces: phonetic form
(PF) and logical form (LF). The only condition a language has to fulfil is that its
representations must be accessible to these systems (Chomsky/Belletti/Rizzi 2002). Op-
timal interaction with the two interface components is what Chomsky calls “perfection
of language”. With respect to sign language, perfection is grounded in its simultaneous
phonology and morphology (Leuninger 2006). The visual system responsible for per-
ception of signed utterances is superior in simultaneous pattern recognition whereas
the auditory system is superior in sequential recognition. When it comes to production,
the high degree of simultaneous representations results in signed utterances of dura-
tions comparable to those of spoken utterances. Evidence for the efficiency of sign
language parsing comes from monitoring processes (section 5). Self-repaired slips of
the hand constitute another data class confirming the early identification of signs. In
Leuninger et al. (2004), loci of restarts in repairs were analysed for German and DGS.
The distribution showed a clear asymmetry: whereas in German, no repair loci before
the word could be identified, in DGS such early repair loci were observed in 12.9 %
of all repaired slips (see Table 30.1). In (8), we provide one example illustrating the
category ‘repair before word’.

Fig. 30.9: A repair before the word in DGS: (mutter) – vater (Leuninger et al. 2004, 261)

(8) (mutter) − vater [DGS]


‘(mother) father’

The intended sign is vater (‘father’). This error shows the signer starting with a lax
@-handshape, an anticipation of the handshape of the sign for ‘mother’ (Figure 30.9,
left picture). During the transitional movement towards the place of articulation of
vater (the forehead), he corrects the handshape into the correct [-handshape for vater
(right picture). Such early repairs are not observed in German (although they may
occur; see Levelt (1989)). The repair is classified as ‘before word’ because the produc-
tion of mutter, a two-syllable sign with the syllable structure [Hold-Movement; Hold-
Movement], does not reach the first Hold. The transitional movement itself is not part
of the lexical representation of mutter, but serves only to reach the place of articula-
tion where the sign starts. The other main loci of repair in DGS are within and after
the word, whereas delayed repairs occur far more frequently in German. This is ex-
pected since spoken words require less production time than signs. In combination
with repairs on transitional movements, as shown in Figure 30.9, this asymmetry can
only be explained by modality factors.
The different frequency of editing expressions in repairs may also reflect modality
factors. According to Levelt (1989), editing expressions like ‘er’ (English) or ‘äh’ (Ger-
man) signal to the hearer that the utterance was erroneous. In the German slip corpus,
15 % of repairs included editing expressions whereas only approximately 5 % of the
repairs in DGS were marked by editing expressions.
Example (2) above illustrates a repair without editing expressions in DGS. This slip
is interesting with respect to its timing: The error, the first repair attempt, and the
repair span only three syllables. The first syllable comprises the onset of vater (‘fa-
ther’), the repair route via the handshape change, and a shortened movement of toch-
ter (‘daughter’); the reduplicated (bisyllabic) intended sign bub (‘boy’) comprises the
other two syllables.
More than 80 % of spoken editing expressions were of the type “äh”, followed by
“nein” (‘no’) with a frequency of 14 %. The latter corresponds to the non-manual
marking of an erroneous expression by a headshake, which was the most frequent
editing expression in DGS (60 %). Other editing expressions in DGS included signs
734 VI. Psycholinguistics and neurolinguistics

like entschuldigung (‘sorry’), raised eyebrows accompanying the erroneous sign, and
pauses filled by freezing the erroneous sign (see example (6) above). The rarity of
editing expressions in DGS is likely to represent a modality difference.
What can be concluded from comparative research on slips of the hand and tongue
with respect to the impact of modality on language processing and its consequences
for language interfaces (PF, LF)? The qualitative finding that all slip categories and
affected units previously reported for spoken language are also found in sign languages
is strong evidence for an amodal language processor. The quantitative finding that the
affected slip units show characteristic differences, however, hints at modality differen-
ces which arise from differences in information packaging in sign and spoken language.
In sign languages, information is conveyed in fewer, relatively information-rich chunks,
resulting from the higher degree of simultaneity in phonology, morphology, and syntax;
in spoken languages, information is conveyed in many, relatively information-lean
chunks, resulting from the higher degree of concatenation in phonology, morphology,
and syntax. The difference can be described in terms of different dimensions of proc-
essing: ‘vertical’ (stacked representations) for sign languages and ‘horizontal’ (serial-
ized) for spoken languages.
Focusing on the word as the central processing unit in language, the canonical word
form for sign languages in general is polymorphemic and monosyllabic, whereas it can
be mono- or polymorphemic and mono- or polysyllabic in typologically different spo-
ken languages (Brentari 1998; Hohenberger 2008). When looking at single signs, the
production of the manual components generally takes longer than the production of
single spoken words. Yet measured in propositions, signed utterances are produced as
fast as spoken utterances (Hohenberger et al. 2002; Leuninger et al. 2004). Addition-
ally, the disadvantage in production speed in sign language is compensated for by
the special design of phonology and morphology, both in manual and non-manual
components. This design guarantees an easy computation at the productive PF-inter-
face. With respect to perception, the simultaneous input to the interface is optimally
adapted to the capacities of visual processing with its superiority in pattern recognition.
Consequently, the asymmetrical distribution of loci of repairs is predictable based on
the higher speech rate in spoken language compared to the lower production rate in
sign language. In combination with repairs on transitional movements, as in (8), and the
rare occurrence of editing expressions as in (3), this asymmetry can only be explained by
modality factors.
To summarize, modality-related processing differences between sign and spoken
languages are reflected in different yet equally successful adaptations to the interface
conditions imposed on the language, specifically, PF conditions. Insofar as languages
in either modality conform to these conditions, they can be called ‘perfect’; it is the
occurrence of ‘imperfect’ language slips that leads us to this conclusion.

7. Relation of sign language production studies to other psycholinguistic and neurolinguistic research
With the rapid advance of neuroscience methodologies, such as fMRI, PET, ERP, TMS,
etc., the field of psycholinguistics has developed in new ways. Such methods are begin-
ning to provide ways of monitoring brain activity during the production and perception
of linguistic violations.
The findings presented in this chapter are also closely related to other areas in
psycholinguistics. In relation to sign language acquisition, to date no studies have inves-
tigated slips of the hand produced by children acquiring a sign language as a first
language (see Jaeger (2005) for a comparable study of children’s slips in spoken lan-
guage). However, the organization of the mental lexicon is related to language acquisi-
tion insofar as children gradually organize their lexicon in terms of meaning and form.
Children also, at some point in time, start to produce slips. It would be interesting to
see whether children acquiring a sign language start to produce slips of the hand and
repairs at around the same time as children acquiring a spoken language. Children’s
slips of the hands would provide information about the emergence of structure of the
mental lexicon and corresponding knowledge representations in phonology, morphol-
ogy, and syntax.
Language processing comprises both language comprehension/perception and pro-
duction. Discussions about the link between the two areas have recently been revived
in light of the discovery of mirror neurons (Rizzolatti/Craighero 2004), in particular in
relation to the hypothesized evolution of language (signed and spoken) from action
representations through the mirror neuron system (see chapter 23 for discussion). In
this respect, Emmorey (2005) has pointed out an important characteristic of sign lan-
guage, namely the visibility of the articulators, which yields a more direct correspond-
ence between perception and production as compared to spoken languages. Future
research is likely to further develop models in which production and perception are
integrated and which apply across modalities.

8. Literature

Baars, Bernard J.
1992 A Dozen Competing Plans: Techniques for Inducing Predictable Slips in Speech and
Action. In: Baars, Bernard J. (ed.), Experimental Slips and Human Error: Exploring the
Architecture of Volition. New York: Plenum Press, 129⫺150.
Badecker, William/Miozzo, Michele/Zanuttini, Raffaella
1995 The Two-stage Model of Lexical Retrieval: Evidence from a Case of Anomia with
Selective Preservation of Grammatical Gender. In: Cognition 57, 193⫺216.
Battison, Robbin M.
1987 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brown, Roger/McNeill, David
1966 The “Tip-of-the-Tongue” Phenomenon. In: Journal of Verbal Learning and Verbal Be-
havior 5, 325⫺337.
Chomsky, Noam/Belletti, Adriana/Rizzi, Luigi
2002 On Nature and Language. Cambridge: Cambridge University Press.
Dell, Gary S.
1986 A Spreading-activation Theory of Retrieval in Sentence Production. In: Psychological
Review 93(3), 283⫺321.
Dell, Gary S./Reich, Peter A.
1981 Stages in Sentence Production: An Analysis of Speech Error Data. In: Journal of Verbal
Learning and Verbal Behavior 20, 611⫺629.
Emmorey, Karen
1991 Repetition Priming with Aspect and Agreement Morphology in American Sign Lan-
guage. In: Journal of Psycholinguistic Research 20, 365⫺388.
Emmorey, Karen
2002 Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum.
Emmorey, Karen
2005 Signing for Viewing: Some Relations Between the Production and Comprehension of
Sign Language. In: Cutler, Anne (ed.), Twenty-first Century Psycholinguistics: Four Cor-
nerstones. Mahwah, NJ: Lawrence Erlbaum, 293⫺309.
Emmorey, Karen/Corina, David
1990 Lexical Recognition in Sign Language: Effects of Phonetic Structure and Morphology.
In: Perceptual and Motor Skills 71, 1227⫺1252.
Emmorey, Karen/Mehta, Sonya/Grabowski, Thomas J.
2007 The Neural Correlates of Sign Versus Word Production. In: Neuroimage 36, 202⫺208.
Fodor, Jerry A.
1983 The Modularity of Mind. Cambridge, MA: MIT Press.
Fromkin, Victoria A. (ed.)
1973 Speech Errors as Linguistic Evidence. The Hague: Mouton.
Fromkin, Victoria A. (ed.)
1980 Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen, and Hand. New York:
Academic Press.
Garrett, Merrill F.
1975 The Analysis of Sentence Production. In: Wales, Roger/Walker, Edward (eds.), New
Approaches to Language Mechanisms. Amsterdam: North Holland Publishing Com-
pany, 231⫺256.
Grosjean, Francois
1980 Spoken Word Recognition Processes and the Gating Paradigm. In: Perception and Psy-
chophysics 28, 267⫺283.
Hohenberger, Annette
2007 The Possible Range of Variation Between Sign Languages: Universal Grammar, Modal-
ity, and Typological Aspects. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.),
Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de
Gruyter, 341⫺383.
Hohenberger, Annette
2008 The Word in Sign Language: Empirical Evidence and Theoretical Controversies. In:
Linguistics 46(2), 249⫺308.
Hohenberger, Annette/Happ, Daniela/Leuninger, Helen
2002 Modality-dependent Aspects of Sign Language Production: Evidence from Slips of the
Hands and Their Repairs in German Sign Language. In: Meier, Richard P./Cormier,
Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Lan-
guages. Cambridge: Cambridge University Press, 112⫺142.
Hohenberger, Annette/Waleschkowski, Eva-Maria
2005 Language Production Errors as Evidence for Language Production Processes: The
Frankfurt Corpora. In: Kepser, Stephan/Reis, Marga (eds.), Linguistic Evidence. Empiri-
cal, Theoretical and Computational Perspectives. Berlin: Mouton de Gruyter, 285⫺305.
Jaeger, Jeri J.
2005 Kids’ Slips: What Young Children’s Slips of the Tongue Reveal About Language Acquisi-
tion. Mahwah, NJ: Lawrence Erlbaum.
Jescheniak, Jörg
1999 Accessing Words in Speaking: Models, Simulations, and Data. In: Klabunde, Ralf/von
Stutterheim, Christiane (eds.), Representations and Processes in Language Production.
Wiesbaden: Deutscher Universitäts Verlag, 237⫺257.
Keller, Jörg/Hohenberger, Annette/Leuninger, Helen
2003 Sign Language Production: Slips of the Hand and Their Repairs in German Sign Lan-
guage. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-lin-
guistic Perspectives in Sign Language Research. Selected Papers from TISLR 2000. Ham-
burg: Signum, 307⫺333.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Knapp, Heather P./Corina, David P.
2006 Sign Language: Psycholinguistics. In: Brown, Keith (ed.), Encyclopedia of Language
and Linguistics. Amsterdam: Elsevier, 343⫺349.
Krauss, Robert M./Chen, Yihsiu/Gottesman, Rebecca F.
2000 Lexical Gestures and Lexical Access: A Process Model. In: McNeill, David (ed.), Lan-
guage and Gesture. Cambridge: Cambridge University Press, 261⫺283.
Leuninger, Helen
2006 Sign Languages: Representation, Processing, and Interface Conditions. In: Lleó, Conx-
ita (ed.), Interfaces in Multilingualism. Acquisition and Representation. Amsterdam:
Benjamins, 231⫺260.
Leuninger, Helen/Hohenberger, Annette/Waleschkowski, Eva-Maria/Happ, Daniela/Menges, Elke
2004 The Impact of Modality on Language Production: Evidence from Slips of the Tongue
and Hand. In: Pechmann, Thomas/Habel, Christopher (eds.), Multidisciplinary Ap-
proaches to Language Production. Berlin: Mouton de Gruyter, 219⫺277.
Leuninger, Helen/Hohenberger, Annette/Waleschkowski, Eva-Maria
2007 Sign Language: Typology vs. Modality. In: Schütze, Carson T./Ferreira, Victor S. (eds.),
The State of the Art in Speech Error Research. Proceedings of the LSA Institute Work-
shop (Special Issue of MIT Working Papers in Linguistics 53). Cambridge, MA:
MITWPL, 317⫺345.
Levelt, Willem J. M.
1989 Speaking. From Intention to Articulation. Cambridge, MA: MIT Press.
Levelt, Willem J. M./Roelofs, Ardi/Meyer, Antje S.
1999 A Theory of Lexical Access in Speech Production. In: Behavioral and Brain Sciences
22, 1⫺75.
Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.)
2002 Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge Uni-
versity Press.
Meringer, Rudolf/Mayer, Karl
1895 Versprechen und Verlesen: Eine Psychologisch-linguistische Studie. Stuttgart: Göschense
Verlagsbuchhandlung. [Reprinted as: Cutler, Anne/Fay, David (eds.) (1978), Classics in
Psycholinguistics (Vol. 2). Amsterdam: Benjamins]
Newkirk, Don/Klima, Edward S./Pedersen, Carlene C./Bellugi, Ursula
1980 Linguistic Evidence from Slips of the Hand. In: Fromkin, Victoria A. (ed.), Errors in
Linguistic Performance: Slips of the Tongue, Ear, Pen, and Hand. New York: Academic
Press, 165⫺197.
Pechmann, Thomas/Zerbst, Dieter
2004 Syntactic Constraints on Lexical Selection in Language Production. In: Pechmann,
Thomas/Habel, Christopher (eds.), Multidisciplinary Approaches to Language Produc-
tion. Berlin: Mouton de Gruyter, 278⫺301.
Peterson, Robert R./Savoy, Pamela
1998 Lexical Selection and Phonological Encoding During Language Production: Evidence
for Cascaded Processing. In: Journal of Experimental Psychology: Learning, Memory,
and Cognition 24, 539⫺557.
Pfau, Roland
2009 Grammar as Processor: A Distributed Morphology Account of Spontaneous Speech Er-
rors. Amsterdam: Benjamins.
Pinker, Steven/Prince, Alan
1992 Regular and Irregular Morphology and the Psychological Status of Rules of Grammar.
In: Sutton, Laurel A./Johnson, Christopher/Shields, Ruth (eds.), Proceedings of the
17th Annual Meeting of the Berkeley Linguistics Society. Berkeley, CA, 230⫺251.
Postma, Albert
2000 Detection of Errors During Speech Production: A Review of Speech Monitoring
Models. In: Cognition 77, 97⫺131.
Rizzolatti, Giacomo/Craighero, Laila
2004 The Mirror-neuron System. In: Annual Review of Neuroscience 27, 169⫺192.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Stemberger, Joseph P.
1985 The Lexicon in a Model of Language Production. New York: Garland.
Stokoe, William C.
1960 Sign Language Structure: An Outline of the Visual Communication System of the
American Deaf. In: Studies in Linguistics Occasional Papers 8. Buffalo: University of
Buffalo Press. [Re-issued 2005, Journal of Deaf Studies and Deaf Education 10(1), 3⫺
37]
Stokoe, William C.
1991 Semantic Phonology. In: Sign Language Studies 71, 99⫺106.
Thompson, Robin/Emmorey, Karen/Gollan, Tamar H.
2005 “Tip of the Fingers” Experiences by Deaf Signers. In: Psychological Science 16, 856⫺
860.
Whittemore, Gregory L.
1987 The Production of ASL Signs. PhD Dissertation, University of Texas at Austin. Ann
Arbor, MI: University Microfilms International.
Yang, Charles D.
2002 Knowledge and Learning in Natural Language. Oxford: Oxford University Press.

Annette Hohenberger, Ankara (Turkey)


Helen Leuninger, Frankfurt/Main (Germany)

31. Neurolinguistics
1. Hemisphere specialization
2. Sign language aphasia
3. Neuroimaging studies
4. Role of the cerebellum
5. Morphometric studies
6. Conclusion
7. Literature

Abstract
Sign languages of the deaf are naturally-evolving linguistic systems exhibiting the full
range of linguistic complexity found in speech. Neurolinguistic studies have helped iden-
tify brain regions that are critical for sign language and have documented the dissolution
of sign language in cases of sign aphasia. Recent findings from work in neuroimaging
and electrophysiology have confirmed and extended our understanding of the intricacies
of the neural systems underlying sign language use. Taken together, these studies provide
a privileged avenue for understanding the generality of the cognitive constraints evident
in language processing and the biological basis for human language. In this section, we
discuss how studies of sign language aphasia have informed the question of hemispheric
specialization for human languages. We explore how this characterization has evolved
since the advent of neuroimaging studies of brain intact signers and how an appreciation
of multiple networks involved in human languages has developed. Finally, we examine
how these sign language data accord with emerging models of spoken language function.

1. Hemisphere specialization
Our understanding of the neural representation of human language has been greatly
enriched by the consideration of sign languages of the deaf. Outwardly, this language
form poses an interesting challenge for theories of cognitive and linguistic neural spe-
cialization, which classically have regarded the left hemisphere as being specialized for
linguistic processing, and the right hemisphere as being specialized for visual-spatial
abilities. Given the importance of putatively visual-spatial properties of sign forms
(e.g., movement trajectories and paths through 3-dimensional space, facial expressions,
memory for abstract spatial locations, assessments of location and orientation of the
hands relative to the body, etc.), one might expect a greater reliance of right hemi-
sphere resources during sign language processing. However, despite major differences
in the modalities of expression, striking parallels in the psycholinguistic and cognitive
processing of these languages emerge once we acknowledge the structural homologies
of spoken and sign language forms (see Corina/Knapp (2008) for a recent review). The
commonalities in function suggest a uniformity in the neural systems that mediate sign
and spoken language processing.

This commonality has been largely confirmed through studies of deaf signers who
have incurred brain injury and clinical procedures requiring assessment of eloquent
language cortex (see chapter 32, Atypical Signing). At the same time, advances in our
understanding of the linguistic complexity and properties of sign languages, which pos-
sess avenues of expression that are qualitatively different from spoken language, raise
new questions with respect to neural mechanisms underlying these different forms of
human communication.
Case studies of deaf signing individuals with acquired brain damage and neuroimag-
ing studies of healthy deaf subjects have provided confirming evidence for the impor-
tance of left hemisphere systems in the mediation of sign language and the similarity
of core left hemisphere regions in the mediation of sign and spoken languages. Deaf
signers, like hearing speakers, exhibit language disturbances when left-hemisphere cor-
tical regions are damaged (e.g., Hickok/Love-Geffen/Klima 2002; Marshall et al. 2004;
Poizner/Klima/Bellugi 1987; for a review, see Corina 1998a,b). In addition, there is
good evidence that within the left hemisphere, cerebral organization in deaf signers
follows the familiar anterior/posterior dichotomy for language production and compre-
hension that we see in spoken language users. Figure 31.1 shows the left hemisphere, indicating the lobes of the cerebrum and the two areas most important for language (Broca's area, the anterior area related to language production, and Wernicke's area, the posterior area related to language comprehension).

Fig. 31.1: Left hemisphere of the brain

Right hemisphere damage, while disrupting visual-spatial abilities (including some involved in sign processing), nevertheless does not produce sign aphasia. Recently, this traditional picture of hemispheric specialization for sign language has been underscored by the report of an extremely rare but compelling case of reversed dominance in a user of sign language (Pickell et al. 2005).

2. Sign language aphasia

2.1. Sign language production

In spoken language aphasia, chronic language production impairments are typically associated with anterior left-hemisphere lesions that involve the lower posterior
portion of the left frontal lobe, for instance, Broca’s area (see Figure 31.1). These
lesions often extend in depth to the periventricular white matter (e.g., Mohr et al.
1978; Goodglass 1993). The anterior insula has also been implicated in chronic speech
production problems (Dronkers/Redfern/Knight 2000). In the context of understanding
prefrontal contributions to sign language (see chapter 32, Atypical Signing), a pertinent
example of left-hemisphere language mediation is that of patient G.D., reported in
Poizner, Klima, and Bellugi (1987). G.D. is a deaf signer with a large lesion in a left,
anterior frontal region encompassing BA 44/45, who presented with non-fluent, aphasic
signing with intact sign comprehension. As with hearing Broca’s aphasics, this signer’s
comprehension of others' language productions was undisturbed by her lesion; it was on par with that of control subjects at both the single sign and the sentence level. That this deficit was not simply motoric in nature is indicated by the fact that it was exhibited on both her motorically affected limb and her non-motorically affected (i.e., ipsilesional) limb.

2.2. Sign paraphasia

Sign language breakdown following left hemisphere damage is not haphazard, but af-
fects independently motivated linguistic categories. This observation provides support
for viewing aphasia as a unique and specific cognitive deficit rather than as a subtype
of a more general motor or symbolic deficit. An example of the systematicity in sign
and spoken language breakdown is illustrated through consideration of paraphasia
errors (Corina 2000). In the spoken language domain, the substitution of an unex-
pected word for an intended target is known as verbal paraphasia. Most verbal para-
phasias have a clear semantic relationship to the desired word and represent the same
part of speech, hence, they are referred to as “semantic paraphasias” (Goodglass 1993).
In contrast, phonemic or “literal” paraphasia refers to the production of unintended
sounds or syllables in the utterance of a partially recognizable word (Blumstein 1973;
Goodglass 1993). Phonemic sound substitution may result in another real word, re-
lated in sound but not in meaning (e.g., telephone becomes television). There are also
cases in which the erroneous word shares both sound characteristics and meaning with
the target (broom becomes brush) (Goodglass 1993). Several reports of signing para-
phasia can be found in the sign aphasia literature (Poizner/Klima/Bellugi 1987; Brentari/Poizner/Kegl 1995; Corina/Vaid/Bellugi 1992).
Further clues to within hemisphere specialization for sign language production
come from rare clinical cases of cortical stimulation mapping (CSM) performed in
awake neurosurgical patients for the treatment of epilepsy. During the language map-
ping portion of the procedure, a subject is required to name pictures or read written
words. Disruption of the ability to perform the task during stimulation is taken as
evidence of cortical regions integral to the language task (Stemmer/Whitaker 1998).
Corina et al. (1999) report the effects of cortical stimulation on sign language production in a deaf individual (patient S.T.) undergoing an awake CSM procedure. The patient was asked to sign the names of line drawings. All elicited signs were one-handed, and the
subject signed each with his left hand. As this subject was undergoing left hemisphere
surgery, language disruption as a result of cortical stimulation could not be attributed
to a primary motor deficit.
Stimulation to two anatomical sites led to consistent naming disruption. One of
these sites, an isolated frontal opercular site, corresponds to the posterior aspect of
Broca’s area, BA 44. A second site, located in the parietal opercular region, also re-
sulted in robust object-naming errors. This parietal area corresponds to the supramar-
ginal gyrus (SMG, BA 40). Importantly, the nature of these errors was qualitatively
different. Stimulation of Broca’s area resulted in errors involving the motor execution
of signs. These errors are characterized by a lax articulation of the intended sign with
nonspecific movements (repeated tapping or rubbing) and a reduction in handshape
configurations to a lax closed fist handshape. Interestingly, there was no effort on the
part of S.T. to self-correct these imperfect forms. In addition, such errors were observed
during trials of sign and non-sign repetition. These results are consistent with the charac-
terization of the posterior portion of Broca’s area as participating in the motoric execu-
tion of complex articulatory forms, especially those underlying the phonetic level of
language structure.
The sign errors observed with stimulation of the SMG are qualitatively different.
With stimulation to this site, S.T. produced both formational and semantic errors in
this picture-naming task. Formational errors are characterized by repeated attempts to
distinctly articulate the intended targets, and successive formational approximations of
the correct sign were common. For example, the American Sign Language (ASL) sign
peanut is normally signed with a closed fist and outstretched thumb with a movement
comprised of an outward wrist rotation (the thumb flicking off the front of the teeth).
Under stimulation, this sign began as an incorrect, but clearly articulated 1-handshape
(closed fist with a protruding bent index finger) articulated at the correct location, but
with an incorrect inward rotation movement. In two successive attempts to correct this
error, the subject first corrected the handshape, and then went on to correct the move-
ment as well. Notably, we do not find the lax and reduced articulations characteristic
of signing under conditions of stimulation to Broca’s area. Instead, as these examples
illustrate, under stimulation to the SMG, the subject’s signing exhibits problems involv-
ing the selection of the individual components of sign forms (that is, handshape, move-
ment, and, to a lesser extent, location). Adding to the specificity of these errors is the
observation that in contrast to sign-naming, sign and non-sign repetition were unaf-
fected with stimulation to the SMG.
Semantic errors were also observed with stimulation of the SMG, and the form of
these errors is particularly noteworthy. Specifically, all of these errors involve semantic
substitutions that are formationally quite similar to the intended targets. For example,
the stimulus picture “pig” elicited the sign farm; the stimulus picture “bed” was signed
as sleep; and the stimulus picture “horse” was signed as cow. In ASL, these semantic
errors contain considerable formational overlap with their intended targets. For exam-
ple, the signs pig and farm differ in movement, but share an identical articulatory
location (the chin) and both are made with similar handshapes; the signs bed and sleep
share handshapes and are both articulated about the face; finally, the signs cow and
horse differ only in handshape.
In summary, these findings suggest that stimulation to Broca’s area has a global
effect on the motor output of signing, whereas stimulation to a parietal opercular site,
the SMG, disrupts the correct selection of the linguistic components (including both
phonological and semantic elements) required in the service of naming. The prepon-
derance of formationally motivated semantic errors raises further questions regarding
the coupling of semantic and phonological properties in sign languages whereby some
sign forms may be historically influenced by iconic properties of their referents.

2.3. Sign language comprehension

Spoken language comprehension deficits are well attested after left-hemisphere tempo-
ral lobe damage (Naeser et al. 1987; Benson/Ardila 1996). For example, Wernicke’s
aphasia, which is expressed as impaired language comprehension accompanied by
fluent, but often paraphasic (semantic and phonemic) output, is often associated with
damage to the posterior regions of the left superior temporal gyrus (see Figure 31.1).
More recent work has suggested the contribution of the posterior middle temporal
gyrus in cases of chronic Wernicke’s aphasia (Dronkers/Redfern/Ludy 1995; Dronkers/
Redfern/Knight 2000).
Signers with left-hemisphere posterior lesions also evidence fluent sign aphasia with
associated comprehension deficits. There is, however, controversy in regards to the
degree of anatomical overlap observed in comprehension problems in spoken and sign
languages. In addition, questions have been raised about the status of forms and func-
tional mechanisms involved in some linguistic constructions observed in sign languages.
The fact that emergent linguistic forms in sign languages capitalize upon the affordan-
ces and constraints of the visual system rather than the auditory system leaves open
the possibility that there may be divergences and subsequent specializations of neural
systems that underlie sign and spoken language.
With respect to anatomical overlap, the relative contribution of posterior temporal
regions versus inferior parietal regions has been called into question. For example, a
group study presented by Hickok, Love-Geffen, and Klima (2002) compares the sign
language comprehension abilities of left- and right-hemisphere damaged signers. Sign-
ers with left-hemisphere posterior temporal lobe damage were found to perform worse
than any other group, exhibiting significant impairments on both single sign and sentence comprehension, as assessed by an ASL translation of the Token Test (De Renzi/Vignolo
1962). While the authors emphasize the involvement of the damaged temporal lobe in
these comprehension deficits, lesions additionally extended into the parietal lobe in all
cases. It is noteworthy that the cases described by Chiarello, Knight, and Mandel (1982)
and Poizner, Klima, and Bellugi (1987) and the case study described by Corina, Vaid, and Bellugi (1992) exhibited fluent aphasia with severe comprehension deficits. Lesions in these case studies did not occur in cortical Wernicke’s area proper, but rather involved more frontal and inferior parietal areas. In these cases, lesions ex-
tended posteriorly to the supramarginal gyrus. This is interesting, as lesions associated
with the supramarginal gyrus alone in users of spoken language do not typically result
in severe speech comprehension deficits. These observations have led some to suggest
that sign language comprehension may be more dependent than speech on left-hemi-
sphere inferior parietal areas, that is, regions associated with somatosensory and visual
motor integration (Leischner 1943; Chiarello/Knight/Mandel 1982; Poizner/Klima/Bel-
lugi 1987; Corina 1998a,b) while spoken language comprehension might weigh more
heavily on posterior temporal lobe association regions whose input includes networks
intimately involved with auditory speech processing. As will be discussed, neuroimag-
ing work further points to modality conditioned effects of neural implementation of
sign languages (Newman et al. 2002; Emmorey et al. 2002; Emmorey/Mehta/Grabowski
2007; MacSweeney et al. 2006). A modality influenced view of language comprehension
is contrasted with an amodal account whereby the posterior temporal lobe is seen
as a unifying computational hub in language understanding independent of language
modality (see Hickok/Love-Geffen/Klima (2002) for some discussion).
A second issue calls into question the similarity of the functional mechanisms re-
quired for comprehension of sign language and spoken language. The issue here is that
sign languages routinely capitalize upon the postural properties of the body and the
manual articulators, as well as the spatial affordances of the visual system, to convey
complex meanings including grammatical roles (subject-object), prepositional meaning,
locative relations, and speaker view-point in ways that may not have direct parallels in
spoken languages. For example, certain classes of sign forms have been described as
depictive. That is, some verbs have, in addition to their usual function as verbs, the
ability to depict the event they encode (Liddell 2003). As described by Dudis (2004,
2007), the contrast between a non-depicting agreement verb such as give and a
depicting verb such as hand-to illustrates some of these property differences. The
handshape and movement in the verb give are not conditioned by the object being
given (though the direction of the movement may be conditioned by grammatical struc-
ture). In contrast, the verb hand-to can be used to describe only the transfer of objects
that can be held between the thumb and the four fingers of the handshape ⫺ a paper
document or credit card, but certainly not a kitchen blender. This is one way that
the verb’s iconicity constrains its usage. Additionally, the palm’s continuously upward
orientation and the path of the hand created via the elbow emulate the physical motion
of the transfer event. Dudis further suggests that it is not solely the morphophonologi-
cal differences in the handshape that differentiate these forms, but rather the verb’s
ability to portray a dynamic and visual representation of transfer, which is a demonstra-
tion rather than plain description. One way the verb can be used is akin to a re-enact-
ment by an actor, but with just the signer’s upper body used to create the only visible
part of the depiction, the “giver” (Dudis 2007).
In a similar fashion, the conveyance of spatial prepositional relations between ob-
jects such as “on”, “above”, “under”, etc. can be conveyed via the depiction of the
physical relation itself rather than encoded by a discrete lexical item. For example, an
ASL translation of the English sentence “The pen rests on a book” may, in part, in-
volve the use of the two hands whereby one hand configuration with an outstretched
finger (representing the pen) is placed on the back of a flat open hand (representing
the book). This configuration encodes the spatial meaning “on”, but without the need
for a lexical preposition whose conventional form is “on” (see chapter 19, Use of Sign
Space, for further discussion).
Many sign languages express locative relationships and events in this manner and
have discrete inventories of highly productive grammatical forms, traditionally referred
to as classifiers or classifier predicates, which participate in these depictive construc-
tions (see chapter 8, Classifiers, for details). Uncontroversially, the use of these forms
constitutes a major contrastive linguistic function in sign languages (see Emmorey
(2003) and papers therein); their theoretical status remains a point of vibrant discussion
and debate (for contrastive views, see Liddell 1995, 2000; Sandler/Lillo-Martin 2006).
This controversy, however, has important implications for our understanding of the neurolinguistics of sign languages, as several studies have found differential disruptions in
the use and comprehension of sentences that involve usage of “depictive” forms. The
broader point is whether aphasic deficits should be solely defined as those that have
clear homologies to the left hemisphere maladies that are evidenced in spoken lan-
guages, or whether the existence of sign languages will force us to reconsider the con-
ception of linguistic deficits.

2.4. Right hemisphere damage: discourse and classifiers

Studies of deaf signers with right hemisphere lesions present a complementary picture;
these individuals often exhibit visual-spatial deficits with an absence of aphasia (Poiz-
ner/Klima/Bellugi 1987). Thus, this profile is similar to that observed in hearing non-
signers.
Typically, signers with damage to the right hemisphere are reported as having well-
preserved language skills. However, as is the case with hearing individuals, right hemi-
sphere damage in signers may disrupt the meta-control of language use and result in
disruptions of discourse abilities (Brownell et al. 1990; Kaplan et al. 1990; Rehak et
al. 1992).
While left hemisphere damage commonly results in disturbances of syntactic proc-
essing of ASL, signers with right hemisphere damage have, unexpectedly, also exhibited
problems of this nature. Subjects S. M. and G. G. (right-hemisphere damaged subjects
tested by Poizner/Klima/Bellugi 1987) performed well below controls on two tests of
spatial syntax. Indeed, as the authors point out, “right lesioned signers do not show
comprehension deficits in any linguistic test, other than that of spatialized syntax.”
Poizner, Klima, and Bellugi (1987) speculated that the perceptual processing in-
volved in the comprehension of spatialized syntax involves both left and right hemi-
spheres; certain critical areas of both hemispheres must be relatively intact for accurate
performance. The syntactic comprehension deficits found in right and left hemisphere
damaged subjects raise an interesting theoretical question: are these deficits aphasic in
nature, or are they secondary impairments arising from a general cognitive deficit in
spatial processing? Further work is required to tease apart these complicated theoreti-
cal questions.
In summary, aphasia studies provide ample evidence for the importance of the left
hemisphere in mediation of sign language in the deaf. Following left hemisphere dam-
age, sign language performance breaks down in a linguistically significant manner. In
addition, there is growing evidence for the role of the right hemisphere in aspects of
ASL discourse, classifier use, and syntactic comprehension. Descriptions of sign lan-
guage structure have been useful in illuminating the nature of aphasia breakdown as
well as raising new questions concerning the hemispheric specificity of linguistic proc-
essing.

3. Neuroimaging studies

3.1. Early findings

Historically, early functional imaging studies in deaf signers paralleled studies of sign aphasia in emphasizing the substantial similarity between neural regions medi-
ating sign and spoken languages. Later studies noted both similarities and differences
and began to explore the subtle and not so subtle differences in neural activation. In
these efforts researchers began to address “sign specific” topics and examine domains
that appear to underlie differences in sign and spoken languages, such as the use of
spatial locatives, or classifier use. In addition, researchers began to explore how the
transmission modality of sign language factors into the observed patterns of neural
activation by exploring contributions of human movement, faces, and non-linguistic
actions. Additionally, recent morphometric studies have begun to evaluate the anatom-
ical differences evident in the brains of congenitally deaf individuals.
Early functional imaging studies of sign languages focused on describing general
patterns of neural activation, especially with respect to classically defined speech re-
gions such as left-hemisphere inferior frontal gyrus and temporal auditory association
regions (Söderfeldt et al. 1997; McGuire et al. 1997; Petitto et al. 2000; Nishimura et
al. 1999; Neville et al. 1998; Bavelier et al. 1998). For example, Petitto et al. (2000) and
McGuire et al. (1997) reported significant activation in left inferior frontal regions such
as Broca’s area (BA 44 and 45; see Figure 31.1) and anterior insula during tasks of
sign generation (either physically produced or covert execution). A number of re-
searchers have confirmed the activation of BA 45 in the production of sign and spoken
language (Braun et al. 2001; Corina et al. 2003; Emmorey/Mehta/Grabowski 2007; Kas-
subek/Hickok/Erhard 2004). Additional work using more refined cytoarchitectonic ref-
erences has indicated that BA 45 is activated during both sign and speech, and can be
differentiated from complex oral/laryngeal and limb movements which result in more
ventral activation of BA 44. This interpretation accords well with the cortical stimula-
tion finding reported in Corina et al. (1999). This finding implicates BA 45 as the part
of Broca’s area that is fundamental to the modality-independent aspects of language
generation (Horwitz et al. 2003).
Petitto et al. (2000) and Nishimura et al. (1999) reported significant activation in
left superior temporal regions often associated with auditory processing in response to
single signs. MacSweeney et al. (2002a) validated the activation of auditory association
areas in response to signing through comparisons of deaf and hearing native signers.
Their findings revealed greater activation in temporal auditory association regions dur-
ing the perception of signed sentences in deaf signers. These findings are taken as
evidence of cross-modal plasticity whereby auditory association cortex may be modi-
fied in the absence of auditory input. Left hemisphere auditory association activation
observed from sign language stimuli is presumed to reflect aspects of linguistic proc-
essing, consistent with findings from aphasia. However, at this time the true functional
significance of these activations is not well understood, as additional studies have re-
ported activation of primary auditory cortex in response to low-level visual stimuli in
deaf individuals (Fine et al. 2005; Finney et al. 2003).

3.2. Sign language production

Studies of sign language production reveal further commonalities in the neural systems
underlying core properties of language function in sign and speech (also see chapter 30).
In an analysis of sign and word naming studies in deaf signers and hearing non-signers,
Emmorey, Mehta, and Grabowski (2007) reported common overlap in neural activa-
tion for single-sign and word production. In this analysis, stimuli included pictures of animals, manipulable tools, and other objects, and naming was contrasted with a sensori-motor task in which participants were asked to indicate whether faces were presented upright or upside down. This large-scale analysis, which collapsed naming responses over several object categories relative to the low-level sensori-motor visual-spatial task, identified regions supporting modality-independent lexical access. Common regions implicated included
the left mesial temporal cortex and the left inferior frontal gyrus. Emmorey et al.
(2007) suggest that left temporal regions reflect conceptually driven lexical access (In-
defrey/Levelt 2004). For both speakers and signers, activation within the left inferior
temporal gyrus may reflect prelexical conceptual processing of the pictures to be
named, while activation within the more mesial temporal regions may reflect lemma
selection, prior to phonological code retrieval. These results argue for a modality-
independent fronto-temporal network that subserves both sign and word production
(Emmorey et al. 2004). A third common region within the occipital cortex is attributed
to non-language task specific demands required during picture naming which likely
include visual attention and visual search processes.
Differences in activated regions in speakers and signers were also observed. Within
the left parietal lobe, two regions were more active for sign than for speech: the supra-
marginal gyrus (SMG) and the superior parietal lobule (SPL). Emmorey et al. (2007) specu-
lated that these regions may be linked to modality-specific output parameters of sign
language. Specifically, activation within left SMG may reflect aspects of phonological
processing in ASL (e.g., selection of hand configuration and place of articulation fea-
tures), whereas activation within SPL may reflect proprioceptive monitoring of motoric
output. The characterization of SMG based on imaging accords well with the stimulation data reported in Corina et al. (1999).
In two studies of ASL verb generation, San José-Robertson et al. (2004) reported
left-lateralized activation within perisylvian frontal and subcortical regions commonly
observed in spoken language generation tasks. In an extension of this work, Corina et
al. (2005) reported that the observed left-lateralized patterns were not significantly
different when the production of a repeat-generate task was conducted with a signer’s
dominant versus the signer’s non-dominant hand. This finding is consistent with studies
of sign aphasic errors which may be observed on the patient’s non-dominant hand
following left hemisphere insult.
A study of discourse production in ASL-English native bilinguals further under-
scores the similarities between speech and sign (Braun et al. 2001). In this study, spon-
taneous generation of autobiographical narratives in ASL and English revealed com-
plementary progression from early stages of concept formation and lexical access to
later stages of phonological encoding and articulation. This progression proceeds from
bilateral to left lateralized representations, with posterior regions ⫺ especially poster-
ior cingulate, precuneus, and basal-ventral temporal regions ⫺ activated during en-
coding of semantic information (Braun et al. 2001).

3.3. Sentence comprehension

Studies of sentence processing in sign languages have repeatedly reported left-hemisphere activations that parallel those found for spoken languages. These activation
patterns include inferior frontal gyrus (including Broca’s area and insula), precentral
sulcus, superior and middle temporal cortical regions, the posterior superior temporal sulcus (STS), the angular gyrus (AG), and the SMG
(e.g., MacSweeney et al. 2006; Lambertz et al. 2005; Sakai 2005; MacSweeney et al.
2002a; Newman et al. 2002; Petitto et al. 2000; Neville et al. 1998). Activations in these
regions are often found in studies of auditory language processing (Davis/Johnsrude
2003; Schlosser et al. 1998; Blumstein 1994). Thus the majority of the early functional
imaging studies in sign language confirmed the importance of the left hemisphere in
sign processing and emphasized the similarity between effects seen for sign and spo-
ken languages.

3.4. Right hemisphere activation

In addition to the more familiar left hemisphere activations, studies of sentence proc-
essing in sign language have also noted significant right hemisphere activation. For
example, activations in right hemisphere superior temporal, inferior frontal, and pos-
terior parietal regions have been reported (e.g., MacSweeney et al. 2006; MacSweeney
et al. 2002a; Newman et al. 2002; Neville et al. 1998). The question of whether these
patterns of activation are unique to sign has been the topic of much debate (see, for
example, Corina et al. 1998; Hickok et al. 1998), as studies of auditory and audiovisual
speech have observed right hemisphere activations that appear similar to those re-
ported in signing (e.g., Davis/Johnsrude 2003; Schlosser et al. 1998). Mounting evidence
suggests that right-hemisphere lateral superior temporal activations may not be sign-
specific. Capek et al. (2004) showed that for monolingual speakers of English, audiovis-
ual English sentence processing elicited not only left-dominant activation in language
regions, but also bilateral activation in the anterior and middle lateral sulcus. Previous
studies have shown that right hemisphere superior temporal regions are sensitive to
facial, and especially mouth, information (Puce et al. 1998; Pelphrey et al. 2005). It is
well known that deaf signers focus attention on mouth regions while attending to signs
(Muir/Richardson 2005). These studies suggest that right hemisphere lateral temporal
activation patterns are not sign-specific but are likely driven by general processing
requirements of physical human forms.
In contrast, posterior-parietal and posterior-temporal regions may play a special
role in the mediation of sign languages (Newman et al. 2002; Bavelier et al. 1998). In
these studies, deaf and hearing native signers, hearing non-signers, and hearing late
learners of ASL viewed sign language sentences contrasted with sign gibberish. Deaf
and hearing native signers showed significant activation in right hemisphere posterior-
parietal and posterior-temporal regions including a homologue of the posterior tempo-
ral Wernicke’s area. These activation patterns were not seen in non-signers, nor were
they observed in hearing late learners of sign language. A group analysis of native and
late learning hearing signers confirmed that the right angular gyrus was found to be
active only when hearing native users of ASL performed the task. When hearing sign-
ers who learned to sign after puberty performed the same task, the right angular gyrus
failed to be recruited. Newman et al. (2002) argued that the activation of this neural
region during sign language perception may be a neural ‘signature’ of sign being ac-
quired during the critical period for language. Many researchers have speculated that
right hemisphere parietal activation in signers is associated with the linguistic use of
space (Newman et al. 2002; Bavelier et al. 1998) and recent studies have sought to
clarify the contributions of spatial processing in right hemisphere activations.
MacSweeney et al. (2002b) examined the role of parietal cortices in an anomaly
detection task in British Sign Language (BSL). They tested deaf and hearing native
signers in a paradigm that utilized BSL sentence contexts that either made use of
topographic signing space or did not require topographic mapping. Across both groups, comprehension of topographic BSL sentences recruited left parietal regions (BA 40 and SPL-BA 7) and bilateral posterior middle temporal cortices to a greater extent than
did non-topographic sentences. Activation during anomaly detection in the context of
topographic sentences was maximal for left hemisphere inferior parietal lobule (IPL)
in skilled signers. MacSweeney et al. (2002b) suggest these activation patterns may be
related to requirements for spatial processing of hands, as studies of non-signers have
observed left IPL activation in response to imagery of hand movements and hand
position (Gerardin et al. 2000; Kosslyn et al. 1998; Hermsdörfer et al. 2001). A second
left parietal region in these studies, BA 7, is suggested to reflect similar mechanisms
of hand or finger movement (e.g., Weeks et al. 2001). This region appears similar to
mesial activation of IPL reported in Emmorey et al. (2007). An alternative explanation
suggests that neural regions for action processing may have been more strongly engaged during the topographic sentences. Several researchers have observed SPL activation in re-
sponse to human action and action vocabulary more generally (Grezes/Decety 2001;
Damasio et al. 2001; Hauk/Johnsrude/Pulvermüller 2004; Pulvermüller/Shtyrov/Ilmoni-
emi 2005).
Similar to the findings reported by Newman et al. (2002), native deaf signers in the
MacSweeney et al. (2002b) study did show activation in the right angular gyrus (BA
39). Parietal activation in the hearing native signers, however, was modulated by accu-
racy on the task, with more accurate subjects showing greater activation. This finding
suggests that proficiency, rather than age of acquisition, may be a critical determinant
of right hemisphere engagement. Importantly, activation in right hemisphere temporal-
parietal regions was specific to BSL and was not observed in hearing non-signers
watching audiovisual English translations of the same sentences.
More work is needed to disentangle the factors that recruit these putatively sign-
specific regions. A PET study by Emmorey et al. (2002) is noteworthy in this regard.
In a production study using PET, Emmorey et al. (2002) required subjects to examine
line drawings of two spatially arrayed objects and produce either a classifier description
or a description of the spatial relationship using ASL lexical prepositions. This study
found evidence for right hemisphere SMG activation for both prepositional forms and
classifiers, compared to object naming; however, the direct comparison between classi-
fier constructions and lexical prepositions in sign revealed only left hemisphere inferior
parietal lobule activation. This implies that the right hemisphere activation must be
related to some common process, perhaps the spatial analysis of the stimulus to be
described, rather than a special spatial-linguistic property of ASL classifiers per se.
Kassubek, Hickok, and Erhard (2004) also reported significant right occipital-temporal
and superior parietal lobule activation in a covert ASL object naming task that was
not observed in covert spoken language object naming. Thus, accruing evidence indicates that greater right posterior-temporal and parietal activation may be observed in sign language production tasks than in complementary studies of spoken languages. A reasonable hypothesis is that aspects of right hemisphere poster-
ior activation may be engaged when external object properties and spatial object rela-
tions must be translated into body-centered manual representations (Corina et al.
1998).

3.5. Faces and action

An outsider’s observation of fluent signers leads to an appreciation that meaningful articulation encompasses not only the movements of the hands and arms, but body postures and the face as well. Facial expressions in sign languages have been termed dichotomous, with facial signals not only conveying the well-known universal emotional expressions (Ekman/Friesen 1975), but also serving discrete linguistic functions. This
characterization is likely too simplistic, as research continues to expand our understanding of the role of the face in communication. The complexities inherent in facial displays used during communication involve not only a signer’s current emotional state, but also the ability to represent the emotional states and attitudes of others, for instance in direct quotations. In addition, facial expression processing is required in lip reading and is well known to be an integral part of speech perception. Moreover, with growing awareness of sign languages other than ASL, research on facial expressions across sign languages indicates that the status of various facial behaviors shows language-specific variation (see, for example, Sutton-Spence/Woll 1999; Herrmann 2007).
Nevertheless, the multiplicity of facial behaviors has led to several investigations of
the neural substrates of facial information processing in deaf signers. Early studies
included reports of double dissociations in linguistic and emotional expression execu-
tion in left and right hemisphere damaged signers. In addition, productive asymmetries
and perceptual studies have indicated differential hemispheric processing in
deaf signers compared to hearing non-signers (Corina/Bellugi/Reilly 1999). Neuroimag-
ing studies of facial processing in hearing non-signing individuals identify early ventral-
temporal visual areas, such as the fusiform face area (FFA), which may be uniquely specified for facial recognition, while lateral temporal regions integrate social proper-
ties of facial displays. Studies report a role of bilateral STS in the detection of eye gaze
(Puce et al. 1998; Allison/Puce/McCarthy 2000). Recent work has indicated that STS
activity is modulated by the context within which eye-gaze shifts occur, suggesting that
this region is involved in social perception via its role in analysis of the intentions of
observed actions (Mosconi 2005; Materna/Dicke/Thier 2008a,b).
Recent fMRI data have provided new evidence of specialization for facial expression recognition in deaf signers (McCullough/Emmorey/Sereno 2005). Studies of deaf signers and hearing non-signers have reported bilateral superior temporal sulcus (STS) activation in response to viewing emotional facial expressions in deaf subjects, but right hemisphere dominance in hearing subjects. In addition, activation in FFA was left
lateralized for deaf signers for both emotional and linguistic forms of expression,
whereas activation was bilateral for both expression types in the hearing non-signing
subjects. It is interesting to note that group differences in STS activation between deaf
and hearing subjects in the McCullough, Emmorey, and Sereno (2005) study emerged
only when the linguistic expressions were presented within the context of linguistic
manual signs. This is consistent with the growing evidence that posterior STS regions
involved in facial processing may serve interpretative functions. This may be an indica-
tion that this neural region in the deaf is modulated by linguistic communicative intent.
In contrast, the activation in FFA, though hemispherically distinct from hearing sub-
jects, was not modulated by the contextual manual sign cues. This is consistent with
reports that indicate that FFA may serve more foundational roles in structural encod-
ing of face specific information (Kanwisher/McDermott/Chun 1997; Kanwisher/Yovel
2006). McCullough, Emmorey, and Sereno (2005) suggest that hemispheric differences
observed in these studies may reflect perceptual differences which invoke greater local
featural processing (as opposed to global-configural processing) in the deaf subjects.
A recent fMRI study by Capek et al. (2008) compared the processing of speechread-
ing and sign in deaf native signers of BSL. Subjects were presented with video clips
of either English words in a speechreading condition or with BSL signs, and performed
a target-detection task. Similarities in activation patterns for both speechreading Eng-
lish words and viewing BSL signs were seen in perisylvian regions, consistent with the
idea that language systems are engaged in viewing these items. Differences in activa-
tions, reflecting language form, were also seen. In the speechreading condition, greater
activation was produced in anterior and superior regions of the temporal cortices, in-
cluding left mid-superior temporal and lateral sulci, than was observed in the BSL
condition. BSL processing elicited greater activation at the temporo-parieto-occipital
junction bilaterally, including the posterior superior temporal sulcus and the middle
temporal motion area (MT). This pattern of activation for the sign condition may
indicate greater reliance on motion processing in sign language versus speech, which
is consistent with previous findings (MacSweeney et al. 2002a).
Capek et al. (2008) further studied these patterns, exploring the effects of signs
accompanied by different types of mouth action. When signs were accompanied by
speech-like mouth actions, greater superior temporal activation in both hemispheres
and activation in the left inferior frontal gyrus was observed. In contrast, when signs
were not accompanied by mouth actions (i.e., were manual-only), activation was
seen in the right posterior temporo-occipital boundary. This is consistent with previous
findings showing activation in this region due to manual movements (Pelphrey et al.
2005). In conditions where signs were accompanied by non-speech-like mouth move-
ments, activation was seen in posterior and inferior temporal regions, which more
closely resembled activation seen in the manual-only condition, rather than the speechreading or speech-like mouth movement conditions.
Taken together, these findings indicate that sign language and speechreading rely
on different parts of the language processing system ⫺ speech relying to a greater
extent on left inferior frontal and superior temporal areas, and sign language predomi-
nantly activating posterior superior, middle, and inferior temporal areas. Further, these
findings suggest differential cortical organization for language processing, depending
on which articulators are being perceived. Thus, within the domain of facial processing,
researchers continue to find that the neural representations for sign languages may lie
more posteriorly than spoken language, as previously indicated in studies of aphasia.

3.6. Human actions

Recent fMRI studies have also begun to explore dissociations of manual action percep-
tion and sign language processing (MacSweeney et al. 2004; Corina et al. 2007). Mac-
Sweeney et al. (2004) examined the contrast between the perception of BSL signs and
a set of non-linguistic gestures ⫺ racecourse bookmakers’ code, “Tic Tac” ⫺ in deaf and
hearing subjects. This study was designed to provide a close comparison between a
natural sign language (BSL) and a similar gesture display. The conventionalized Tic
Tac stimuli share many gestural and rhythmic qualities of natural sign languages, al-
though they do not comprise a linguistic system. Highly similar neural activation was
seen for BSL sentences and Tic Tac sentences in both signing participants and hearing
non-signers. Both groups displayed widespread bilateral posterior temporal-occipital,
STS, and inferior temporal gyrus and inferior frontal lobe activation.
In the signing subjects, the direct comparison between the BSL sentences relative
to the Tic Tac condition revealed a left-lateralized pattern of activation consistent with
prototypical language effects. This activation included left perisylvian language regions,
including temporal-ventral and inferior fusiform activations. In addition, sign stimuli
recruited dorsal parietal-frontal regions including the left SMG, where activation is
consistently observed in studies of sign comprehension. This inferior parietal involve-
ment may reflect unique properties of sign language processing (Corina et al. 1998b).
Conversely, the comparison of Tic Tac relative to BSL resulted in activation that was
focused in a right hemisphere posterior temporal/occipital region anterior to the extra-
striate body area (EBA). Given the desire to match complexity of movement and form
in these studies, the relatively sparse differences in activation between the BSL and Tic
Tac conditions suggest that deaf signers may have treated Tic Tac forms as linguistically
possible but non-occurring signs. Many spoken language researchers have reported comparable
neural activation during the processing of possible but non-occurring word forms com-
pared to real words (Mechelli/Gorno-Tempini/Price 2003; Heim et al. 2005).
A Positron Emission Tomography (PET) experiment furthers our understanding of
differences between the neurological processing of signs and naturalistic human actions
(Corina et al. 2007). This study sought to determine whether the focus and extent of
neural activity during passive viewing of naturalistic human actions is modulated as a
function of the type of action observed and the language experience of the viewer.
Deaf signers and hearing non-signers observed three classes of actions: self-oriented
(e.g., scratching one’s head), object-oriented (e.g., throwing a ball), and communicative
movements (ASL linguistic gestures). These stimulus forms were cast against a com-
mon complex baseline condition.
For hearing, sign language-naïve subjects, the passive viewing of these different
classes of actions, relative to a common baseline, produced a very similar pattern of
neural activity despite the inherent differences in the content of these actions. Primary
foci included regions previously identified as critical to a human action recognition
system: most notably, superior parietal (BA 40/7), ventral premotor (BA 6), and infe-
rior regions of the middle frontal gyrus (BA 46).
For deaf signers, a different pattern was apparent. While the neural responses to
self- and object-oriented actions showed a fair degree of similarity to one another, the
neural responses to ASL were quite different. Sign language viewing largely engen-
dered neural activity in frontal and posterior superior temporal language areas, includ-
ing left inferior frontal (BA 46/9) and superior temporal (BA 41) regions and the insula
(BA 13). Thus, in this study, as in the MacSweeney et al. (2004) study, contrasting
linguistic with non-linguistic actions reveals the participation of left-hemisphere peri-
sylvian and inferior frontal cortical regions in the perception of sign language. When
non-linguistic actions were contrasted with ASL, prominent activity was found bilater-
ally in middle occipital posterior visual association areas (BA 19/18), similar to findings
reported by MacSweeney et al. (2004). Additionally, bilateral activation was found in
ventral inferior temporal lobe (BA 20) and superior frontal (BA 10) regions. Prominent
right hemisphere activity also included anterior regions of middle and superior tempo-
ral gyrus.
Taken together, these neuroimaging data suggest that human action processing in
the deaf differentiates linguistic and non-linguistic actions, especially in cases of natu-
ralistic human actions. During the processing of signs, deaf signers show a greater
reliance on top-down processing in the recognition of linguistic human actions, leading
to more automatic and efficient visual processing of highly familiar linguistic featural
information. In contrast, non-linguistic gesture recognition may be driven by bottom-
up processing, in which preliminary visual analysis is crucial to interpretation of the
forms. Form-based processing of linguistic signs, by contrast, may be highly efficient and
require less neural effort, providing a more direct mapping to meaning.

4. Role of the cerebellum


While the cerebellum has traditionally been viewed as playing a prominent role in motor behaviors, growing research has revealed that it may also be important in cognitive
behaviors (Marien et al. 2001; Fiez 1996; Ivry/Baldo 1992; Ivry/Justus 2001). Activation
within the right lateral cerebellum, often in concert with left inferior frontal activation,
has been consistently observed in a wide range of language tasks including word read-
ing (Fiez/Petersen 1998), verb generation, semantic retrieval tasks (Petersen et al. 1998;
Klein et al. 1995; Klein et al. 1999; McCarthy et al. 1993), verbal memory, and stem
completion (Buckner et al. 1995). The functional significance of cerebellar activation
related to language processes, however, is not well understood.
Researchers have suggested that the right-lateralized cerebellar activation observed
in semantic retrieval and verbal short-term memory tasks may reflect the contribution
of the cerebellum to articulatory preparation and/or covert subvocal articulatory re-
hearsal (Ivry/Justus 2001; Desmond/Gabrieli/Glover 1998). Others have suggested a
less motoric role; a recent study has reported activation in the right lateral cerebellum
in a conjunction analysis comparing verb generation versus rest, and story listening
versus rest (Papathanassiou et al. 2000). The presence of this activation in both lan-
guage production and comprehension provides some evidence for a non-motoric inter-
pretation of right cerebellum activity during linguistic tasks. Others have postulated a
less speech-specific function for the right cerebellum; for example, Noppeney and Price
(2002) suggest that the right lateral cerebellum is part of the network involved in
semantic-executive systems required for the effortful and strategic retrieval of semantic
information. As this activation may be observed in differing response modes (i.e., ar-
ticulation or keypress), it provides additional evidence for a lack of dependence upon
speech-specific processes.
A limited number of studies have examined activation of the cerebellum in response
to sign tasks (Corina et al. 2003). Studies have shown right cerebellar activation in
response to verb generation in signing (San José-Robertson et al. 2004). In addition,
in the context of a verb generation paradigm under conditions in which signers make
exclusive use of their left hand to produce signs, right-lateralized cerebellar activation
is also observed. Adding to this literature, Gizewski et al. (2005) have reported bilateral
activation of cerebellar Crus I in response to passive viewing of narratives in German
Sign Language (DGS), in which no overt actions were required. Importantly, these
results indicate that this cerebellar involvement is not limited to speech-based articula-
tory factors (i.e., subvocal rehearsal or spoken word retrieval), but must be more ab-
stractly construed, representing processes that are independent of language modality.

5. Morphometric studies
A small number of morphometric studies have been conducted to examine whether
there are any frank anatomical differences in the brains of deaf and hearing subjects.
Two studies have reported reduced white matter in the left posterior superior temporal
gyrus adjacent to language cortex in deaf subjects, but no difference in grey matter
volume of temporal auditory and speech areas (Emmorey et al. 2003; Shibata 2007).
It is speculated that the reduced white matter volume may indicate a hypoplasia in the
development of specific tracts related to speech. The finding that auditory cortices
show no differences in grey matter volume has been taken as evidence for preserved functionality of these regions, perhaps in the form of cross-modal plasticity; indeed, studies have shown activation of auditory cortex in response to visual stimuli and visual language.
It is also interesting to note the differences in the Shibata (2007) study: deaf subjects
showed trends toward larger grey matter volumes in the superior frontal gyrus (BA 6),
thought to reflect differences related to the use of manual language in this right-handed
cohort of signers.
In a study examining morphology of the insula in deaf and signing populations,
Allen et al. (2008) report volumetric differences attributed to both auditory deprivation
and sign experience. Deaf subjects exhibited a significant increase in the amount of
grey matter in the left posterior insular lobule, which may be related to dependence
upon lip-reading and articulatory (rather than auditory-based) representation. In con-
trast to non-signers, both deaf and hearing signers exhibited increased volume in white
matter in the right insula. This latter difference was attributed to increased reliance on
cross-modal sensory integration in sign compared with spoken languages (Allen et
al. 2008).

6. Conclusion
The advent of cognitive neuroscience techniques, coupled with lesion-based studies, has begun to elucidate further complexities of the neural processing of human languages. New theoretical proposals that strive to move beyond the classic functional-anatomical Wernicke-Geschwind model are providing new explanatory power and making more explicit predictions for language function, breakdown, and recovery (see, for example, Hagoort 2005; Hickok/Poeppel 2007; Friederici/Alter 2004; Scott/Johnsrude 2003). Studies of the neural basis of sign languages provide critical data for adjudicating among and extending these competing proposals. The data
from sign languages reviewed here reveal several central issues in the development of
neurofunctional models of language. One clear indication is the need to acknowledge
the effects of language modality on neural representation of language.
As reviewed, there are strong indications that manual, facial, and body articulations
and the somatosensory and visuo-sensory feedback circuits involved in sign production and perception may recruit left hemisphere inferior and superior parietal cortical regions in ways that differ from the representations for spoken languages. In
addition to these modality effects, the studies of sign languages force consideration of
how language-specific structural properties of a linguistic system may alter or influence
cortical representation. For example, the reliance on visual-spatial mechanisms for the
expression and decoding of depictive forms in sign languages may require engagement
of unique right hemisphere resources in the case of signing. The interesting theoretical
question is whether one might expect language-specific deficits based upon unique
processing requirements of a given language.

Acknowledgements: This work was supported in part by grants NIH-NIDCD 2R01-DC030911 and NSF SBE-1041725 to Gallaudet University’s Science of Learning Center
on Visual Language and Visual Learning.

7. Literature

Allen, John S./Emmorey, Karen/Bruss, Joel/Damasio, Hanna
2008 Morphology of the Insula in Relation to Hearing Status and Sign Language Experience.
In: The Journal of Neuroscience 28(46), 11900⫺11905.
Allison, Truett/Puce, Aina/McCarthy, Greg
2000 Social Perception from Visual Cues: Role of the STS Region. In: Trends in Cognitive
Science 4(7), 267⫺278.
Bavelier, Daphne/Corina, David/Jezzard, Peter/Clark, Vince/Karni, Avi/Lalwani, Anil/Rausch-
ecker, Josef P./Braun, Allen/Turner, Robert/Neville, Helen J.
1998 Hemispheric Specialization for English and ASL: Left Invariance ⫺ Right Variability.
In: Neuroreport 9(7), 1537⫺1542.
Benson, D. Frank/Ardila, Alfredo
1996 Aphasia: A Clinical Perspective. New York: Oxford University Press.
Blumstein, Sheila E.
1973 A Phonological Investigation of Aphasic Speech. The Hague: Mouton.
Blumstein, Sheila E.
1994 Impairments of Speech Production and Speech Perception in Aphasia. In: Philosophical
Transactions of the Royal Society of London. Series B, Biological Sciences 346(1315),
29⫺36.
Braun, Allen R./Guillemin, Andre/Hosey, Lara/Varga, Mary
2001 The Neural Organization of Discourse: An H2 15O-PET Study of Narrative Production
in English and American Sign Language. In: Brain 124(10), 2028⫺2044.
Brentari, Diane/Poizner, Howard/Kegl, Judy
1995 Aphasic and Parkinsonian Signing: Differences in Phonological Disruption. In: Brain
and Language 48(1), 69⫺105.
Brownell, Hiram H./Simpson, Tracy L./Bihrle, Amy M./Potter, Heather H./Gardner, Howard
1990 Appreciation of Metaphoric Alternative Word Meanings by Left and Right Brain-dam-
aged Patients. In: Neuropsychologia 28(4), 375⫺383.
Buckner, Randy L./Petersen, Steven E./Ojemann, Jeffrey G./Miezin, Francis M./Squire, Larry R./
Raichle, Marcus E.
1995 Functional Anatomical Studies of Explicit and Implicit Memory Retrieval Tasks. In:
The Journal of Neuroscience 15(1), 12⫺29.
Capek, Cheryl M./Bavelier, Daphne/Corina, David/Newman, Aaron J./Jezzard, Peter/Neville,
Helen J.
2004 The Cortical Organization of Audio-visual Sentence Comprehension: An fMRI Study
at 4 Tesla. In: Brain Research: Cognitive Brain Research 20(2), 111⫺119.
Capek, Cheryl M./MacSweeney, Mairead/Woll, Bencie/Waters, Dafydd/McGuire, Philip K./David,
Anthony S./Brammer, Michael J./Campbell, Ruth
2008 Cortical Circuits for Silent Speechreading in Deaf and Hearing People. In: Neuropsy-
chologia 46(5), 1233⫺1241.
Chiarello, Christine/Knight, Robert/Mandel, Mark
1982 Aphasia in a Prelingually Deaf Woman. In: Brain 105(1), 29⫺51.
Corina, David P.
1998a Aphasia in Users of Signed Language. In: Coppens, Patrick/Lebrun, Yvan/Basso, Anna
(eds.), Aphasia in Atypical Populations. Mahwah, NJ: Lawrence Erlbaum, 261⫺309.
Corina, David P.
1998b The Processing of Sign Language: Evidence from Aphasia. In: Whitaker, Harry/Stem-
mer, Brigitte (eds.), Handbook of Neurology. San Diego, CA: Academic Press, 313⫺
329.
Corina, David P.
2000 Some Observations Regarding Paraphasia in American Sign Language. In: Emmorey,
Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor
Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 493⫺507.
Corina, David P./Bellugi, Ursula/Reilly, Judy S.
1999 Neuropsychological Studies of Linguistic and Affective Facial Expressions in Deaf Sign-
ers. In: Language and Speech 42, 307⫺331.
Corina, David/Chiu, Yi-Shiuan/Knapp, Heather/Greenwald, Ralf/San José-Robertson, Lucia/
Braun, Allen
2007 Neural Correlates of Human Action Observation in Hearing and Deaf Subjects. In:
Brain Research 1152, 111⫺129.
Corina, David P./Gibson, Erin K./Martin, Richard/Poliakov, Andrew/Brinkley, James/Ojemann,
George A.
2005 Dissociation of Action and Object Naming: Evidence from Cortical Stimulation Map-
ping. In: Human Brain Mapping 24(1), 1⫺10.
Corina, David P./Knapp, Heather
2008 Signed Language and Human Action Processing: Evidence for Functional Constraints
on the Human Mirror-neuron System. In: Annals of the New York Academy of Sciences
1145, 100⫺112.
Corina, David P./McBurney, Susan L./Dodrill, Carl/Hinshaw, Kevin/Brinkley, James/Ojemann,
George
1999 Functional Roles of Broca’s Area and SMG: Evidence from Cortical Stimulation Map-
ping in a Deaf Signer. In: Neuroimage 10(5), 570⫺581.
Corina, David P./Neville, Helen J./Bavelier, Daphne
1998 Response to Hickok, Bellugi, and Klima. In: Trends in Cognitive Sciences 2(12).
Corina, David P./San José-Robertson, Lucia/Guillemin, Andre/High, Julia/Braun, Allen R.
2003 Language Lateralization in a Bimanual Language. In: Journal of Cognitive Neuroscience
15(5), 718⫺730.
Corina, David P./Vaid, Jyotsna/Bellugi, Ursula
1992 The Linguistic Basis of Left Hemisphere Specialization. In: Science 255(5049), 1258⫺
1260.
Damasio, Hanna/Grabowski, Thomas J./Tranel, Daniel/Ponto, Laura L.B./Hichwa, Richard D./
Damasio, Antonio R.
2001 Neural Correlates of Naming Actions and of Naming Spatial Relations. In: Neuroimage
13(6 Pt. 1), 1053⫺1064.
Davis, Matthew H./Johnsrude, Ingrid S.
2003 Hierarchical Processing in Spoken Language Comprehension. In: Journal of Neurosci-
ence 23(8), 3423⫺3431.
De Renzi, Ennio/Vignolo, Luigi A.
1962 The Token Test: A Sensitive Test to Detect Receptive Disturbances in Aphasics. In:
Brain 85, 665⫺678.
Desmond, John E./Gabrieli, John D.E./Glover, Gary H.
1998 Dissociation of Frontal and Cerebellar Activity in a Cognitive Task: Evidence for a
Distinction Between Selection and Search. In: Neuroimage 7(4:1), 368⫺376.
Dronkers, Nina F./Redfern, Brenda B./Knight, Robert T.
2000 The Neural Architecture of Language Disorders. In: Gazzaniga, Michael S. (ed.), The
New Cognitive Neurosciences. Cambridge, MA: MIT Press, 949⫺958.
Dronkers, Nina F./Redfern, Brenda B./Ludy, Carl A.
1995 Lesion Localization in Chronic Wernicke’s Aphasia. In: Brain and Language 51(1),
62⫺65.
Dudis, Paul
2004 Body Partitioning and Real-space Blends. In: Cognitive Linguistics 15(2), 223⫺238.
Dudis, Paul
2007 Types of Depiction in ASL. Manuscript, Gallaudet University.
Ekman, Paul/Friesen, Wallace V.
1975 Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues. Englewood
Cliffs, NJ: Prentice Hall.
Emmorey, Karen (ed.)
2003 Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erl-
baum.
Emmorey, Karen/Damasio, Hanna/McCullough, Stephen/Grabowski, Thomas/Ponto, Laura L. B./
Hichwa, Richard D./Bellugi, Ursula
2002 Neural Systems Underlying Spatial Language in American Sign Language. In: Neuroim-
age 17(2), 812⫺824.
Emmorey, Karen/Grabowski, Thomas/McCullough, Stephen/Damasio, Hanna/Ponto, Laura L. B./
Hichwa, Richard D./Bellugi, Ursula
2004 Motor-iconicity of Sign Language Does not Alter the Neural Systems Underlying Tool
and Action Naming. In: Brain and Language 89(1), 27⫺37.
Emmorey, Karen/Mehta, Sonya/Grabowski, Thomas J.
2007 The Neural Correlates of Sign Versus Word Production. In: Neuroimage 36(1), 202⫺
208.
Fiez, Julie A.
1996 Cerebellar Contributions to Cognition. In: Neuron 16(1), 13⫺15.
Fiez, Julie A./Petersen, Steven E.
1998 Neuroimaging Studies of Word Reading. In: Proceedings of the National Academy of
Sciences of the United States of America 95(3), 914⫺921.
Fine, Ione/Finney, Eva M./Boynton, Geoffrey M./Dobkins, Karen R.
2005 Comparing the Effects of Auditory Deprivation and Sign Language Within the Audi-
tory and Visual Cortex. In: Journal of Cognitive Neuroscience 17(10), 1621⫺1637.
Finney, Eva M./Clementz, Brett A./Hickok, Gregory/Dobkins, Karen R.
2003 Visual Stimuli Activate Auditory Cortex in Deaf Subjects: Evidence from MEG. In:
Neuroreport 14(11), 1425⫺1427.
Friederici, Angela D./Alter, Kai
2004 Lateralization of Auditory Language Functions: A Dynamic Dual Pathway Model. In:
Brain and Language 89(2), 267⫺276.
Gerardin, Emmanuel/Sirigu, Angela/Lehéricy, Stéphane/Poline, Jean-Baptiste/Gaymard, Ber-
trand/Marsault, Claude/Agid, Yves/Le Bihan, Denis
2000 Partially Overlapping Neural Networks for Real and Imagined Hand Movements. In:
Cerebral Cortex 10(11), 1093⫺1104.
Gizewski, Elke R./Lambertz, Nicole/Ladd, Mark E./Timmann, Dagmar/Forsting, Michael
2005 Cerebellar Activation Patterns in Deaf Participants for Perception of Sign Language
and Written Text. In: Neuroreport 16(17), 1913⫺1917.
Goodglass, Harold
1993 Understanding Aphasia. San Diego, CA: Academic Press.
Grèzes, Julie/Decety, Jean
2001 Functional Anatomy of Execution, Mental Simulation, Observation, and Verb Genera-
tion of Actions: A Meta-analysis. In: Human Brain Mapping 12(1), 1⫺19.
Hagoort, Peter
2005 On Broca, Brain, and Binding: A New Framework. In: Trends in Cognitive Sciences
9(9), 416⫺423.
Hauk, Olaf/Johnsrude, Ingrid/Pulvermüller, Friedemann
2004 Somatotopic Representation of Action Words in Human Motor and Premotor Cortex.
In: Neuron 41(2), 301⫺307.
Heim, Stefan/Alter, Kai/Ischebeck, Anja/Amunts, Katrin/Eickhoff, Simon/Mohlberg, Hartmut/
Zilles, Karl/Cramon, Yves von/Friederici, Angela D.
2005 The Role of the Left Brodmann’s Areas 44 and 45 in Reading Words and Pseudowords.
In: Cognitive Brain Research 25, 982⫺993.
Hermsdorfer, Joachim/Goldenberg, Georg/Wachsmuth, C./Conrad, Bastian/Ceballos-Baumann,
Andres O./Bartenstein, Peter/Schwaiger, Markus/Boecker, Henning
2001 Cortical Correlates of Gesture Processing: Clues to the Cerebral Mechanisms Underly-
ing Apraxia During the Imitation of Meaningless Gestures. In: Neuroimage 14(1),
149⫺161.
Herrmann, Annika
2007 The Expression of Modal Meaning in DGS and ISL. In: Perniss, Pamela/Pfau, Roland/
Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language
Structure. Berlin: Mouton de Gruyter, 245⫺278.
Hickok, Gregory/Bellugi, Ursula/Klima, Edward S.
1998 What’s Right About the Neural Organization of Sign Language? A Perspective on
Recent Neuroimaging Results. In: Trends in Cognitive Sciences 2, 465⫺468.
Hickok, Gregory/Love-Geffen, Tracy/Klima, Edward S.
2002 Role of the Left Hemisphere in Sign Language Comprehension. In: Brain and Lan-
guage 82(2), 167⫺178.
Hickok, Gregory/Poeppel, David
2007 The Cortical Organization of Speech Processing. In: Nature Reviews Neuroscience 8(5),
393⫺402.
Horwitz, Barry/Amunts, Katrin/Bhattacharyya, Rajjan/Patkin, Debra/Jeffries, Keith/Zilles, Karl/
Braun, Allen R.
2003 Activation of Broca’s Area During the Production of Spoken and Signed Language: A
Combined Cytoarchitectonic Mapping and PET Analysis. In: Neuropsychologia 41(14),
1868⫺1876.
Indefrey, Peter/Levelt, Willem J. M.
2004 The Spatial and Temporal Signatures of Word Production Components. In: Cognition
92(1/2), 101⫺144.
Ivry, Richard B./Baldo, Juliana V.
1992 Is the Cerebellum Involved in Learning and Cognition? In: Current Opinion in Neuro-
biology 2(2), 212⫺216.
Ivry, Richard B./Justus, Timothy C.
2001 A Neural Instantiation of the Motor Theory of Speech Perception. In: Trends in Neuro-
sciences 24(9), 513⫺515.
Kanwisher, Nancy/McDermott, Josh/Chun, Marvin M.
1997 The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face
Perception. In: The Journal of Neuroscience 17(11), 4302⫺4311.
Kanwisher, Nancy/Yovel, Galit
2006 The Fusiform Face Area: A Cortical Region Specialized for the Perception of Faces.
In: Philosophical Transactions of the Royal Society of London. Series B, Biological
Sciences 361(1476), 2109⫺2128.
Kaplan, Joan A./Brownell, Hiram H./Jacobs, Janet R./Gardner, Howard
1990 The Effects of Right Hemisphere Damage on the Pragmatic Interpretation of Conver-
sational Remarks. In: Brain and Language 38(2), 315⫺333.
Kassubek, Jan/Hickok, Gregory/Erhard, Peter
2004 Involvement of Classical Anterior and Posterior Language Areas in Sign Language
Production, as Investigated by 4T Functional Magnetic Resonance Imaging. In: Neuro-
science Letters 364(3), 168⫺172.
Klein, Denise/Milner, Brenda/Zatorre, Robert J./Meyer, Ernst/Evans, Alan C.
1995 The Neural Substrates Underlying Word Generation: A Bilingual Functional-imaging
Study. In: Proceedings of the National Academy of Sciences of the United States of
America 92(7), 2899⫺2903.
Klein, Denise/Milner, Brenda/Zatorre, Robert J./Zhao, Viviane/Nikelski, Jim
1999 Cerebral Organization in Bilinguals: A PET Study of Chinese-English Verb Genera-
tion. In: Neuroreport 10(13), 2841⫺2846.
Kosslyn, Stephen M./DiGirolamo, Gregory J./Thompson, William L./Alpert, Nathaniel M.
1998 Mental Rotation of Objects Versus Hands: Neural Mechanisms Revealed by Positron
Emission Tomography. In: Psychophysiology 35(2), 151⫺161.
Lambertz, Nicole/Gizewski, Elke R./Greiff, Armin de/Forsting, Michael
2005 Cross-modal Plasticity in Deaf Subjects Dependent on the Extent of Hearing Loss. In:
Brain Research: Cognitive Brain Research 25(3), 884⫺890.
Leischner, Anton
1943 Die “Aphasie” der Taubstummen. In: Archiv für Psychiatrie und Nervenkrankheiten
115, 469⫺548.
Liddell, Scott K.
1995 Real, Surrogate, and Token Space: Grammatical Consequences in ASL. In: Emmorey,
Karen/Reilly, Judy S. (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence
Erlbaum, 19⫺41.
Liddell, Scott K.
2000 Blended Spaces and Deixis in Sign Language Discourse. In: McNeill, David (ed.), Lan-
guage and Gesture. Cambridge: Cambridge University Press, 331⫺357.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
MacSweeney, Mairéad/Campbell, Ruth/Woll, Bencie/Brammer, Michael J./Giampietro, Vincent/
David, Anthony S./Calvert, Gemma A./McGuire, Philip K.
2006 Lexical and Sentential Processing in British Sign Language. In: Human Brain Mapping
27(1), 63⫺76.
MacSweeney, Mairéad/Campbell, Ruth/Woll, Bencie/Giampietro, Vincent/David, Anthony S./
McGuire, Philip K./Calvert, Gemma A./Brammer, Michael J.
2004 Dissociating Linguistic and Nonlinguistic Gestural Communication in the Brain. In:
Neuroimage 22(4), 1605⫺1618.
MacSweeney, Mairéad/Woll, Bencie/Campbell, Ruth/Calvert, Gemma A./McGuire, Philip K./Da-
vid, Anthony S./Simmons, Andrew/Brammer, Michael J.
2002b Neural Correlates of British Sign Language Comprehension: Spatial Processing De-
mands of Topographic Language. In: Journal of Cognitive Neuroscience 14(7), 1064⫺
1075.
MacSweeney, Mairéad/Woll, Bencie/Campbell, Ruth/McGuire, Philip K./David, Anthony S./Wil-
liams, Steven C./Suckling, John/Calvert, Gemma A./Brammer, Michael J.
2002a Neural Systems Underlying British Sign Language and Audio-visual English Processing
in Native Users. In: Brain 125(7), 1583⫺1593.
Mariën, Peter/Engelborghs, Sebastiaan/Fabbro, Franco/De Deyn, Peter P.
2001 The Lateralized Linguistic Cerebellum: A Review and a New Hypothesis. In: Brain and
Language 79(3), 580⫺600.
Marshall, Jane/Atkinson, Jo/Smulovitch, Elaine/Thacker, Alice/Woll, Bencie
2004 Aphasia in a User of British Sign Language: Dissociation Between Sign and Gesture.
In: Cognitive Neuropsychology 21(5), 537⫺554.
Materna, Simone/Dicke, Peter W./Thier, Peter
2008a The Posterior Superior Temporal Sulcus Is Involved in Social Communication not Spe-
cific for the Eyes. In: Neuropsychologia 46(11), 2759⫺2765.
Materna, Simone/Dicke, Peter W./Thier, Peter
2008b Dissociable Roles of the Superior Temporal Sulcus and the Intraparietal Sulcus in Joint
Attention: A Functional Magnetic Resonance Imaging Study. In: Journal of Cognitive
Neuroscience 20(1), 108⫺119.
McCarthy, Gregory/Blamire, Andrew M./Rothman, Douglas L./Gruetter, Rolf/Shulman, Robert G.
1993 Echo-planar Magnetic Resonance Imaging Studies of Frontal Cortex Activation During
Word Generation in Humans. In: Proceedings of the National Academy of Sciences of
the United States of America 90(11), 4952⫺4956.
McCullough, Stephen/Emmorey, Karen/Sereno, Martin
2005 Neural Organization for Recognition of Grammatical and Emotional Facial Expres-
sions in Deaf ASL Signers and Hearing Nonsigners. In: Brain Research: Cognitive Brain
Research 22(2), 193⫺203.
McGuire, Philip K./Robertson, Dene M./Thacker, Alice/David, Anthony S./Kitson, Nicholas/
Frackowiak, Richard S./Frith, Chris D.
1997 Neural Correlates of Thinking in Sign Language. In: Neuroreport 8(3), 695⫺698.
Mechelli, Andrea/Gorno-Tempini, Maria Luisa/Price, Cathy J.
2003 Neuroimaging Studies of Word and Pseudoword Reading: Consistencies, Inconsisten-
cies, and Limitations. In: Journal of Cognitive Neuroscience 15(2), 260⫺271.
Mohr, Jay P./Pessin, Michael S./Finkelstein, Stanley/Funkenstein, H. Harris/Duncan, Gary W./
Davis, Kenneth R.
1978 Broca Aphasia: Pathologic and Clinical. In: Neurology 28(4), 311⫺324.
Mosconi, Matthew W./Mack, Peter B./McCarthy, Gregory/Pelphrey, Kevin A.
2005 Taking an “Intentional Stance” on Eye-gaze Shifts: A Functional Neuroimaging Study
of Social Perception in Children. In: Neuroimage 27(1), 247⫺252.
Muir, Laura J./Richardson, Iain E. G.
2005 Perception of Sign Language and Its Application to Visual Communications for Deaf
People. In: Journal of Deaf Studies and Deaf Education 10(4), 390⫺401.
Naeser, Margaret A./Helm-Estabrooks, Nancy/Haas, Gale/Auerbach, Sanford/Srinivasan, Malukote
1987 Relationship Between Lesion Extent in ‘Wernicke’s Area’ on Computed Tomographic
Scan and Predicting Recovery of Comprehension in Wernicke’s Aphasia. In: Archives
of Neurology 44(1), 73⫺82.
Neville, Helen J./Bavelier, Daphne/Corina, David/Rauschecker, Josef P./Karni, Avi/Lalwani, Anil/
Braun, Allen/Clark, Vince/Jezzard, Peter/Turner, Robert
1998 Cerebral Organization for Language in Deaf and Hearing Subjects: Biological Con-
straints and Effects of Experience. In: Proceedings of the National Academy of Sciences
of the United States of America 95(3), 922⫺929.
Newman, Aaron J./Bavelier, Daphne/Corina, David/Jezzard, Peter/Neville, Helen J.
2002 A Critical Period for Right Hemisphere Recruitment in American Sign Language Proc-
essing. In: Nature Neuroscience 5(1), 76⫺80.
Nishimura, Hiroshi/Hashikawa, Kazuo/Doi, Katsumi/Iwaki, Takako/Watanabe, Yoshiyuki/Kusu-
oka, Hideo/Nishimura, Tsunehiko/Kubo, Takeshi
1999 Sign Language ‘Heard’ in the Auditory Cortex. In: Nature 397(6715), 116.
Noppeney, Uta/Price, Cathy J.
2002 A PET Study of Stimulus- and Task-induced Semantic Processing. In: Neuroimage
15(4), 927⫺935.
Papathanassiou, Dimitri O./Etard, Olivier/Mellet, Emmanuel/Zago, Laure/Mazoyer, Bernard/
Tzourio-Mazoyer, Nathalie
2000 A Common Language Network for Comprehension and Production: A Contribution
to the Definition of Language Epicenters with PET. In: Neuroimage 11(4), 347⫺357.
Pelphrey, Kevin A./Morris, James P./Michelich, Charles R./Allison, Truett/McCarthy, Gregory
2005 Functional Anatomy of Biological Motion Perception in Posterior Temporal Cortex:
An fMRI Study of Eye, Mouth and Hand Movements. In: Cerebral Cortex 15(12),
1866⫺1876.
Petersen, Steven E./Mier, Hanneke van/Fiez, Julie A./Raichle, Marcus E.
1998 The Effects of Practice on the Functional Anatomy of Task Performance. In: Proceed-
ings of the National Academy of Sciences of the United States of America 95(3), 853⫺
860.
Petitto, Laura Ann/Zatorre, Robert J./Gauna, Kristine/Nikelski, Jim/Dostie, Deanna/Evans,
Alan C.
2000 Speech-like Cerebral Activity in Profoundly Deaf People Processing Signed Languages:
Implications for the Neural Basis of Human Language. In: Proceedings of the National
Academy of Sciences of the United States of America 97(25), 13961⫺13966.
Pickell, Herbert/Klima, Edward/Love, Tracy/Kritchevsky, Mark/Bellugi, Ursula/Hickok, Gregory
2005 Sign Language Aphasia Following Right Hemisphere Damage in a Left-hander: A Case
of Reversed Cerebral Dominance in a Deaf Signer? In: Neurocase 11, 194⫺203.
Poizner, Howard/Klima, Edward/Bellugi, Ursula
1987 What the Hands Reveal About the Brain. Cambridge, MA: MIT Press.
Puce, Aina/Allison, Truett/Bentin, Shlomo/Gore, John C./McCarthy, Gregory
1998 Temporal Cortex Activation in Humans Viewing Eye and Mouth Movements. In: The
Journal of Neuroscience 18(6), 2188⫺2199.
Pulvermüller, Friedemann/Shtyrov, Yury/Ilmoniemi, Risto
2005 Brain Signatures of Meaning Access in Action Word Recognition. In: Journal of Cogni-
tive Neuroscience 17(6), 884⫺892.
Rehak, Alexandra/Kaplan, Joan A./Weylman, Sally T./Kelly, Brendan/Brownell, Hiram H./Gard-
ner, Howard
1992 Story Processing in Right-hemisphere Brain-damaged Patients. In: Brain and Language
42(3), 320⫺336.
Sakai, Kuniyoshi L./Tatsuno, Yoshinori/Suzuki, Kei/Kimura, Harumi/Ichida, Yasuhiro
2005 Sign and Speech: Amodal Commonality in Left Hemisphere Dominance for Compre-
hension of Sentences. In: Brain 128(6), 1407⫺1417.
San José-Robertson, Lucia/Corina, David P./Ackerman, Debra/Guillemin, Andre/Braun, Allen R.
2004 Neural Systems for Sign Language Production: Mechanisms Supporting Lexical Selec-
tion, Phonological Encoding, and Articulation. In: Human Brain Mapping 23(3),
156⫺167.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Schlosser, Michael J./Aoyagi, Nobuhisa/Fulbright, Robert K./Gore, John C./McCarthy, Gregory
1998 Functional MRI Studies of Auditory Comprehension. In: Human Brain Mapping 6(1),
1⫺13.
Scott, Sophie K./Johnsrude, Ingrid S.
2003 The Neuroanatomical and Functional Organization of Speech Perception. In: Trends in
Neuroscience 26(2), 100⫺107.
Shibata, Darryl K.
2007 Differences in Brain Structure in Deaf Persons on MR Imaging Studied with Voxel-
based Morphometry. In: American Journal of Neuroradiology 28(2), 243⫺249.
Söderfeldt, Birgitta/Ingvar, Martin/Rönnberg, Jerker/Eriksson, Lars/Serrander, Monica/Stone-
Elander, Sharon
1997 Signed and Spoken Language Perception Studied by Positron Emission Tomography.
In: Neurology 49(1), 82⫺87.
Stemmer, Brigitte/Whitaker, Harry A. (eds.)
1998 Handbook of Neurolinguistics. San Diego, CA: Academic Press.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Weeks, Robert A./Honda, Manabu/Catalan, Maria Jose/Hallett, Mark
2001 Comparison of Auditory, Somatosensory, and Visually Instructed and Internally Gener-
ated Finger Movements: A PET Study. In: Neuroimage 14(1), 219⫺230.

David Corina & Nicole Spotswood, Davis, California (USA)

32. Atypical signing


1. Introduction
2. Acquired impairments in sign language
3. Developmental impairments in sign language
4. Other studies
5. Discussion: Issues of modality
6. Conclusion
7. Literature

Abstract

Similarities and differences between impairments across modalities can help answer the
question of whether sign language and spoken language are processed identically and
independently of the perceptual and articulatory channels in which they are instantiated,
or whether they reflect how language is shaped by modality. This chapter begins with a
brief definition and overview of the notion of ‘atypical sign language’. The discussion of
acquired impairments briefly reviews the neurolinguistics of language impairment, and
the contribution of various abilities/impairments outside the ‘language module’. It then
considers motor (including motor planning), linguistic, cognitive, and neuro-psychiatric
impairments. Within the field of motor impairments, disorders such as Parkinson’s dis-
ease and other degenerative conditions are considered in terms of their impact on the
articulation of signing. Linguistic impairments arising from stroke are dealt with in case
studies and group reports of left- and right-lesioned signers, in relation to modality-
dependence/independence issues. Within the field of developmentally atypical signing,
various individual and small group studies are presented and discussed in depth: Specific
Language Impairment, Williams syndrome, Down syndrome, Autistic Spectrum Disor-
der, Tourette’s syndrome, and Landau-Kleffner syndrome. The chapter also includes a
brief discussion of the impact on sign language of neuropsychiatric impairments such
as dementia and schizophrenia and concludes with consideration of whether language
impairments reside in a specific modality or are modality-independent.

1. Introduction

Since the early studies of Broca in the 19th century, insights into the brain and its
functions have come from the study of individuals with language and communication
impairments. When language is not acquired effectively, or there is a breakdown in
skills, then identifying the sources of impairment offers crucial insights into structure
and processing. In the context of sign language, studies of atypical language provide a
unique window on the relationship of language and modality (Woll/Morgan 2011).
Similarities and differences between impairments across modalities can help answer the
question of whether sign language and spoken language are processed identically and
independently of the perceptual and articulatory channels in which they are instanti-
ated, whether they reflect how language is shaped by modality, and the extent to which
language relies on other cognitive processes.
Interest in atypical signing goes back much further than contemporary readers
might suppose. In the first issue of Brain (1878), John Hughlings Jackson commented
“No doubt by disease of some part of his brain the deaf-mute might lose his natural
system of signs” (p. 328), and there are several accounts of impairments in signers
following stroke throughout the remainder of the 19th and the first half of the
20th centuries. Grasset (1896) described a patient who had developed an impairment
in sign language following a stroke which was not attributable to motor impairment of
his right arm, concluding “he really is, therefore, genuinely aphasic in the right hand,
in the true and only meaning of the word”. Other early authors include Burr (1905)
and Leischner (1943). Leischner analysed a congenitally deaf trilingual (sign language,
Czech, and German) patient following a stroke affecting the lower left parietal and
superior temporal lobes. He described impairments in his patient’s oral, written, and
sign language.
Critchley (1938) describes acquired impairments in fingerspelling in the case of
W. H. H., aged 42 years, who had learned fingerspelling and lip-reading after becoming
deaf in childhood, but who also had some speech. Within four weeks of his left hemi-
sphere stroke, power in the right arm and speech were restored. He could not write,
calculate, or fingerspell (although his ability to understand these symbols did return).
Interestingly, the initial examination of the patient was assisted by the chaplain to the
school for the deaf. Despite partial recovery, confusion between -a- and -e- is reported
(in the British two-handed manual alphabet, these are articulated by touching with
the dominant index finger the index and middle fingers of the non-dominant hand
respectively). His fingerspelling was also described as telegraphic and lacking grammar;
he could not recite a list of vowels or the alphabet, although he could write, type,
and read.
There was also early interest in apraxia (the inability to execute a voluntary motor
movement despite being able to demonstrate normal muscle function) of signing and
the dissociation between apraxia and aphasia in signers as in speakers. Critchley (1938)
and Tureen, Smolik, and Tritt (1951) both state that there was no apraxia in their
signing aphasics. Leischner’s patient (1943) was aphasic but not apraxic: he could per-
form a series of movements to command requiring the manipulation of real objects,
for instance, light a candle with a match, take glasses off and put them into their case,
etc. Sarno, Swisher, and Sarno (1969) report that their patient could imitate correctly
the movements of using a toothbrush, using a pencil, sweeping, etc., and also that he
could correctly manipulate objects such as a comb and a razor, and thus describe him
as not showing “non-language apraxia”. Kimura, Battison, and Lubert (1976), in the
first modern study of a signer following a stroke, explored both aphasia and apraxia.
Although their patient was not impaired on the usual apraxia tests, he was impaired,
relative to non-aphasic deaf controls, in the imitation of complex non-linguistic hand
movements, suggesting the need for specialised assessments of apraxia in signers.
Leischner (1943) was unwilling to label impairments in signing as aphasia and pro-
posed the term “asymbolia” instead, although Douglass and Richardson (1959) sup-
ported the concept of a “language zone” in the dominant hemisphere of the congeni-
tally deaf, comparable to that for spoken language in hearing people. Sarno, Swisher,
and Sarno (1969) were the first to conclude that “aphasia in the congenitally deaf is
entirely equivalent to that in normal hearing people”. Although this perspective has
largely been accepted, there has been recent interest in whether there are indeed mo-
dality-related differences (see section 5).
Apart from aphasia and apraxia, there has been relatively little interest until re-
cently in developmental and acquired impairments of sign language. Following a de-
tailed discussion of impairments following stroke below, a range of impairments in
sign language will be discussed. For the most part, these are case studies or small
group studies.

2. Acquired impairments in sign language


2.1. Impairments resulting from stroke

2.1.1. Left hemisphere damage

Case studies of deaf signing individuals with acquired brain damage, and neuroimaging
studies of healthy deaf subjects, have provided confirming evidence for left hemisphere
dominance in processing sign languages as well as spoken languages (see chapter 31,
Neurolinguistics). Deaf signers, like hearing speakers, exhibit language disturbances
when left-hemisphere cortical regions are damaged (e.g., Hickok/Love-Geffen/Klima
2002; Marshall et al. 2004; Poizner/Klima/Bellugi 1987; for a review, see Corina 1998a,b
and MacSweeney et al. 2008). In addition, there is good evidence that cerebral organi-
sation in deaf signers within the left hemisphere follows the familiar anterior/posterior
dichotomy for language production and comprehension that is found in spoken lan-
guage users (see Figure 31.1 in chapter 31). Right hemisphere damage, although it can
disrupt visual-spatial abilities (including some involved in sign language processing),
nevertheless does not produce sign aphasia (Atkinson et al. 2005).
Poizner, Klima, and Bellugi (1987) present the first modern set of studies of signers
(of American Sign Language, ASL) with aphasia arising from left hemisphere lesions.
In relation to production, damage to the left anterior frontal lobe (Broca’s area) is
associated with aphasia in signers as it is in speakers. For example, Poizner, Klima, and
Bellugi (1987) report on patient G. D., who had a large lesion in Broca’s area and
who had aphasia, with dysfluent single-sign agrammatic utterances, but unimpaired
comprehension. She also produced reduced linguistic facial expression relative to affec-
tive expression (Corina/Bellugi/Reilly 1999).
In relation to comprehension, the left temporal cortex plays an important role in
sign language as it does in comprehension of spoken language. Hickok, Love-Geffen,
and Klima (2002) and Atkinson et al. (2005) compare the sign language comprehension
abilities of left- and right-hemisphere damaged ASL and British Sign Language (BSL)
signers respectively. Signers with left-hemisphere posterior temporal lobe damage were
found to perform worst, exhibiting significant impairments on single sign and sentence
comprehension. Fluent aphasia ⫺ impaired language comprehension accompanied by
fluent production, although often with semantic and phonological paraphasias ⫺ has
been reported in both spoken language and sign language. Although less common than
aphasias with dysfluencies, cases are reported by Chiarello, Knight, and Mandel (1982),
Poizner, Klima, and Bellugi (1987), and Corina et al. (1992).

2.1.2. Right hemisphere damage

Typically, signers with damage to the right hemisphere are reported as having well-
preserved language skills, even when they exhibit visual-spatial deficits (Poizner/Klima/
Bellugi 1987; Atkinson et al. 2004). This resembles findings for hearing non-signers but
is surprising in view of the role of vision and space in sign language processing. The
case study of J. H., a deaf signer with a right hemisphere lesion involving frontal, tem-
poral, and parietal regions (Corina/Kritchevsky/Bellugi 1996), presents a striking exam-
ple of profoundly impaired visual-spatial skills but preserved sign language abilities.
Despite the absence of aphasia in right hemisphere-lesioned signers, there is evi-
dence that right hemisphere structures may play a crucial role in the production and
comprehension of classifiers and in the processing of non-manual linguistic elements.
In tests requiring classifier descriptions, Poizner et al.’s (1987) subject D. N. showed
problems in the depiction of movement direction, object relations, and object orienta-
tion. In situations where two-hand articulations were required, D. N. displayed great
hesitancy and often made multiple attempts to correctly represent the spatial relation-
ship of the referents, although D. N. had no motor weakness to account for these errors.
Although syntactic processing is generally intact in right hemisphere-lesioned signers,
in contrast to the common disturbances of syntactic processing in left hemisphere-
lesioned signers, there are reports of signers with right hemisphere damage exhibiting
syntactic problems. Subjects S. M. and G. G. (right hemisphere-damaged subjects tested
by Poizner/Klima/Bellugi 1987) performed well below controls on two tests of spatial
syntax. Atkinson et al. (2005) report on the difficulties experienced by right hemi-
sphere-lesioned signers on tests of comprehension of spatial syntax involving locative
sentences and classifiers.
As is the case with hearing individuals (Marini et al. 2005), right hemisphere dam-
age in signers may result in disruption of discourse abilities. One problem identified
for right hemisphere-lesioned signers is in the processing of non-manual linguistic el-
ements (Corina/Bellugi/Reilly 1999; Atkinson et al. 2004). Atkinson and colleagues
report a group study comparing processing of manual and non-manual negation in
BSL in right- and left-lesioned signers (see chapter 15, Negation, for details). Both
groups performed comparably to controls in processing manually marked negation,
but right hemisphere-lesioned signers performed below chance on negation marked
non-manually through head shake and facial expression only. The authors analyse non-
manual negation as prosodic at surface level and attribute the failure of right hemi-
sphere-lesioned signers to the difficulties faced by this group with prosodic process-
ing generally.
Other types of disturbance in discourse are reported in two right hemisphere-le-
sioned patients: J. H. (Corina/Kritchevsky/Bellugi 1996) and D. N. (Emmorey/Corina
1993; Emmorey/Corina/Bellugi 1995; Poizner/Kegl 1992). These two cases reveal con-
trasting impairments. J. H. showed occasional non sequiturs and abnormal attention to
detail in signing and picture description tasks, behaviours that are typically found in
the discourse of hearing patients with right hemisphere lesions. D. N. showed a differ-
ent pattern of discourse disruption; her within-sentence use of spatial indexing was
unimpaired, but her use of space across sentences was inconsistent. In order to main-
tain intelligibility, D. N. used a compensatory strategy in which she restated the noun
phrase in each sentence, resulting in a repetitive discourse style. The cases of J. H. and
D. N. suggest that right hemisphere lesions in signers can differentially disrupt dis-
course content (as in the case of J. H.) and discourse cohesion (as in the case of D. N.).

2.1.3. Differences between spoken language and sign language

Although the above studies suggest that comparable impairments in spoken language
and sign language arise from lesions in the same locations, there is some controversy
in regard to the degree of anatomical overlap observed in comprehension problems in
spoken and sign languages. These in turn have been related to existing questions about
the mechanisms involved in some aspects of sign language, in particular the relation-
ship between gesture and linguistic structure. The fact that linguistic forms in sign
languages exploit visual-gestural systems rather than the auditory-oral systems of spo-
ken language leaves open the possibility that there may be different neural systems
underlying language in these two modalities.
It is noteworthy that the cases described by Chiarello, Knight, and Mandel (1982)
and Poizner, Klima, and Bellugi (1987) and the case study described by Corina, Vaid,
and Bellugi (1992), exhibited fluent aphasia with severe comprehension deficits. Le-
sions in these case studies did not occur in Wernicke’s area, but rather involved more
frontal and inferior parietal areas. In these cases, lesions extended posteriorly to the
supramarginal gyrus. This is notable, as lesions associated with the supramarginal gyrus
alone in users of spoken language do not typically result in severe speech comprehen-
sion deficits. These observations have led some to suggest that sign language compre-
hension may be more dependent than speech on left-hemisphere inferior parietal areas
(regions associated with somatosensory and visual motor integration) (Leischner 1943;
Chiarello/Knight/Mandel 1982; Poizner/Klima/Bellugi 1987; Corina 1998b) while spo-
ken language comprehension might rely more heavily on posterior temporal associa-
tion regions whose input includes networks intimately involved with auditory speech
processing (see section 4.2 below on Landau-Kleffner syndrome). As Corina and Spot-
swood discuss in chapter 31, neuroimaging research also points to modality-condi-
tioned effects on neural processing of sign languages (Newman et al. 2002; Emmorey
et al. 2002; Emmorey et al. 2005; MacSweeney et al. 2006).
A second issue is whether the functional mechanisms required for comprehension of
sign language and spoken language are identical. Many sign languages express locative
relationships and events with spatialised structures and have inventories of highly pro-
ductive grammatical forms, classically referred to as classifiers or classifier predicates
(see chapter 8 for discussion), which participate in these constructions. Liddell (2003)
has argued that such constructions are partially gestural in nature. The theoretical
status of such spatialised grammar is a subject of debate which has important implica-
tions for the understanding of the neurolinguistics of sign language. Dissociations be-
tween gesture and sign language abilities have been reported in a case study of an
aphasic signer (Marshall et al. 2004), suggesting that these are separate. However, sev-
eral studies have found differential disruptions in the use and comprehension of sen-
tences that involve usage of spatialised grammar compared to other grammatical con-
structions. For example, Atkinson and colleagues (2005) conducted a group study of
left and right hemisphere-damaged signers of BSL. They devised tests that included
single sign and single predicate-verb constructions (e.g., throw-dart), simple and com-
plex sentences that varied in argument structure and semantic reversibility, locative
constructions encoding spatial relationships and constructions involving lexical preposi-
tions, and a final test of classifier placement, orientation, and rotation. Their findings
indicated that left hemisphere damaged (LHD) BSL signers, relative to elderly control
subjects, exhibited deficits on all comprehension tests. Right hemisphere damaged sign-
ers (RHD) did not differ from controls on single sign and single predicate-verb con-
struction, or on sentences that ranged in argument structure and semantic reversibility.
RHD signers (like LHD signers), however, were impaired on tests of locative relation-
ships expressed via classifier constructions and on a test of classifier placement, orienta-
tion, and rotation.
One interpretation is that the comprehension of these classifier constructions re-
quires not only intact left hemisphere resources, but intact right hemisphere visual-
spatial processing mechanisms as well. That is, while both LHD and RHD signers show
comprehension deficits, the RHD signers’ difficulties stem from more general visual
spatial deficits rather than linguistic malfunction per se. The question of whether these
visual spatial deficits are “extra-linguistic” lies at the heart of the debate. Atkinson et
al. (2005) suggest that the deficits stem from the disruption of processes which map
non-arbitrary sign locations onto real-world spatial positions. However, as Corina (per-
sonal communication) has pointed out, such forms are not only used to refer to real-
world events, but imaginary and non-present events as well. The broader point he
makes is whether aphasic deficits should be solely defined as those that have clear
homologies to the left hemisphere impairments that are evidenced in spoken languages,
or whether the existence of sign languages will force us to reconsider the concept of
linguistic deficits.

2.1.4. Paraphasia

In spoken language, the substitution of an unexpected word for an intended target is
known as paraphasia (Corina 2000). Most paraphasias have a clear semantic relation-
ship to the target. Several reports of semantic paraphasia in sign languages can be
found in the sign aphasia literature (Poizner/Klima/Bellugi 1987; Brentari/Poizner/Kegl
1995; Corina/Vaid/Bellugi 1992; Marshall et al. 2004). Poizner et al.’s (1987) subject
P. D., for instance, substituted bed for chair, and daughter for son; “Charles” (Mar-
shall et al. 2004) substituted car box for lorry.
Paraphasias can also involve phonological substitution, sometimes resulting in a
non-word or a word related in form but not meaning. Phonological paraphasias are
also found in signers. Although they affect all the major formational parameters of
sign languages (see chapter 3, Phonology), the distribution of paraphasic errors among
the four parameters of sign formation appears to be unequal: handshape configuration
errors are the most widely reported, while paraphasias affecting movement, location,
and orientation are less frequent. An unusual case of sign paraphasia, resulting from
damage to the left frontal operculum, is reported by Hickok et al. (1996). R. S. experi-
enced an initial expressive aphasia that largely resolved, however with lingering prob-
lems of word finding and frequent phonemic paraphasia. Corina has pointed out the
noteworthy nature of these paraphasias, which demonstrate how language modality
may uniquely influence the form of linguistic deficit ⫺ in this case, impairments with
no clear parallels to spoken language disruption (see section 5 below for further discus-
sion of the implications of these and other studies for understanding the relationship
between language and modality).
Poizner, Klima, and Bellugi (1987) speculated that the perceptual processing in-
volved in the comprehension of spatialised syntax involves both left and right hemi-
spheres; certain critical areas of both hemispheres must be relatively intact for accurate
performance. The syntactic comprehension deficits found in right and left hemisphere-
damaged subjects raise an interesting theoretical question: are these deficits aphasic in
nature, or are they secondary impairments arising from a general cognitive deficit in
spatial processing? Further work is required to tease apart these complicated theoreti-
cal questions.

2.2. Parkinson’s disease


Several studies in the modern era have addressed the issue of motor linguistic dissocia-
tions from general motor functions. One such strand of research concerns the linguistic
behaviour of signers with Parkinson’s disease (PD). The main symptoms of PD result
from the death of dopamine-generating cells in the substantia nigra, a region of the
midbrain. PD patients reveal qualitatively different impairments from those found in
aphasic signers. The motoric aspects of sign language production are greatly disrupted
by PD, although language is preserved (Brentari/Poizner 1994; Brentari/Poizner/Kegl
1995; Poizner/Kegl 1992, 1993). For example, PD signers typically show a dampening
of facial expression and reduction of the amplitude of movement in general (Loew/
Kegl/Poizner 1995). In addition, the errors produced by signers with PD are more
pronounced at sentence and discourse level than in signs produced in isolation (Poizner
1990). Thus the deficits exhibited by the signers with PD reflect, in large part, an
impairment in the production of complex movement sequences (Tyrone/Woll 2008).
Brentari, Poizner, and Kegl (1995) demonstrated that signers with PD showed marked
disturbances in the temporal organization and coordination of the two motor subsys-
tems of the ASL sign stream: handshape and movement. The change in handshape
that is required for transition from one sign to another normally occurs early in the
transition between signs in control signers; that is, the normal timing relationship be-
tween handshape formation and arm movement is quite asynchronous. Signers suffer-
ing from PD, however, largely synchronized the change in handshape with the start
and end of the movement. They thus reduced the temporal variability in the relation-
ship between the handshape and movement motor subsystems of ASL. These sorts of
changes parallel those found in non-linguistic manual activities of hearing individuals
with PD.

2.3. Progressive supranuclear palsy


Progressive supranuclear palsy (PSP) is a progressive neurological condition with some
surface similarities to PD, although affecting different areas of the brain, mainly the
rostral brainstem and its projections to the basal ganglia, cerebellum, and cerebral
cortex. Articulation difficulties in hearing patients typically emerge early in the course
of the disease and disrupt several aspects of speech. The characteristic features of PSP
are articulatory incoordination and palilalia ⫺ the repetition of entire words at decreas-
ing amplitudes without pause. The only case study of a signer with PSP (Tyrone/Woll
2008) reported frequent deviations from citation form, with errors in orientation, hand-
shape, and location, poor coordination, atypical repetitions, and involuntary move-
ments. Most striking was the finding of palilalia. The disrupted signing can be consid-
ered as palilalic because entire signs were repeated, and repetitions became smaller in
movement amplitude, parallel to palilalia in speech. Research on the forms of articula-
tory deficits across language modalities can lend insight into the basic articulatory units
of both sign and speech. This study suggests that articulation is a modality-independent
function; it consists of the rapid, complex, sequential, coordinated movements neces-
sary for language production. Moreover, preliminary findings suggest that the same
neural structures govern articulation in both speech and sign (but see section 4.3).

2.4. Dementia
Despite the prevalence of dementia in the general population, there have been virtu-
ally no studies of dementia in the Deaf community. A small recent study (DiBlasi
2011) explored changes in Deaf individuals’ ASL as their cognitive status declined. This
study involved five individuals with cognitive impairments and five controls. They were
assessed on the Mini-Mental State Examination adapted for use with signers (Dean et
al. 2009). Each participant also was given two minutes to describe the Cookie Theft
picture (see Poizner/Klima/Bellugi (1987) for an illustration). Their discourse was ana-
lyzed for number of utterances, number of words per utterance, use of phonology,
morphology, and syntax, and content. There were significant differences between pa-
tients and controls on a number of features, including number of signs per utterance,
and occurrence of phonological errors. A larger-scale study is currently under way with
BSL signers in the UK, which is collecting data on normal cognitive and linguistic
function in signers aged between 50 and 90, in order to develop a cognitive and linguis-
tic screening tool for use with the signing population (Atkinson et al. 2011).

3. Developmental impairments in sign language


The study of children exposed to a sign language but following an atypical course of
development because of developmental disorders is a very new area of research. In this
population, the issues are particularly complex and involve questions of whether lan-
guage development is affected in modality-specific or modality-independent ways. In
cases where non-verbal cognitive deficits are present, the impact of these on the acqui-
sition and use of a language perceived and produced in the visual-spatial modality must
be considered. Typically, deafness is an exclusionary criterion for studies of language
impairment because these studies have always focused on the acquisition of a spoken
language. The inclusion of children exposed to sign languages but presenting with atyp-
ical development has the potential to open a new window on the question of whether
language impairments originate from deficits in the cognitive, linguistic, or perceptual
systems.
Several types of studies are reviewed here. Some concern hearing children and
young people who are atypical speakers of English and who also use BSL, for example,
hearing identical twins with Down Syndrome, who are the children of deaf parents.
Other cases discussed in section 4 include a hearing young man with Landau-Kleffner
Syndrome, aphasic in English but with relatively good BSL; and Christopher, a linguis-
tic savant who learned BSL as an adult despite cognitive and language impairments.
The remainder of this section concerns deaf children and young people. Cases include
a young deaf woman with developmental visual-spatial impairments (Williams Syn-
drome) and children with Specific Language Impairment, autism, and Tourette’s syn-
drome.

3.1. Specific language impairment

Specific language impairment (SLI) is diagnosed where a deficit in normal spoken
language acquisition ⫺ typically, difficulty with the acquisition of phonology and mor-
phosyntax ⫺ is found with no apparent cognitive, social, or neurological cause (Leon-
ard 1998). All theories of SLI attempt to explain the disproportionate difficulty with
phonology and grammar found in SLI but differ in whether they posit a deficit at the
level of language or general cognitive processing. Since hearing loss is specifically ex-
cluded in diagnosing SLI, deaf children are never included in studies of SLI and there
has been relatively little consideration of whether a child exposed to sign language
could have SLI (Morgan 2005).
Recently, group studies of SLI have been undertaken in the US (Quinto-Pozos/
Singleton 2010; Quinto-Pozos/Forber-Pratt/Singleton 2011) and the UK. Morgan and
colleagues (Mason et al. 2010; Morgan/Herman/Woll 2007) have studied SLI in chil-
dren whose first language is BSL, and who were referred for assessment by teachers
or speech and language therapists because of concerns about their sign language devel-
opment in comparison with their peers. All children had been exposed for at least four
years to native signers of BSL and had normal motor and cognitive development.
However, on assessments of BSL receptive grammar and grammatical and pragmatic
skills (Herman/Holmes/Woll 1999; Herman et al. 2004), 7 of 13 children displayed
impaired receptive grammar, and 8 had impaired productive grammar. It is clear that
complex morphology was generally impaired across the group; the results also indicate
different profiles of impairment in individual children, affecting phonology, receptive
grammar, productive grammar, pragmatics, and discourse. These findings suggest that
SLI affects language acquisition in similar ways in both the spoken and signed modali-
ties but that language typology also influences which aspects of linguistic structure are
more or less intact.
Morgan and colleagues also report on a related single case study of “Paul”, a deaf
native signer aged five years, referred for assessment by his school because of worries
about his BSL development, which was described as being unusually slow for a native
signer. This case is of particular interest, since his impairments cannot be explained by
poor input. Paul scored well below the mean for BSL grammar on the BSL Receptive
Skills Test (Herman/Holmes/Woll 1999), with success on some difficult items, failure
on many easier ones, and with a particularly poor profile on negation, spatial verbs,
and classifiers.
The deaf children with SLI show comparable impairments to those found in hearing
children with SLI. In both the group study and the study of Paul, impairment was found
for grammatical constructions involving verb agreement (see chapter 7). In contrast to
typically developing native signers of Paul’s age, who use inflectional morphology on
the verb give to indicate agent and recipient, as in man letter give3 (‘The man gives
the letter to him/her’), Paul, despite prompting, signed a sequence of uninflected signs
when asked to describe a picture of a man giving a letter to a boy, as illustrated in
(1) (P = Paul, A = Deaf adult).

(1) P: give give square give (citation forms) [BSL]
‘Give, give the square thing, give.’
A: square give who?
‘Who gives the square thing?’
P: give give index(picture) letter
‘Give, give, (point), letter.’
A: picture what?
‘What is in the picture?’
P: letter index(picture)
‘A letter (point).’

Paul’s difficulty in using BSL verb morphology may be linked to the nature of meaning-
form mappings using this type of verb in BSL. Signers must simultaneously encode
both the core meaning, for instance, using the appropriate handshape for ‘giving’, and
the direction of movement, encoding the identity of the agent and recipient. This pack-
aging of information into a single unit with several components requires good language
skills. The sets of data from Paul and from the group study suggest that SLI affects
verb morphology in similar ways in sign language (BSL in this case) and spoken lan-
guage. Where this is the case, sign language data cannot help us to decide whether
difficulties with grammatical rules originate from domain-general impairments in infor-
mation processing which would affect rule learning underpinning language but also
other complex systems (Kail 1994) or from a domain-specific linguistic impairment
(van der Lely 2005), but the data do suggest that modality-related processing difficul-
ties cannot be the source of SLI.

3.2. Williams syndrome

Early studies of language in Williams syndrome (WS) reported dissociations between
profound visual-spatial deficits and impressive receptive and productive language skills
(see e.g. Bellugi et al. 1988). While these early studies suggested that language in WS
was intact, recent research has been more sensitive to patterns of relative strengths
and weaknesses across domains, and it has become clear that language is not wholly
intact in WS. A new picture has emerged which suggests that language should be
viewed as relatively spared rather than normal (see e.g. Karmiloff-Smith 2008).
English speakers with WS show subtle linguistic impairments that may be related
to problems with visual-spatial cognition. Studies of WS in populations speaking lan-
guages other than English indicate patterns of impairment in grammar, as the relatively
limited extent of morphological marking in English may mask processing difficulties.
Volterra et al. (1996) found that Italian-speaking subjects with WS produced ungram-
matical, or grammatical but atypical, constructions in sentence repetition and story
description tasks and made frequent preposition errors. These findings suggest either
that language impairments may arise from impairments in visual-spatial cognitive do-
mains or that spatial aspects of both cognition and language are controlled by a higher-
level representational system.
The case of a signer with WS is thus of interest since there is the possibility of a
more transparent interaction between visual-spatial abilities and language. Atkinson,
Woll, and Gathercole (2002) report on the case of “Heather”, a young deaf woman.
She is of short stature, with a facial appearance and behavioural profile characteristic
of WS. Heather uses BSL as her preferred method of communication, although she
has some limited ability to lip-read and use spoken and written English. She lives
independently in sheltered housing for Deaf people with additional disabilities and
regularly attends local Deaf clubs and mixes in the Deaf community. Her command of
BSL is strikingly different, in terms of fluency and complexity, from that of her Deaf
intellectual peers living in the same sheltered accommodation. However, although not immedi-
ately apparent in spontaneous conversation, she does make consistent errors in her
use of some features of BSL.
Heather has clear impairments in non-language visual-spatial ability, measured on
a variety of standardised tests. She also displays marked problems with comprehension
and production of BSL on standardised assessments, with particular difficulty with
spatialised syntax and other grammatical structures using space for grammatical pur-
poses. At sentential level, Heather’s production of spatial verbs shows consistent im-
pairment in spatial representations. In her spontaneous signing, she appears to try to
deal with her difficulties by choosing English-like structures and a fixed sign order
resembling English. For example, Heather uses the prepositions under, on, and in
rather than classifiers located in spatial relationships to each other to incorporate infor-
mation about referents and the spatial relationships between them. In general, Heather
avoids using classifiers and prefers to use an undifferentiated point with her index
finger to locate referents in space. Where she does use classifiers, these are often bi-
zarre (see Atkinson/Woll/Gathercole (2002) for full details).
At the discourse level, Heather also has difficulties with ensuring maintenance of
topographic locations across sentences. The results from all the BSL assessments show
a disruption in the use of space within BSL, while linguistic devices which do not
incorporate spatial relationships, such as noun-verb distinctions and negation, are pre-
served.
Heather’s language abilities in general are well in advance of her visual-spatial abil-
ities. However, her language profile differs from that of hearing individuals with WS,
since the subtle impairments found in spoken language in WS are far more transparent
in BSL. Most strikingly, for Heather there is a clear dissociation between grammar
that relies on space, and grammar that can be specified lexically (e.g. plurals, static
locatives). This suggests that although the learning of a visual-spatial language is not
in itself dependent on intact visual-spatial cognition, the pattern of breakdown in BSL
abilities indicates a dissociation within BSL grammar between devices that depend on
grammatical processes involving space and those that do not. Heather’s command of
grammar appears well preserved except where spatial relationships are conveyed di-
rectly. In the latter circumstances, visual-spatial impairment overrides general gram-
matical ability.

3.3. Down syndrome

Although signs are widely used to support spoken language development in children
with Down syndrome (DS), there is only one case study of native sign language devel-
opment in this population (Woll/Grove 1996; Woll/Grove/Kenchington 1998). Ruthie
and Sallie are monozygotic twins. Both parents are deaf and members of the Deaf
community. In the presence of their parents and other deaf people, the twins mostly
use BSL without voice, although in such contexts, they occasionally address English-
only utterances to each other (these appear to function as private asides). In the pres-
ence of hearing children and adults and when playing with each other, they use English.
As is usually the case in DS, assessments of the twins’ verbal and nonverbal ability
show that their nonverbal cognitive skills are in advance of their verbal skills. On
measures of comprehension of English vocabulary and grammar, at 10 years of age,
they were functioning at a 3- to 4-year-old level. Overall, as might be expected, visual
and motor skills are relative strengths for both girls.
The twins showed relatively higher skills for BSL vocabulary comprehension than
for English. The lexical advantage for signs was not seen in morphology. Sallie’s BSL
grammar is more advanced than Ruthie’s, but neither Sallie nor Ruthie has mental-
age appropriate mastery of BSL. Some of Sallie’s and many of Ruthie’s responses omit
spatial relationships completely; in others, they use lexical signs such as in-front and
on (i.e. English-like structures), rather than representing spatial relationships directly.
Across various areas of morphosyntax in both BSL and English, they have difficulty
with those with the greatest complexity. For example, they are very good at English
plurals which are relatively simple, but poor at those BSL plurals which require classifi-
ers + distributional morphemes, or those with three-dimensional representations of
space. Ruthie and Sallie thus find the grammatical system of a sign language no easier
to master than that of a spoken language.
Where problems relate to linguistic, as opposed to more general cognitive, abilities,
delays and difficulties should be seen in language, regardless of modality. The twins’
BSL grammar is at a comparable level to their English grammar. This suggests a supra-
modal linguistic deficit, with the pattern of varying competences in both languages
related to the complexity of the required linguistic devices. This is consistent with
observations of difficulties in the acquisition and generalisation of rules affecting com-
plex sentence structure in children with DS. However, some studies of children with
DS indicate that although their visual-spatial skills are generally more advanced than
their auditory-vocal skills, there may be impairments in the area of spatial representa-
tion (Uecker et al. 1993; Vallar/Papagno 1993) and this may suggest differences in the
sources of their difficulties in the two languages. In particular, their difficulties in BSL
grammar cluster around hierarchically complex structures of a type not found in non-
linguistic spatial cognition. Additionally, unlike the relative similarities in grammar
cross-modally, their BSL vocabulary is an area of strength compared to English. This
in turn raises further questions about the nature of the sign lexicon in terms of such
issues as iconicity and phonological structure.

3.4. Autism

There is extensive research on deaf children in cognitive areas such as Theory of Mind
and face processing, where norms differ from those in the hearing population, and
studies looking at the use of signing with hearing children who have Autistic Spectrum
Disorder (ASD) (see Bonvillian (2002) for a review), but relatively few studies looking
at deaf children with ASD. The signing of children with ASD is of particular interest
because of the ways that some of the known impairments associated with autism are
likely to interact with sign language. These include the requirement for signers to un-
derstand the visual perspectives of others and the need to look to the interlocutor’s
face to communicate ⫺ skills at which hearing children with ASD are often reported to be particularly poor.
Theory of Mind, which is delayed in autism. Two recent studies, however, have ex-
plored visual perspective and face information processing in deaf children with ASD,
respectively (Shield 2010; Denmark 2011).
Shield compared 25 native signing deaf children and adolescents with autism with
a control group of 13 typically-developing deaf native signing children in a series of
studies, including naturalistic observation, lexical elicitation, fingerspelling, imitation
of nonsense gestures, visual perspective-taking tasks, and a novel sign learning task.
Shield hypothesised that an impairment in visual perspective-taking could lead to pho-
nological errors in ASL, specifically in the parameters of palm orientation, movement,
and location. Results showed that young deaf native signers with autism made frequent
phonological errors involving palm orientation. These results suggest that deaf children
with autism are impaired from an early age in a cognitive mechanism involved in the
acquisition of sign language phonology.
Denmark compared deaf children with ASD and typically developing age- and lan-
guage-matched deaf controls on a number of comprehension and production measures
looking at emotional and linguistic use of the face in BSL. Deaf children with ASD
performed comparably to deaf controls at comprehending and producing facial expres-
sions across many of the tasks and showed no impairment with faces overall. Rather,
they showed specific difficulties with the comprehension and production of affective
facial expressions and the comprehension and production of adverbial linguistic struc-
tures representing manner of action. These findings suggest that comprehension and
production of affective and manner facial expressions in BSL rely on functions which
are impaired in ASD, in contrast to other facial expressions which may be regarded as
more purely linguistic. This provides a new line of evidence relating to the separation of
linguistic and other types of facial expression already described for signers. The lack
of generalised impairments with faces in the deaf group in this study may also reflect
the generalised advantage for all deaf children in face processing, related to the need
to look to the face to communicate. Both studies demonstrate the importance of ex-
tending research on ASD to sign language users to enhance understanding of autism.

3.5. Tourette’s syndrome

Gilles de la Tourette syndrome (TS) is characterised by vocal and motor tics starting
in childhood. Vocal tics may be either noises or words, and vocal language tics may
consist of obscenities (coprolalia) and repetitions of others’ speech (echolalia). Sign
language tics were first reported by Lang, Consky, and Sandor (1993) in a hearing
woman with pre-existing TS who developed sign language tics following the learning
of ASL as a therapeutic exercise in adulthood. Two later case studies described TS in
a deaf adult (Morris et al. 2000) and a deaf child (Dalsgaard/Damm/Thomsen 2001),
respectively. These two cases are of particular interest, since they provide evidence
relevant to understanding the underlying cause of tics in TS generally.
The Morris et al. patient had the full array of tics seen in TS, but in sign language
rather than in spoken language. The child “Sam”, described by Dalsgaard, Damm, and
Thomsen, produced sign language and spoken language tics. As with hearing TS pa-
tients, both produced obscenities as tics. These cases challenge several of the explana-
tions advanced for the selective production of obscenities in TS. One of these is the
idea that obscenities are produced by hearing people because of their expletive pho-
netic properties including harsh expiratory phonation. This explanation suggests that
obscenities have different qualities than ordinary spoken words. It has also been sug-
gested that the production of obscenities may be a random process in which high-
frequency phonemes are produced which happen to resemble obscenities. An alterna-
tive theory postulates a semantic origin: the meaning of obscenities may be the prime
determinant of the tic rather than any underlying phonetic or phonological characteris-
tic of the word itself, since the utterance of obscenities may reflect a failure to control
brief aggressive impulses in TS. The obscene signs produced by these patients did not
have any marked “expletive” quality in terms of amplitude of motion. Indeed, further
evidence for a semantic basis for tics was provided by Morris et al.’s patient, who
produced fingerspelled as well as signed obscenities.

4. Other studies
The various conditions described in this section differ from those in the previous sec-
tions since they are not directly concerned with either atypical development or
acquired impairments in a fluent user of a sign language. Instead they use cases of
atypical individuals as testbeds for exploring neuroscience and linguistic theories. The
first describes a study using the teaching of BSL as an L2 as a means of exploring the
notion of the language module in relation to modality; the second expands this ap-
proach to studies of hearing individuals with developmental aphasia arising from epi-
lepsy affecting the auditory cortex who have been taught sign language as an alterna-
tive to spoken language. In such cases ⫺ and unlike those discussed in the
developmental section ⫺ there are clear dissociations between skills in spoken lan-
guage and sign language. Studies of schizophrenic signers and stuttering provide evi-
dence for differences in the nature of the feedback loops in spoken language and
sign language.

4.1. A linguistic savant and sign language

Smith et al. (2010) report on an experiment in which Christopher (born 1962), a man
with a remarkable ability for learning new languages alongside serious disabilities in
other domains, was taught BSL. Christopher is mildly autistic and severely apraxic; he
lives in sheltered accommodation because he is unable to look after himself. Perhaps
uniquely, Christopher can read, write, speak, understand, and translate some 20 or
more languages, while on tests of non-verbal intelligence he scores very poorly.
Christopher was exposed to a typical introductory course in BSL and his learning
of BSL was compared to that of a control group of hearing university students. Christo-
pher’s general BSL learning was within the normal range of the control group’s abilities
(Smith et al. 2010). The one area where Christopher performed significantly worse than
the control group was with comprehension and production of BSL Entity classifiers. In
a task in which subjects had to match a signed sentence to a written English translation,
Christopher scored 20 % correct; the scores of the control group were between 80 %
and 100 %. In his processing of Entity classifiers, Christopher had some success identi-
fying the class of referent that the handshape represented (curved versus straight ob-
jects, for example) but was not able to process the spatial location or movement that
the whole utterance encoded. Christopher did not go on to master BSL to a level
comparable to that of his many other spoken second languages. Yet, he acquired an
impressive single-sign lexicon both in comprehension and production and in doing so
overcame his typical aversion to looking at people’s faces when he communicates. His
general BSL developed to a level comparable with other hearing sign language
learners, but his acquisition differed from the control group in specific areas of the
grammar. He was unable to overcome his difficulty with representing three-dimen-
sional space and manipulations of these arrays in either the non-verbal domain or in
linguistic mapping. His sign language abilities at the level of spatial syntax and mor-
phology were thus limited by his cognitive impairments in non-verbal spatial process-
ing. He did not experience this plateau in the acquisition of morphology in other sec-
ond languages in the spoken modality and so his general cognitive impairment affects
only his sign language learning. These results suggest fundamental modality-dependent
differences for Christopher in the processing and learning of a sign language compared
to a spoken language.

4.2. Landau-Kleffner syndrome

Landau-Kleffner syndrome (LKS) is an auditory agnosia which begins between the ages of 3 and 8 years and is thought to arise from an epileptic disorder within the
auditory speech cortex. Typically, children with LKS initially develop normally but
then lose comprehension and later production skills in spoken language. This loss is
accompanied by an abnormal electroencephalogram (EEG) with the epileptic focus
in the auditory cortex. The epilepsy usually subsides at puberty; however, a severe
communication impairment often persists. Because of the intractable nature of the
spoken language impairment associated with LKS, speech accompanied by signs has
been used with this group, although this has largely not been successful (Bishop 1982).
In contrast, several studies which have explored the acquisition and use of sign lan-
guage in this population (Baynes et al. 1998; Sieratzki et al. 2001; Roulet Perez et al.
2001) have reported strikingly better sign language than spoken language abilities.
The case of Stewart (Sieratzki et al. 2001) illustrates this. His LKS began between
4 and 5 years, and he remains globally aphasic in English as an adult. He was initially
educated in a school for children with severe language impairments, but because of a
lack of improvement in his English language skills, he was transferred to a school for
deaf children at the age of 13 years where he learned BSL.
Stewart demonstrates severe impairments in English on all measures with no im-
provement since childhood testing. At the age of 26, his scores for comprehension of
grammar and vocabulary were equivalent to those of a 2-year-old hearing child on a
variety of standardised assessments. His performance on a picture naming vocabulary
task in English was very poor, with only 17/50 responses correct in meaning and articu-
lation. In contrast, in BSL, Stewart produced 29/50 items entirely correctly in meaning
and articulation, and a further 13 items with single-parameter articulation errors. He
was also assessed on BSL comprehension (Herman/Holmes/Woll 1999) and achieved
a score corresponding to that of an average 9-year-old native signer.
In individuals with LKS, the source of the impairment lies in processing of the
auditory signal, with visual and spatial processing relatively unimpaired. The observa-
tions in section 2.1 above, that sign language comprehension may be more dependent
on inferior parietal areas associated with somatosensory and visual motor integration
while spoken language comprehension might rely more heavily on posterior temporal
association regions whose input includes networks intimately involved with auditory
speech processing, may underpin the relative sparing of sign language compared to
spoken language for individuals like Stewart.

4.3. Stuttering

Dysfluencies in signing are known to occur in acquired neurological impairments such as Parkinson's disease, progressive supranuclear palsy, and motor neurone disease. Developmental dysfluency disorders in the deaf population are very rare, with reported prevalence rates for stuttering among deaf people of 0.4 % (Backus 1938) and 0.12 % (Montgomery/Fitch 1988), in contrast to a prevalence of about 5 % in the hearing population. There have been a few surveys and case studies (Voelker/
Voelker 1937; Backus 1938; Bloodstein 1969; Silverman/Silverman 1971; Montgomery/
Fitch 1988). Interestingly, in many of these studies, sign stuttering is reported as most
frequently occurring in simultaneous communication, where speech is co-articulated
with sign.
Whitebread (2004) attempted to isolate and identify what might be the core features
of stuttering in ASL. These potential features are: (1) inconsistent interruptions in sign
and fingerspelling, (2) behaviours most often occurring at the beginning of the sign,
(3) hesitation in sign movement, (4) repetition of sign movement, (5) exaggerated or
prolonged signs, (6) unusual body movements unrelated to linguistic communication,
(7) lack of fluidity in sign articulation, (8) inappropriate muscular tension associated
with sign production, and (9) overt vocal or manual hesitation markers. The most
recent study is Cosyns et al. (2009), a questionnaire study of stutter-like dysfluencies in
Flemish Sign Language. Their respondents report observing ‘involuntary interjections’,
‘repetition of sign movement’, ‘unusual body movements’, and ‘poor fluidity of the
sign’. Strikingly, unlike speech stutterers, most signers exhibiting these behaviours were
unaware of fluency problems. Additionally, dysfluencies were reported to occur at me-
dial or final positions in signs, although in speech stuttering, repetitions or blocks at
the end of a word are rare.
Modality-specific features of signing, such as lower articulatory complexity and a slower rate of articulation compared to speech ⫺ and of course a different set of
articulators (see chapter 25, Language and Modality, for discussion) ⫺ may account
for the strikingly low prevalence and atypical features of sign stuttering in comparison
to speech stuttering, suggesting that stuttering may indeed be primarily a disorder
of speech.

4.4. Schizophrenia and hallucinations of signers

The study of voice-hallucinations in signers (Schonauer et al. 1998; du Feu/McKenna 1999; Atkinson 2006; Atkinson et al. 2007) provides rare insight into the relationship
between sensory experience and how “voices” are perceived. The distinct way in which
hallucinations are experienced may be due to differences in sensory feedback, which
is influenced by both auditory deprivation and language modality. This highlights how
the study of deaf people may inform wider understanding of auditory verbal hallucina-
tions and subvocal processes generally.
There are crucial differences in how hallucinations are experienced by deaf and
hearing people. Hearing people report vivid auditory imagery during voice hallucina-
tions, whereas deaf people, although reporting voice hallucinations, are uncertain about
auditory properties and often report visual or somatic analogues. It is possible that
these differences may arise from differences in the perceptual feedback loop thought
to be related to subvocalisation during thinking. Subvocalisation is primarily a form of
motor imagery; perceptual feedback may vary depending on the modality of the subvo-
cal articulation. Thus, a hearing individual might perceive an auditory trace ancillary
to motor subvocalisation of their thoughts, and the same process may result in a visual
or kinaesthetic percept for signers. Du Feu and McKenna (1999), for example, report
on a deaf patient who perceived his thoughts as simultaneously signed outside his own
head as if he could see them. It is possible he was experiencing imagery of the articula-
tions underlying his sub-articulated thoughts. This leads to the question of whether
these hallucinatory images should be considered to be visual or motor representations.
Signers may combine visual and kinaesthetic percepts into a central representation in
a way that is similar to integrated auditory-kinaesthetic representations in hearing peo-
ple. However, an important difference exists for deaf people producing signs because
the visual feedback received during self-production is substantially different from that
received during comprehension. It is likely that mental representations of sign language
are primarily kinaesthetic because muscle feedback dominates an individual’s experi-
ence of his own productions. This is supported by Emmorey, Korpics, and Petronio’s
(2009) report on the apparent absence of a role for visual feedback provided by concur-
rent visual monitoring of hand movement in the lower periphery of vision during sign
production. Because sign language production involves tracking the arms in space,
hallucinations might result in premotor kinaesthetic traces that could be experienced
as bizarre somatic sensations. Further research is needed to develop feedback models
for sign language and to advance our understanding of how modality and sensory
feedback shape the perceptual quality of experience.

5. Discussion: Issues of modality

At the beginning of this chapter, the question was asked whether language impairments
reside in a specific modality and are thus linked to difficulties with auditory or visual-
spatial processing, or are modality-independent deficits. It is possible that these two
options are not mutually exclusive. In developmental impairments, specific perceptual
processing or cognitive difficulties in the learner might interact with properties of the
language modality. For example, difficulties in processing rapid sequences of closely
related phonemes might create problems for the child acquiring a spoken language but
might be less problematic for the acquisition of a sign language. Conversely, cognitive
difficulties with representing three-dimensional space might not be crucial for acquir-
ing spoken languages but might prevent learners of sign languages from fully mastering
the grammar. This may be the case with Heather (Williams syndrome) and Christopher
(linguistic savant) although the impact of a visual-spatial impairment on each individ-
ual’s sign acquisition was different. Heather, who learned to sign in childhood, was
able to circumvent her visual-spatial problems with BSL and become a skilled signer,
although with some abnormalities. Christopher, despite being a superlative language
learner, found the morphosyntax-space interface very difficult to master, perhaps be-
cause he was exposed to BSL much later than Heather and used BSL far less (Smith
et al. 2010).
A second possibility is that difficulties with language acquisition (whether signed or
spoken) represent core processing problems at a higher level than those associated
with the perception of the signal. A difficulty with the representation and processing
of grammatical rules which allow the child to build up knowledge of the morpho-
syntactic regularities of the language being acquired would affect complex morphosyn-
tax in both modalities, as appears to be the case with Ruthie and Sallie (Woll/Grove
1996; Woll/Grove/Kenchington 1998). The various cases discussed in this chapter pro-
vide contrasting evidence to address the question of whether different language impair-
ments originate from cognitive, linguistic, or perceptual systems. In some cases, similar
impairments are found in both modalities, suggesting an impairment independent of
modality. In other cases, subjects show differences in language abilities in the two mo-
dalities.
In relation to acquired impairments, aphasia studies to date provide ample evidence
for the importance of the left hemisphere in mediation of sign language in the deaf.
Following left hemisphere damage, sign language performance breaks down in a lin-
guistically significant manner. In addition, there is growing evidence for the role of the
right hemisphere in aspects of sign language discourse and syntax. As well as their
usefulness in illuminating the nature of aphasia breakdown, descriptions of sign lan-
guage structure have raised new questions concerning the hemispheric specificity of
linguistic processing. Two examples of specific interest identified by Corina are the
case of sign paraphasia (patient RS) described by Hickok et al. (1996), and Atkinson
et al.’s (2005) study of British signers with left- and right-hemisphere strokes, discussed
in section 2.
These cases demonstrate the way in which a language’s modality may uniquely
influence the form of the linguistic deficit ⫺ in these cases, impairments with no clear
parallels to spoken language disruption. The first study, the case of RS, is important
for our understanding of the neurobiology of language, as the errors can be taken as
evidence for selective language-form-specific linguistic impairment, indicating that the
modality and/or form of a human linguistic system may place unique demands on the
neural mediation and implementation of language.
The second study identified by Corina as of critical importance in addressing the
relationship of modality and language is Atkinson and colleagues’ (2005) study, which
found that RHD signers, although generally exhibiting intact language, were impaired
on tests of locative relationship expressed via classifier constructions and on classifier
placement, orientation, and rotation.
Our understanding of the neural representation of human language has been greatly
enriched by the consideration of sign languages of the deaf. Outwardly, this language
form poses an interesting challenge for theories of cognitive and linguistic neural spe-
cialization, which classically have regarded the left hemisphere as being specialized for
linguistic processing, and the right hemisphere as being specialized for visual-spatial
abilities. Given the importance of putatively visual-spatial properties of sign forms (e.g.,
movement trajectories and paths through three-dimensional space, facial expressions,
memory for abstract spatial locations, and orientation of the hands towards the body), one might expect a greater reliance on right hemisphere resources during sign
language processing. However, despite major differences in the modalities of expres-
sion, striking parallels in the psycholinguistic and cognitive processing of these lan-
guages emerge (see Corina/Knapp (2008) and MacSweeney et al. (2008) for reviews).

6. Conclusion
The report of the UK government’s Foresight Cognitive Systems Project (Marslen-
Wilson 2003) identified the potentially unique contribution of sign language research
to understanding how the brain processes language: “A more dramatic type of cross-
linguistic contrast that may be uniquely valuable in elucidating the underlying proper-
ties of speech and language, comes through the comparison between spoken languages
and native sign languages, such as BSL” (p. 9). As Corina suggests in relation to signers
with stroke, such studies raise the question of whether, for example, aphasic deficits
should be solely defined as those that have clear homologies to the aphasias arising
from left hemisphere damage that are evidenced in spoken languages, or whether the
existence of sign languages will force us to reconsider the conception of linguistic defi-
cits.
The studies presented in this chapter are examples of how cases of sign language
impairments can provide a unique perspective and a model for investigating how differ-
ent language impairments originate from different parts of the cognitive, linguistic, and
perceptual systems. They also enable direct study of impairments in the context of
cross-modal bilingualism. Finally, such profiles provide an evidence base for the devel-
opment of appropriate interventions for use with deaf and hearing children.

Acknowledgements: I am indebted to David Corina for observations of the theoretical importance of the cases described by Hickok et al. (1996) and Atkinson et al. (2005)
for our understanding of aphasia in sign language and spoken language. My research
is supported by the Economic and Social Research Council of Great Britain (Grants RES-
620-28-6001, 6002, Deafness, Cognition and Language Research Centre (DCAL)).

7. Literature
Atkinson, Joanna
2006 The Perceptual Characteristics of Voice-Hallucinations in Deaf People: Insights Into
the Nature of Subvocal Thought and Sensory Feedback Loops. In: Schizophrenia Bulle-
tin 32(4), 701⫺708.
Atkinson, Joanna/Campbell, Ruth/Marshall, Jane/Thacker, Alice/Woll, Bencie
2004 Understanding ‘Not’: Neuropsychological Dissociations Between Hand and Head
Markers of Negation in BSL. In: Neuropsychologia 42, 214⫺229.
Atkinson, Joanna/Denmark, Tanya/Woll, Bencie/Ferguson-Coleman, Emma/Rogers, Katherine/Young, Alys/Keady, John/Burns, Alastair/Geall, Ruth/Marshall, Jane
2011 Deaf with Dementia: Towards Better Recognition and Services. In: Journal of Dementia
Care 19(3), 38⫺39.
Atkinson, Joanna/Gleeson, Kate/Cromwell, Jim/O’Rourke, Sue
2007 Exploring the Perceptual Characteristics of Voice-Hallucinations in Deaf People. In:
Cognitive Neuropsychiatry 12(4), 339⫺361.
Atkinson, Joanna/Marshall, Jane/Woll, Bencie/Thacker, Alice
2005 Testing Comprehension Abilities in Users of British Sign Language Following CVA.
In: Brain and Language 94(2), 233⫺248.
Atkinson, Joanna/Woll, Bencie/Gathercole, Susan
2002 The Impact of Developmental Visuo-spatial Learning Difficulties on British Sign Lan-
guage. In: Neurocase 8, 424⫺441.
Backus, Ollie
1938 Incidence of Stuttering Among the Deaf. In: Annals of Otology, Rhinology and Laryn-
gology 47, 632⫺635.
Baynes, Kathy/Kegl, Judy/Brentari, Diane/Kussmaul, Clifton/Poizner, Howard
1998 Chronic Auditory Agnosia Following Landau-Kleffner Syndrome: A 23 Year Outcome
Study. In: Brain and Language 63, 381⫺425.
Bellugi, Ursula/Marks, Shelley/Bihrle, Amy/Sabo, Helene
1988 Dissociation Between Language and Cognitive Functions in Williams Syndrome. In:
Bishop, Dorothy/Mogford, Kay (eds.), Language Development in Exceptional Circum-
stances. London: Churchill Livingstone, 177⫺189.
Bishop, Dorothy
1982 Comprehension of Spoken, Written and Signed Sentences in Childhood Language Dis-
orders. In: Journal of Child Psychology and Psychiatry 23, 1⫺20.
Bloodstein, Oliver
1969 A Handbook on Stuttering. Chicago, IL: National Society for Crippled Children and
Adults.
Bonvillian, John
2002 Sign Communication Training and Motor Functioning in Children with Autistic Disor-
der and in Other Populations. In: Armstrong, David/Karchmer, Michael/Van Cleve,
John (eds.), The Study of Signed Languages. Essays in Honor of William C. Stokoe.
Washington, DC: Gallaudet University Press, 190⫺212.
Brentari, Diane/Poizner, Howard
1994 A Phonological Analysis of a Deaf Parkinsonian Signer. In: Language & Cognitive
Processes 9, 69⫺99.
Brentari, Diane/Poizner, Howard/Kegl, Judy
1995 Aphasic and Parkinsonian Signing: Differences in Phonological Disruption. In: Brain
and Language 48(1), 69⫺105.
Burr, Charles W.
1905 Loss of the Sign Language in a Deaf Mute from Cerebral Tumor and Softening. In:
New York Medical Journal 81, 1106⫺1108.
Chiarello, Christine/Knight, Robert/Mandel, Mark
1982 Aphasia in a Prelingually Deaf Woman. In: Brain 105(1), 29⫺51.
Corina, David
1998a Aphasia in Users of Signed Language. In: Coppens, Patrick/Lebrun, Yvan/Basso, Anna
(eds.), Aphasia in Atypical Populations. Mahwah, NJ: Lawrence Erlbaum, 261⫺309.
Corina, David
1998b The Processing of Sign Language: Evidence from Aphasia. In: Whitaker, Harry/Stem-
mer, Brigitte (eds.), Handbook of Neurology. San Diego, CA: Academic Press, 313⫺
329.
Corina, David
2000 Some Observations Regarding Paraphasia in American Sign Language. In: Emmorey,
Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor
Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 493⫺507.
Corina, David/Bellugi, Ursula/Reilly, Judy
1999 Neuropsychological Studies of Linguistic and Affective Facial Expressions in Deaf Sign-
ers. In: Language and Speech 42, 307⫺331.
Corina, David/Knapp, Heather
2008 Signed Language and Human Action Processing: Evidence for Functional Constraints
on the Human Mirror-Neuron System. In: Annals of the New York Academy of Sciences
1145(1), 100⫺112.
Corina, David/Kritchevsky, Mark/Bellugi, Ursula
1996 Visual Language Processing and Unilateral Neglect: Evidence from American Sign
Language. In: Cognitive Neuropsychology 13, 321⫺356.
Corina, David/Poizner, Howard/Bellugi, Ursula/Feinberg, Todd/Dowd, Dorothy/O’Grady-Batch,
Lucinda
1992 Dissociation Between Linguistic and Nonlinguistic Gestural Systems: A Case for Com-
positionality. In: Brain and Language 43(3), 414⫺447.
Corina, David/Vaid, Jyotsna/Bellugi, Ursula
1992 The Linguistic Basis of Left Hemisphere Specialization. In: Science 255, 1258⫺1260.
Cosyns, Marjan/Van Herreweghe, Annemieke/Christiaens, Griet/Van Borsel, John
2009 Stutter-like Dysfluencies in Flemish Sign Language Users. In: Clinical Linguistics &
Phonetics 23(10), 742⫺750.
Critchley, MacDonald
1938 “Aphasia” in a Partial Deaf-mute. In: Brain 61, 163⫺169.
Dalsgaard, Søren/Damm, Dorte/Thomsen, Per Hove
2001 Gilles de la Tourette Syndrome in a Child with Congenital Deafness. In: European
Child & Adolescent Psychiatry 10(4), 256⫺259.
Dean, Pamela/Feldman, David/Morere, Donna/Morton, Diane
2009 Clinical Evaluation of the Mini-Mental State Exam with Culturally Deaf Senior Citi-
zens. In: Archives of Clinical Neuropsychology 24(8), 753⫺760.
Denmark, Tanya
2011 Do Deaf Children with Autism Spectrum Disorder Show Deficits in the Comprehension
and Production of Emotional and Linguistic Facial Expressions in British Sign Lan-
guage? PhD Dissertation, University College London.
DiBlasi, Anita
2011 Evaluating the Effects of Aging on American Sign Language Users. MA Thesis, Ohio
State University.
Douglass, E./Richardson, J.C.
1959 Aphasia in a Congenital Deaf-mute. In: Brain 82(1), 68⫺80.
du Feu, Margaret/McKenna, Peter
1999 Prelingually Profoundly Deaf Schizophrenic Patients Who Hear Voices: A Phenomeno-
logical Analysis. In: Acta Psychiatria Scandinavica 99, 453⫺459.
Emmorey, Karen/Corina, David
1993 Hemispheric Specialization for ASL Signs and English Words: Differences Between
Imageable and Abstract Forms. In: Neuropsychologia 31(7), 645⫺653.
Emmorey, Karen/Corina, David/Bellugi, Ursula
1995 Differential Processing of Topographic and Referential Functions of Space. In: Emmo-
rey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence
Erlbaum, 43⫺62.
Emmorey, Karen/Damasio, Hanna/McCullough, Stephen/Grabowski, Thomas/Ponto, Laura L.B./
Hichwa, Richard D./Bellugi, Ursula
2002 Neural Systems Underlying Spatial Language in American Sign Language. In: Neuroim-
age 17(2), 812⫺824.
Emmorey, Karen/Grabowski, Thomas/McCullough, Stephen/Ponto, Laura/Hichwa, Richard/Damasio, Hanna
2005 The Neural Correlates of Spatial Language in English and American Sign Language:
A PET Study with Hearing Bilinguals. In: NeuroImage 24(3), 832⫺840.
Emmorey, Karen/Korpics, Franco/Petronio, Karen
2009 The Use of Visual Feedback During Signing: Evidence from Signers with Impaired
Vision. In: Journal of Deaf Studies and Deaf Education 14(1), 99⫺104.
Grasset, Joseph
1896 Aphasie de la Main Droite chez un Sourd-muet. In: Le Progrès Médical 4, 169.
Herman, Ros/Grove, Nicola/Holmes, Sally/Morgan, Gary/Sutherland, Hilary/Woll, Bencie
2004 Assessing BSL Development: Production Test (Narrative Skills). London: City Univer-
sity Publications.
Herman, Ros/Holmes, Sally/Woll, Bencie
1999 Assessing British Sign Language Development: Receptive Skills Test. Coleford, Glouces-
tershire: Forest Book Services.
Hickok, Gregory/Kritchevsky, Mark/Bellugi, Ursula/Klima, Edward
1996 The Role of the Left Frontal Operculum in Sign Language. In: Neurocase 2, 373⫺380.
Hickok, Gregory/Love-Geffen, Tracy/Klima, Edward
2002 Role of the Left Hemisphere in Sign Language Comprehension. In: Brain and Lan-
guage 82(2), 167⫺178.
Hughlings Jackson, John
1878 On Affections of Speech from Disease of the Brain. In: Brain 1, 304⫺330.
Kail, Robert
1994 A Method for Studying the Generalised Slowing Hypothesis in Children with Specific
Language Impairment. In: Journal of Speech and Hearing Research 37, 418⫺421.
Karmiloff-Smith, Annette
2008 Research Into Williams Syndrome: The State of the Art. In: Nelson, Charles/Luciana,
Monica (eds.), Handbook of Developmental Cognitive Neuroscience. Cambridge, MA:
MIT Press, 691⫺700.
Kimura, Doreen/Battison, Robbin/Lubert, Barbara
1976 Impairment of Nonlinguistic Hand Movements in a Deaf Aphasic. In: Brain and Lan-
guage 3(4), 566⫺571.
Lang, Anthony/Consky, Earl/Sandor, Paul
1993 ‘Signing Tics’ ⫺ Insights Into the Pathophysiology of Symptoms in Tourette’s Syn-
drome. In: Annals of Neurology 33, 212⫺215.
Leischner, Anton
1943 Die “Aphasie” der Taubstummen. In: Archiv für Psychiatrie und Nervenkrankheiten
115, 469⫺548.
Lely, Heather K. J. van der
2005 Domain-specific Cognitive Systems: Insight from Grammatical Specific Language Im-
pairment. In: Trends in Cognitive Sciences 9, 53⫺59.
Leonard, Laurence
1998 Children with Specific Language Impairment. Cambridge, MA: MIT Press.
Liddell, Scott
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Loew, Ruth/Kegl, Judy/Poizner, Howard
1995 Flattening of Distinctions in a Parkinsonian Signer. In: Aphasiology 9(4), 381⫺396.
MacSweeney, Mairead/Capek, Cheryl/Campbell, Ruth/Woll, Bencie
2008 The Signing Brain: The Neurobiology of Sign Language. In: Trends in Cognitive Scien-
ces 12(11), 432⫺440.
MacSweeney, Mairead/Campbell, Ruth/Woll, Bencie/Brammer, Michael/Giampetro, Vincent/David, Anthony/Calvert, Gemma/McGuire, Philip
2006 Lexical and Sentential Processing in British Sign Language. In: Human Brain Mapping
27(1), 63⫺76.
Marini, Andrea/Carlomagno, Sergio/Caltagirone, Carlo/Nocentini, Ugo
2005 The Role Played by the Right Hemisphere in the Organization of Complex Textual
Structures. In: Brain and Language 93(1), 46⫺54.
Marshall, Jane/Atkinson, Joanna/Smulovitch, Elaine/Thacker, Alice/Woll, Bencie
2004 Aphasia in a User of British Sign Language: Dissociation Between Sign and Gesture.
In: Cognitive Neuropsychology 21(5), 537⫺554.
Marshall, Jane/Atkinson, Joanna/Woll, Bencie/Thacker, Alice
2005 Aphasia in a Bilingual User of British Sign Language and English: Effects of Cross-
linguistic Cues. In: Journal of Cognitive Neuropsychology 22(6), 719⫺736.
Marslen-Wilson, William
2003 Speech and Language: Research Review for Foresight Cognitive Systems Project
[www.foresight.gov.uk/Cognitive%20Systems/FS_CogA4_SpeechLang.pdf].
Mason, Katherine/Rowley, Katherine/Marshall, Chloe/Atkinson, Joanna/Herman, Rosalind/Woll,
Bencie/Morgan, Gary
2010 Identifying Specific Language Impairment in Deaf Children Acquiring British Sign
Language: Implications for Theories and Practice. In: British Journal of Developmental
Psychology 28, 33⫺49.
Montgomery, Brenda/Fitch, James
1988 The Prevalence of Stuttering in the Hearing-impaired School Age Population. In: Jour-
nal of Speech and Hearing Disorders 53, 131⫺135.
Morgan, Gary/Herman, Ros/Woll, Bencie
2007 Language Impairments in Sign Language: Breakthroughs and Puzzles. In: International
Journal of Language and Communication Disorders 42(1), 97⫺105.
Morris, Huw/Thacker, Alice/Newman, Peter/Lees, Andrew
2000 Sign Language Tics in a Prelingually Deaf Man. In: Movement Disorders 15(2), 318⫺
320.
Newman, Aaron/Bavelier, Daphne/Corina, David/Jezzard, Peter/Neville, Helen
2002 A Critical Period for Right Hemisphere Recruitment in American Sign Language Proc-
essing. In: Nature Neuroscience 5, 76⫺80.
Poizner, Howard
1990 Language and Motor Disorders in Deaf Signers. In: Hammond, Geoffrey (ed.), Cere-
bral Control of Speech and Limb Movements. Amsterdam: Elsevier, 303⫺326.
Poizner, Howard/Kegl, Judy
1992 The Neural Basis of Language and Motor Behaviour: Evidence from American Sign
Language. In: Aphasiology 6, 219⫺256.
Poizner, Howard/Kegl, Judy
1993 Neural Disorders of the Linguistic Use of Space and Movement. In: Annals of the New
York Academy of Sciences 682, 192⫺213.
Poizner, Howard/Klima, Edward/Bellugi, Ursula
1987 What the Hands Reveal About the Brain. Cambridge, MA: MIT Press.
Quinto-Pozos, David/Forber-Pratt, Anjali/Singleton, Jenny
2011 Do Developmental Communication Disorders Exist in the Signed Modality? Reporting
on the Experiences of Language Professionals and Educators from Schools for the
Deaf. In: Language, Speech, and Hearing Services in Schools 42, 423⫺443.
Quinto-Pozos, David/Singleton, Jenny
2010 Investigating Signed Language Disorders: Case Study Methods and Results. Paper Pre-
sented at the 10th Theoretical Issues in Sign Language Research (TISLR) Conference,
West Lafayette, IN.
Roulet Perez, Eliane/Prélaz, Anne-Claude/Metz-Lutz, Marie-Noëlle/Boyes Braem, Penny/Deonna, Thierry
2001 Sign Language in Childhood Epileptic Aphasia (Landau-Kleffner Syndrome). In: De-
velopmental Medicine & Child Neurology 43(11), 739⫺744.
Sarno, John/Swisher, Linda/Sarno, Martha
1969 Aphasia in a Congenitally Deaf Man. In: Cortex 5, 398⫺414.
Schonauer, K./Achtergarde, D./Gotthardt, U./Folkerts, H. W.
1998 Hallucinatory Modalities in Prelingually Deaf Schizophrenic Patients: A Retrospective
Analysis of 67 Cases. In: Acta Psychiatrica Scandinavica 97, 1⫺7.
Shield, Aaron
2010 The Signing of Deaf Children with Autism: Lexical Phonology and Perspective-taking
in the Visual-spatial Modality. PhD Dissertation, University of Texas at Austin.
Sieratzki, Jechil/Calvert, Gemma/Brammer, Michael/Campbell, Ruth/David, Anthony/Woll,
Bencie
2001 Accessibility of Spoken, Written, and Sign Language in Landau-Kleffner Syndrome:
A Linguistic and Functional MRI Study. In: Epileptic Disorders 3(2), 79⫺89.
Silverman, Franklin/Silverman, Ellen-Marie
1971 Stutter-like Behavior in Manual Communication of the Deaf. In: Perceptual and Motor
Skills 33, 45⫺46.
Smith, Neil/Tsimpli, Ianthi/Morgan, Gary/Woll, Bencie
2010 The Signs of a Savant. Cambridge: Cambridge University Press.
Tureen, Louis/Smolik, Edmund/Tritt, Jack
1951 Aphasia in a Deaf Mute. In: Neurology 1, 237⫺244.
Tyrone, Martha/Woll, Bencie
2008a Palilalia in Sign Language. In: Neurology 70(2), 155⫺156.
Tyrone, Martha/Woll, Bencie
2008b Sign Phonetics and the Motor System: Implications from Parkinson's Disease. In: Quer,
Josep (ed.), Signs of the Time: Selected Papers from TISLR 8. Hamburg: Signum, 43⫺68.
Uecker, Anne/Mangan, Peter/Obrzut, John/Nadel, Lynn
1993 Down Syndrome in Neurobiological Perspective: An Emphasis on Spatial Cognition.
In: Journal of Clinical Child Psychology 22, 266⫺276.
Vallar, Giuseppe/Papagno, Costanza
1993 Preserved Vocabulary Acquisition in Down’s Syndrome: The Role of Phonological
Short-term Memory. In: Cortex 29, 467⫺483.
Voelker, E. S./Voelker, Charles
1937 Spasmophemia in Dyslalia Cophotica. In: Annals of Otology, Rhinology and Laryngol-
ogy 46, 740⫺743.
Volterra, Virginia/Capirci, Olga/Pezzini, Grazia/Sabbadini, Letizia/Vicari, Stefano
1996 Linguistic Abilities in Italian Children with Williams Syndrome. In: Cortex 32, 663⫺677.
Whitebread, Geoff
2004 Stuck on the Tip of My Thumb: Stuttering in American Sign Language. Honors MA
Thesis, Gallaudet University.
Woll, Bencie/Grove, Nicola
1996 On Language Deficits and Modality in Children with Down Syndrome: A Case Study
of Hearing DS Twins with Deaf Parents. In: Journal of Deaf Studies and Deaf Education
1(4), 271⫺278.
Woll, Bencie/Grove, Nicola/Kenchington, Debora
1998 Spoken Language, Sign Language and Gesture: Using a Natural Case Study to Under-
stand Their Relationships and Implications for Language Development in Children
with Down Syndrome. In: Björck-Åkesson, Eva/Lindsay, Peter (eds.), Communication
Naturally: Theoretical and Methodological Issues in Augmentative and Alternative Com-
munication. Proceedings of the Fourth ISAAC Research Symposium, Vancouver. Van-
couver: Malardalen University Press, 76⫺91.
Woll, Bencie/Morgan, Gary
2011 Language Impairments in the Development of Sign: Do They Reside in a Specific
Modality or Are They Modality-independent Deficits? In: Bilingualism: Language and
Cognition 15, 75⫺87.

Bencie Woll, London (United Kingdom)


VII. Variation and change

33. Sociolinguistic aspects of variation and change


1. Introduction
2. Linguistic variables in sign languages
3. Phonological variation and change
4. Lexical variation and change
5. Grammatical variation
6. Register and stylistic variation
7. Conclusion
8. Literature

Abstract
In this chapter, we provide an overview of the study of sociolinguistic variation and
change in sign languages, with a focus on deaf sign languages in English-speaking coun-
tries (particularly ASL, Auslan, BSL, and NZSL). We discuss linguistic, social, and
stylistic factors in sociolinguistic variation, and the nature of variables in signed and
spoken languages. We then move on to describe work on phonological variation, describ-
ing specific studies investigating variation in the formational parameters of location and handshape, as well as one- versus two-handed productions of signs.
some of the major research into lexical variation, and its relationship to social factors
such as the signer’s age, region of origin, gender, sexual orientation, ethnicity, and reli-
gion. This is followed by a discussion of grammatical variation, such as studies focussing
on linguistic and social factors that condition variable subject argument expression. We
then describe some of the work on stylistic variation in sign languages, before concluding
that much work remains to be carried out to better understand sociolinguistic variation
in deaf communities.

1. Introduction
In this chapter, we describe sociolinguistic variation and change in sign languages. It
has been a long-standing observation that there is considerable attested variation in
the use of most sign languages (e.g., Stokoe/Casterline/Croneberg (1965) for American
Sign Language (ASL)). Indeed, as additional sign languages are identified and de-
scribed, this observation continues to hold true (e.g., Meir et al. (2007) for Al-Sayyid
Bedouin Sign Language, a village community sign language of Israel). The factors that
drive sociolinguistic variation and change in both spoken and sign language communi-
ties can be broadly categorised into three types ⫺ linguistic or internal constraints,
social or inter-speaker constraints, and stylistic or intra-speaker constraints (e.g., Mey-
erhoff 2006). They form a complex interrelationship, with each influencing language
use in distinctive ways. Social factors include, for example, a signer’s age, region of
origin, gender, ethnicity, and socio-economic status (Lucas/Valli/Bayley 2001). Linguis-
tic factors include phonological processes such as assimilation and reduction (e.g.,
Schembri et al. 2009), and grammaticalization (see chapter 34, Lexicalization and
Grammaticalization). Stylistic variation involves alternation between, for example, cas-
ual and formal styles of speech used by an individual speaker, often reflecting differing
degrees of attention to speech due to changes in topic, setting, and audience (Schilling-
Estes 2002). Much of the research on sociolinguistic variation, which we describe and
illustrate in this chapter, is concerned with variation that reflects the type of linguistic
and social factors listed above.
It should be noted, however, that some factors involved in sociolinguistic variation
in sign languages are distinctive. Urban Deaf signing communities (in contrast to mixed
Deaf-hearing village sign language communities, e.g., Nyst (2007); see chapter 24,
Shared Sign Languages) are exceptional among linguistic communities in that they
are invariably extremely small minority communities embedded within larger majority
communities whose languages are in an entirely different modality and which may have
written forms and extensive written literatures, unlike sign languages. The influence of
the spoken and written language of the majority hearing community on the local deaf
community sign language is thus a major driving factor in much observed variation (see
chapter 35, Language Contact and Borrowing), and some of the linguistic outcomes of
this contact situation (such as fingerspelling and mouthing) are unique to bimodal
bilingual communities (Lucas/Valli 1992). This picture is further complicated by pat-
terns of language acquisition and generational transmission which are atypical for most
Deaf signers (see chapter 28 on acquisition). These complex usage and acquisition
environments are thus additional important factors when discussing variation in sign
languages.
In the following sections, we examine and exemplify sociolinguistic variation in sign
languages at the levels of phonology, lexicon, grammar, and discourse. These variant
forms are often described in terms of accents, regional and social dialects, and registers
or styles (e.g., Meyerhoff 2006). Our discussion, however, also includes some observa-
tions on language change, as sociolinguistic variation is inseparable from language
change (Labov 1972). From one perspective, this is captured in the ‘apparent time
hypothesis’ which suggests that variation in the linguistic system used by speakers of
different ages at a single point in time can indicate a change in progress (Bailey 2002).
From another perspective, the use of variant forms of individual signs (phonological
or lexical) or certain multi-sign constructions (grammar, register) in particular contexts
might also be called ‘synchronic contextual variation’ (Heine 2002). This is based on
the observation that variation is often indicative of differential patterns of grammati-
calization of forms across a speech community (Pagliuca 1994; Chambers 1995).
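To make the apparent-time logic concrete, a minimal sketch follows (Python with the pandas library; the cohort labels and token codings are invented for exposition and are not drawn from any study cited in this chapter). It simply tabulates the rate of an innovative variant by age cohort at a single point in time; a rate that rises steadily from the oldest to the youngest cohort is the classic apparent-time signature of a change in progress.

```python
# Hypothetical apparent-time tabulation: proportion of an innovative
# variant per age cohort, all recorded at one point in time.
import pandas as pd

tokens = pd.DataFrame({
    "age_group":  ["older"] * 4 + ["middle"] * 4 + ["younger"] * 4,
    # 1 = innovative variant used, 0 = conservative variant
    "innovative": [0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0],
})

# The mean of a 0/1 column per cohort is the proportion of innovative tokens.
rates = tokens.groupby("age_group")["innovative"].mean()
print(rates)  # middle: 0.50, older: 0.25, younger: 0.75
```

On this invented data, the monotonic increase towards younger signers would be read, under the apparent time hypothesis, as evidence of change in progress rather than mere random variation.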
In other words, there is always variation in language, and variation may function as
an index of social variables such as gender, class, etc. In addition, some variation may
actually reflect on-going language change, including grammaticalization. Only by tak-
ing all these observations into account can sociolinguistic variation and change in sign
languages be properly understood.

2. Linguistic variables in sign languages


Variation in spoken and sign languages may be found at all levels of structural organi-
zation. At the phonological level, speakers of different British English varieties vary
in their use of rhotic sounds, such as the use of /r/ after a vowel in words such as father
and tractor in south-western England but not in the south-east (Wardhaugh 1992). In
sign languages, variation may occur in all sublexical features of signs, including hand-
shape (e.g., a @- or u-hand configuration in the sign index1 in ASL) and location (e.g.,
the sign know produced on the forehead or cheek in Australian Sign Language (Aus-
lan)), as we shall see below. Lexical variation in English includes words such as fall
versus autumn used in North American and British/Australian/New Zealand varieties,
respectively (Schneider 2006). Similarly, regional varieties of British Sign Language
(BSL) and Auslan vary in their use of signs for colours, such as blue, green, and white
(Johnston/Schembri 2007). Grammatical variation in English includes the use of double
negation in many vernacular varieties of the language, as in she didn’t say nothing
compared to she didn’t say anything (Meyerhoff 2006). In the discussion below, we will
see that ASL and Auslan both exhibit variable subject expression, with signers some-
times using clauses such as index1 understand and sometimes simply understand to
mean ‘I understand’. Discourse-level variation includes register or stylistic variation,
and may involve the use of different genres or situational uses of language. For exam-
ple, conversations in English differ in structure from narratives: turn-taking occurs in
conversations, with a range of cues used to determine whose turn it is to speak, and
the structure of talk varies widely, but usually follows a number of principles, such as
the maxim of relevance. In narratives, however, usually the storyteller speaks with
minimal interruptions, using a structured sequence of sentences that describe events in
the order in which they occur. Similar differences have been found in sign language
conversations and narratives (Coates/Sutton-Spence 2001; Johnston/Schembri 2007;
also see chapter 22, Communicative Interaction).
The sociolinguistic study of spoken languages has long rested on two guiding princi-
ples: the “principle of quantitative modelling” and the “principle of multiple causes”
(Young/Bayley 1996). The first principle refers to the need to carefully quantify both
variation in linguistic form and the relationship between a variant form and features
of its surrounding linguistic environment and social context. The second principle re-
flects the long-standing assumption that no single linguistic or social factor can fully
explain variation in language use. Lucas and Bayley (2010) provide the following exam-
ple: variable use of -ing in English (that is, whether a speaker in a particular situation
says workin’ or working) is influenced by the grammatical category of the word to
which the ending is attached (for example, whether it is a verb or a noun) and by the
speaker’s gender and social class (Houston 1991; Trudgill 1974).
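Quantitative modelling of this kind is, at heart, a regression problem: each token of the variable is coded for the variant used and for its hypothesised linguistic and social conditioning factors, and all factor groups are then tested simultaneously (this is essentially what the Varbrul software discussed in section 3.2 computes, as a constrained form of logistic regression). The sketch below is a minimal illustration in Python, assuming the widely available pandas and statsmodels libraries; the coded tokens and factor labels are invented for exposition, not taken from the studies cited in this chapter.

```python
# Minimal Varbrul-style multivariate analysis: logistic regression of a
# binary sociolinguistic variable (workin' vs. working) on one linguistic
# factor group (grammatical category) and two social factor groups
# (gender, social class). All data are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

tokens = pd.DataFrame({
    # 1 = reduced variant workin', 0 = full form working
    "reduced":      [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1],
    "category":     ["verb", "verb", "verb", "noun", "noun", "verb",
                     "verb", "noun", "verb", "noun", "verb", "noun",
                     "verb", "noun", "verb", "noun"],
    "gender":       ["m", "f", "m", "m", "f", "f", "m", "f",
                     "m", "m", "f", "f", "m", "f", "m", "f"],
    "social_class": ["working", "middle", "working", "working", "middle",
                     "middle", "working", "middle", "working", "working",
                     "middle", "middle", "working", "middle", "working",
                     "working"],
})

# All factor groups enter one model simultaneously (the "principle of
# multiple causes"), rather than being inspected one at a time.
result = smf.logit("reduced ~ C(category) + C(gender) + C(social_class)",
                   data=tokens).fit(disp=False)
print(result.summary())  # coefficients are analogous to Varbrul factor weights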

3. Phonological variation and change


To date, three major sociolinguistic investigations have systematically addressed pho-
nological variation in particular sign languages: ASL, Auslan, and New Zealand Sign
Language (NZSL). All three studies examined location variation, while the ASL study
also examined handshape variation as well as variable metathesis and location deletion
in the sign deaf. With respect to the language-external factors of region, age, gender, and ethnicity, these studies show how such factors shape apparently random variation in rather systematic ways.

3.1. Early studies on ASL

Early studies conducted into phonological variation in ASL include Woodward, Erting,
and Oliver’s (1976) investigation into the variable use of signs, such as rabbit and
colour, which have related forms produced on the face or on the hands. Drawing on
data from 45 participants, these researchers found evidence of variation due to ethnic-
ity, with Black signers being much more likely to use the hand variants. Their data also
suggested regional differences, with signers in New Orleans producing fewer variants
on the hands than those in Atlanta. Another study by Woodward and DeSantis (1977)
found similar ethnic variation in two-handed versus one-handed forms of ASL signs
such as cat, cow (see Figure 33.1), and glasses, with White signers using significantly
more of the one-handed variants of these signs. They also found that Southern signers
used more two-handed variants than non-Southerners, and that older signers used
more than younger signers.

Fig. 33.1: Two-handed and one-handed variants of the ASL sign cow (Baker-Shenk/Cokely 1980,
90). Copyright © 1980 by Gallaudet University Press. Reprinted with permission.

One study that did not directly concern itself with phonological variation is also rele-
vant here. As Lucas et al. (2001) pointed out, Frishberg’s investigation into phonologi-
cal change in ASL demonstrated that there was a relationship between sociolinguistic
variation and language change. Frishberg (1975) compared lexical signs listed in the
1965 dictionary of ASL with the same signs in publications that documented older
varieties of ASL and French Sign Language (LSF, to which ASL is related historically;
see chapter 38 on the history of sign languages). In particular, she found that many
newer forms of signs involved changes from two-handed variants to one-handed forms
(e.g., mouse, devil), from less to more symmetrical variants (e.g., depend, last), and/
or moved from more peripheral locations in the signing space to more centralised
places of articulation (e.g., like, feel). Similar findings for BSL were reported in Woll
(1987), such as the movement of signs from higher to lower locations (e.g., perhaps from the head to in front of the body, as shown in Figure 33.2, trouble and police from the arm to the back of the wrist or hand), as well as a tendency for two-handed signs to become one-handed. Both Frishberg (1975) and Woll (1987) remarked that diachronic changes in ASL and BSL were related to synchronic phonological variation.

Fig. 33.2: Historical change in the BSL, Auslan and NZSL sign perhaps (Kyle/Woll 1985). Copyright © 1985 by Cambridge University Press. Reprinted with permission.

Many research projects into sociolinguistic variation in ASL have tended to draw
on data from small numbers of participants, and varied a great deal in methodology
(Patrick/Metzger 1996). For example, Hoopes (1998) undertook a study into variable
pinky extension in ASL signs such as think and wonder, finding evidence that pinky
extension occurred more often in signs that were emphatically stressed and in more
intimate social registers. This study, however, was based on only 100 tokens collected from data produced by a single 55-year-old White female non-native signer.

3.2. The case of deaf in ASL

In the 1990s, the first large-scale studies of phonological variation in ASL were under-
taken by Ceil Lucas and her colleagues (Lucas/Bayley/Valli 2001). These investigations
drew on a representative sample of the American deaf population and employed multi-
variate analyses of the data (that is, an analysis which considers multiple variables
simultaneously) using Varbrul software, a statistical programme developed specifically
for sociolinguistic research. The dataset for this major study consisted of videotaped
conversations, interviews, and lexical sign elicitation sessions collected from 207 Deaf
native and early learner signers of ASL in seven sites across the USA: Staunton, Vir-
ginia; Frederick, Maryland; Boston, Massachusetts; Olathe, Kansas; New Orleans, Lou-
isiana; Fremont, California; and Bellingham, Washington. The participants included a
mix of men and women, both White and African-American, from three different age
groups (15⫺25, 26⫺54, and 55 years of age and over). The sample also included signers
from both working class and middle class backgrounds.
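
The logic of such a multivariate analysis can be illustrated with a short sketch. The
Python fragment below is only a toy illustration of the Varbrul idea, using a modern
logistic regression in its place: the token counts, factor levels, and effect sizes are
all invented for illustration and do not reproduce the actual ASL data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # invented token count, for illustration only

# Randomly assign each token to a level of each factor group.
tokens = pd.DataFrame({
    "function": rng.choice(["compound", "predicate"], size=n),
    "region": rng.choice(["Boston", "Kansas", "Virginia"], size=n),
    "age_group": rng.choice(["15-25", "26-54", "55+"], size=n),
})

# Simulate the variable realisation (1 = citation form): compounds,
# non-Boston signers, and younger signers disfavour the citation
# form here, but these effect sizes are invented.
logit = (-0.4
         + 0.9 * (tokens["function"] == "predicate")
         - 0.7 * (tokens["region"] != "Boston")
         + 0.8 * (tokens["age_group"] == "55+"))
tokens["citation"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Entering all factor groups simultaneously estimates each factor's
# effect while controlling for the others -- the core idea behind
# Varbrul's variable-rule analysis.
model = smf.logit("citation ~ C(function) + C(region) + C(age_group)",
                  data=tokens).fit(disp=False)
print(model.summary())
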
The first study in the ASL project investigated variation in the sign deaf, building
on an initial small-scale study reported in Lucas (1995). This sign has a number of
phonological variants, but three were the focus of the study (see Figure 33.3): the
citation form in which the 1-handshape contacts the ear and then moves down to
contact the chin, and two non-citation forms which consist of either a reversed move-
ment of the hand from chin to ear or a reduced form in which the handshape simply
contacts the cheek.

Fig. 33.3: Phonological variation in the ASL sign deaf: (a) ear to chin variant, (b) chin to ear
variant, (c) contact cheek variant in the compound deaf^culture (Lucas/Bayley/Valli
2001). Copyright © 2001 by Cambridge University Press. Reprinted with permission.

Results from the multivariate analysis of 1,618 examples showed
that the factors that conditioned such phonological variation were linguistic, social, and
stylistic in nature. First, Bayley, Lucas, and Rose (2000) reported that signers were less
likely to use a citation form in nominal compounds, such as deaf^world or deaf^cul-
ture, but more likely to do so when deaf was part of a predicate, as in index3 deaf
(‘She is deaf’). Second, social factors such as region and age were important. Signers
in Kansas, Missouri, and Virginia tended to use non-citation forms of deaf more than
twice as often as signers in Boston. Within Boston, moreover, older signers were found
to be consistently more likely to use the citation form than younger signers. Third,
stylistic factors may have also been at work, with less use of the citation forms in
narratives than in conversation, but the number of examples of deaf collected from
narratives was small. Thus, this result is only suggestive and needs to be confirmed by
a larger sample.

3.3. Handshape in ASL

Lucas and her colleagues next explored variation in ASL signs produced in citation
form with the 1-handshape, such as the lexical signs go-to, mouse, and black, together
with functors such as index2, but, and where. This class of signs exhibits variation in
hand configuration. Bayley, Lucas, and Rose (2002) found that this variation may be
relatively small, with some bending of the 1-handshape so that it resembles an
X-handshape, or with thumb extension so that it looks like an L-hand configuration.
In some cases, however, the variation may be more marked, with the thumb and
other fingers also extended so that the 1-handshape resembles a 5-handshape. In some early
observations about this variation, Liddell and Johnson (1989) suggested that this phe-
nomenon might be primarily due to assimilation effects in which handshape features
of the neighbouring signs influenced the variant forms. The results presented in Bayley,
Lucas, and Rose (2002), however, showed that additional linguistic and social factors
were at work in handshape variation. Analysis of the 5,356 examples in the ASL data-
set using Varbrul revealed the relative strength of the influence of each factor when
compared to other factors, and phonological environment turned out to be significant,
but not the most important linguistic factor. Instead, like deaf, grammatical function
was the strongest influence. Signers are more likely to choose the L-handshape variant
for wh-signs, for example, and the 5-handshape variant for pronouns (particularly index1), whereas
other lexical and function signs are more often realised in citation form. Social factors
were also important, with signers in California, Kansas/Missouri, Louisiana, and Mas-
sachusetts favouring the citation form, while those in Maryland, Virginia, and Washing-
ton state all disfavoured it. Lucas and her colleagues also found that age, social class,
and ethnicity were important constraints, but not for all variants of the 1-handshape:
younger signers used the L and 5 variants more often than older signers, for example,
but while signers who were not native users of ASL preferred the 5-handshape form,
this difference was not found for the L-handshape variants.

3.4. Location in ASL

Lucas and her team also investigated location variation in a class of ASL signs repre-
sented by know. In their citation form, these signs are produced on or near the signer’s
forehead, but often may be produced at locations lower than this, either on other parts
of the signer’s body (such as near the cheek) or in the space in front of the signer’s
chest. Again, Varbrul analysis of 2,594 ASL examples in their dataset showed that
grammatical function was the strongest linguistic factor, with nouns, verbs, and adjec-
tives (e.g., father, understand, dizzy) appearing more often in citation forms while
prepositions (e.g., for) and interrogative signs (e.g., why) favoured lowered variants.
Phonological environment was also important, with preceding signs made on or near
the body having a significant influence on whether or not the target sign appeared as
a lowered variant. The results also indicated that younger signers, men, and non-native
signers all favoured lowered variants when compared to older signers, women, and
native signers. Regional and ethnic differences also emerged, with African-American
deaf people and those from Virginia and Washington state tending to use more citation
forms than Whites and signers from the five other regions.

3.5. One-handed versus two-handed forms in BSL and NZSL

There has also been some work on phonological variation in BSL and NZSL. Deuchar
(1981) noted that phonological deletion of the non-dominant hand in two-handed signs
was possible in BSL (sometimes known as ‘weak drop’, e.g., Brentari 1998). Deuchar
claimed that the deletion of the non-dominant hand in symmetrical two-handed signs,
such as give and hospital, was frequent, as also noted in ASL (Battison 1974). She
argued that weak drop in asymmetrical two-handed signs appeared most likely in signs
where the handshape on the non-dominant hand was a relatively unmarked configura-
tion, such as 5 or B. Thus, variants without the non-dominant hand seemed more
common in her data in signs such as right (with an unmarked non-dominant hand-
shape) than in father (with a more marked one). Furthermore, she undertook a pilot
study to investigate what discourse
factors might affect the frequency of weak drop. Deuchar predicted that signers might
use less deletion in more formal varieties of BSL. She compared 30 minutes of BSL
data collected under two situations: one at a deaf club social event and another in a
church service. Based on a small dataset of 201 tokens, she found that only 6 % of two-
handed signs occurred with weak drop in the formal situation, whereas 50 % exhibited
deletion of the non-dominant hand in the informal setting. She also suggested that this
weak drop variation may also reflect language change in progress, based on Woll’s
(1981) claim that certain signs (e.g., again) which appear to be now primarily one-
handed in modern BSL (and indeed in all varieties of BSL, Auslan, and NZSL ⫺
sometimes referred to as BANZSL) were formerly two-handed.
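
The strength of the register effect that Deuchar reports can be made concrete with a
simple significance test. The sketch below is hypothetical: the text gives only the
overall total of 201 tokens and the two percentages, so the split between settings
assumed here is invented for illustration.

# Chi-square test of whether register (formal vs. informal) affects
# weak drop, in the spirit of Deuchar's comparison. The cell counts
# are assumptions, not the reported figures.
from scipy.stats import chi2_contingency

#            weak drop, no weak drop
formal = [6, 95]      # roughly 6 % weak drop in the church service
informal = [50, 50]   # roughly 50 % weak drop at the deaf club

chi2, p, dof, expected = chi2_contingency([formal, informal])
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3g}")

Under any plausible split of the 201 tokens, a difference of this size would be highly
unlikely if register had no effect on weak drop.
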
Glimpses of similar patterns of diachronic change in phonological structure
emerged in a study of NZSL numeral signs (McKee/McKee/Major 2011), in which it
was noted that variants consistently favoured by the younger generation for numerals
six to ten utilise only the dominant hand, whereas older signers are more likely to use
a two-handed ‘base 5’ (weak hand) plus ‘additional digits’ (dominant hand) system for
these numerals (e.g., signing five on the non-dominant hand simultaneously with two
on the dominant hand for ‘seven’, similar to the number gestures sometimes used by
hearing people).

3.6. Location in Auslan and NZSL

The NZSL numerals data comes from a major NZSL sociolinguistic project which, like
the related Auslan variation project that preceded it, replicated the work of Lucas and
colleagues. The Auslan and NZSL sociolinguistic variation projects also investigated
phonological variation, focusing specifically on variation in the location parameter in
a class of signs that includes think, name, and clever, which, like the similar class of
signs in ASL studied by Lucas, Bayley, Rose, and Wulf (2002), could be produced at
locations lower than the forehead place of articulation seen in their citation forms (see
Figure 33.4).
Schembri et al. (2009) reported that variation in the use of the location parameter
in these signs reflects both linguistic and social factors, as has also been reported for
ASL. Like the American study, the Auslan results provided evidence that the lowering
of this class of signs reflects a language change in progress in the Australian deaf
community, led by younger people and individuals from the larger urban centres.

Fig. 33.4: Three BANZSL forehead location signs and one lowered variant (Johnston/Schembri
2007). Copyright © 2007 by Cambridge University Press. Reprinted with permission.

This
geolinguistic pattern of language change (i.e., from larger to smaller population cen-
tres) is known as cascade diffusion, and is quite common cross-linguistically (Labov
1990). The NZSL study found evidence of similar regional differences in the use of
lowered variants, but age was not a significant factor in their dataset.
Furthermore, the results indicated that some of the particular factors at work, and
the kinds of influence that they have on location variation, appear to differ in Auslan
and NZSL when compared to ASL. First, the Auslan and NZSL studies suggested
relatively more influence on location variation from the immediate phonological envi-
ronment (i.e., from the preceding and following segment) than is reported for ASL.
This may reflect differences in methodology between the three studies ⫺ unlike the
ASL study, the Auslan and NZSL studies did not include signs made in citation form
at the temple or compound signs in which the second element was produced lower in
the signing space. Second, the Auslan data suggested that location variation in this
class of signs is an example of language change led by deaf women, not by deaf men
as in ASL (Lucas/Bayley/Valli 2001). This is typical of a language change known as
‘change from below’, that is, one that is occurring without there being much awareness
of this change in progress among the community of speakers or signers (see Labov
1990). Third, the Australian and New Zealand researchers showed that grammatical
function interacts with lexical frequency in conditioning location variation (i.e., they
found that high frequency verbs were lowered more often than any other class of
signs), a factor not considered in the ASL study.

4. Lexical variation and change

Lexical variation presents the clearest examples of sociolinguistic variation in many
sign languages, with lexical choices often systematically associated with signers of a
particular age, gender, region, ethnicity, or educational background.

4.1. Region

From the very beginning of the systematic study of sign languages, significant
amounts of regional lexical variation in signing communities have been reported. For
example, in the appendix to the 1965 Dictionary of American Sign Language (Stokoe/
Casterline/Croneberg 1965), Croneberg discusses regional variation in ASL, focussing
on the eastern states of Maine, Vermont, New Hampshire, Virginia, and North Caro-
lina. His lexical variation study, based on a list of 134 vocabulary items, suggested that
the ASL varieties used in Virginia and North Carolina represented distinct dialects,
whereas no such dialect boundary could be found between the three New England
states, where many of the same lexical items were shared.
Regional lexical variation has been noted in a wide range of sign languages, such as
LSF (Moody 1983), Italian Sign Language (Radutzky 1992), Brazilian Sign Language
(Campos 1994), South African Sign Language (Penn 1992), Filipino Sign Language
(Apurado/Agravante 2006), and Indo-Pakistani Sign Language (Jepson 1991; Wood-
ward 1993; Zeshan 2000). Even signed varieties that are used across relatively small
geographical areas, such as Flemish Sign Language in the Flemish-speaking areas of
Belgium (Van Hecke/De Weerdt 2004) and Sign Language of the Netherlands (NGT,
Schermer 2004), can have multiple distinctive regional variants. NGT, for instance,
has five regional dialects, with significant lexical differences between all regions but
particularly between the south and the rest of the country. Indeed, the compilers of
sign language dictionaries have struggled to deal with lexical variation adequately, and
have often avoided the problem altogether by compiling standardising lexicons (see
chapter 37, Language Politics), or by minimising the amount of information included
(e.g., Brien 1992 for BSL).
In our discussion below, we largely draw on data from sociolinguistic studies of
ASL, Auslan, NZSL, and BSL. The reason is that, more important and interesting
than the individual examples themselves, the phenomenon stems from similar sociolinguistic
factors in different signing communities and manifests itself in very similar ways.
Take, for example, the two main regional varieties of Auslan ⫺ the northern dialect
and the southern dialect. Most noticeably, these two dialects differ in the signs tradi-
tionally used for numbers, colours, and some other concepts (Johnston 1998). Indeed,
the core set of vocabulary items in certain semantic areas (e.g., colour signs) is actually
different for every basic term in these dialects (see Figure 33.5).
There are also a number of state-specific lexical differences that cut across
this major dialect division. The sign afternoon, for example, has five forms or variants
across six states (see Figure 33.6).
The geographical distribution of lexical variation in core areas of the lexicon, like
that illustrated above, underlies the proposal that Auslan can be divided into
two major regional varieties. It appears that these two regional varieties have devel-
oped, at least in part, from lexical variation in different varieties of BSL in the 19th
century, although primary sources documenting sign language use at the time are
lacking.
In NZSL, there is similar evidence of regional variation in the lexicon (see Kennedy
et al. 1997), associated with three main concentrations of Deaf population in Northern
(Auckland), Central (Wellington), and Southern (Christchurch) cities.

Fig. 33.5: Colour signs in the northern (top) and southern (bottom) dialects of Auslan (Johnston/
Schembri 2007). Copyright © 2007 by Cambridge University Press. Reprinted with per-
mission.

Fig. 33.6: The sign afternoon in various states of Australia (Johnston/Schembri 2007). Copyright
© 2007 by Cambridge University Press. Reprinted with permission.

Regional lexical variation in BSL is well-known in the British Deaf community and
has been the subject of some research (Sutton-Spence/Woll/Allsop 1990). As with Aus-
lan, signs for colours and numbers vary greatly from region to region. For example,
Manchester signers traditionally appear to use a unique system of signs for numbers.
Some of this regional variation has been documented (e.g., Edinburgh & East of Scot-
land Society for the Deaf 1985; Skinner 2007; Elton/Squelch 2008), but compared to
the lexicographic projects undertaken in Australia (Johnston 1998) and New Zealand
(Kennedy et al. 1997), augmented with data from the recent sociolinguistic variation
projects, lexical variation and its relation to region in BSL remains relatively poorly
described. Current work as part of the BSL Corpus Project based at University College
London, however, has begun to document and describe this lexical variation in more
detail (see http://www.bslcorpusproject.org).
For ASL, there has also been some work on regional variation. For example,
Shroyer and Shroyer (1984) elicited data from 38 White signers in 25 states for 130 con-
cepts. This yielded a collection of 1,200 sign variants (including the signs meaning
‘birthday’ shown in Figure 33.7), although the authors did not carefully distinguish
between related phonological variants and distinct lexical variants (Lucas et al. 2001).
Their data did, however, seem to suggest that, like BANZSL varieties, ASL regional
variation was concentrated in certain semantic categories, particularly signs for food
and animals.

Fig. 33.7: Regional variation in the ASL signs for birthday: standard variant (left), Pennsylvania
variant (center), and Indiana variant (right) (Valli/Lucas/Mulrooney 2005). Copyright
© 2005 by Gallaudet University Press. Reprinted with permission.

As part of the larger sociolinguistic variation study into ASL, Ceil Lucas and her
colleagues collected lexical data for 34 stimulus items from 207 signers in their study
(Lucas/Bayley/Valli 2001). They carefully distinguished between distinct lexical vari-
ants with identical meanings and phonological variants of the same lexical item. Thus,
in ASL, there are different lexical variants for pizza, none of which share handshape,
movement, or location features. With the sign banana, however, one lexical variant
has a number of phonological variants which vary in the handshape on the dominant
hand. The researchers found that there was an average of seven lexical variants for
each sign, and that the signs early, arrest, faint, cereal, cheat, and soon showed the
most variation, with cake, microwave-oven, relay, and faint having the largest number
of phonological variants of the same lexical item. Signers from Massachusetts and
Kansas/Missouri had the largest number of unique variants.
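
Counts of this kind fall out straightforwardly from coded elicitation data. The sketch
below shows how the number of distinct lexical variants per stimulus concept, and the
variants unique to a single region, can be computed; the records, concepts, and variant
labels are all invented and do not correspond to the actual study.

import pandas as pd

# Invented elicitation records: one row per signer response.
records = pd.DataFrame([
    ("pizza", "Massachusetts", "pizza-1"),
    ("pizza", "Kansas/Missouri", "pizza-2"),
    ("pizza", "Massachusetts", "pizza-3"),
    ("banana", "Kansas/Missouri", "banana-1"),
    ("banana", "Massachusetts", "banana-1"),
    ("banana", "Kansas/Missouri", "banana-2"),
], columns=["concept", "region", "variant"])

# Distinct lexical variants elicited per stimulus concept.
print(records.groupby("concept")["variant"].nunique())

# Variants attested in one region only (candidate regionalisms).
regions_per_variant = records.groupby("variant")["region"].nunique()
print(regions_per_variant[regions_per_variant == 1].index.tolist())
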
These examples of lexical variation in Western sign languages are likely to be due
to the fact that residential deaf schools were set up independently from each other in
different parts of such countries during the 19th and 20th centuries. When many of
these schools were established in the UK, for example, there was no single, centralised
training programme for educators of deaf children who wished to use sign language in
the classroom; thus the signs used within each school (by the teachers and by the
students) must have varied from one institution to the next. Furthermore, in some
schools, signed communication was forbidden during the latter part of the 19th and for
much of the 20th century, leading deaf children (who had few adult language models
available) to create new signs for use in signed communication outside the
classroom. Because sign languages must be used face to face, and because opportunities
for travel were few, each variant tended to be passed down from one generation to the
next without spreading to other areas. In a 1980 survey (Kyle/Allsop 1982), for exam-
ple, 40 % of people surveyed in the Bristol deaf community claimed that they had
never met a deaf person from farther than 125 miles away. As a result, around half of
the individuals said they could not understand the varieties of BSL used in distant
parts of the UK.
Compared to these reports about considerable traditional lexical variation in BSL,
however, it has been claimed that ASL may have a relatively more standardised lexicon
(Valli/Lucas/Mulrooney 2005). In their lexical variation study, Ceil Lucas and her col-
leagues found that of the 34 target items they studied, 27 included a variant that ap-
peared in the data from all seven sites across the USA. Lucas et al. (2001) suggested
that shared lexical forms exist alongside regional variants due to historical patterns of
transmission of ASL across the country. The residential schools in each of the seven
sites studied in the project all had direct or indirect links with the first school, the
American School for the Deaf in Hartford, Connecticut. In the USA, the Hartford
school trained its deaf graduates as teachers who then were sent out across the USA
to establish new schools, leading to the spreading of a standardised variety of ASL
across the continent.
Travel within the UK and regular signing on broadcast television, how-
ever, mean that British deaf people are now exposed to many more lexical variants of
BSL than they once were. It appears that this is the reason why deaf people increas-
ingly report much less trouble communicating with those from distant regions of the
UK (Woll 1994). Indeed, it is possible that this greater mixing of the variants may lead
to dialect levelling (Woll 1987). There is in fact much controversy amongst sign lan-
guage teachers surrounding the issue of dialect levelling and standardisation, with con-
flict arising between preserving traditional diversity within BSL and the notion of
standardising signs for teaching purposes (e.g., Elton/Squelch 2008).

4.2. Age

As mentioned earlier, the vast majority of deaf people have hearing families and the
age at which they acquire sign languages may be very late. Thus the intergenerational
transmission of sign languages is often problematic. This can result in considerable
differences across generations, such that younger BSL and NZSL signers sometimes
report difficulty in understanding older signers. A study reported in Woll (1994), for
example, indicated that younger signers (i.e., those under 45 years of age) recognised
significantly fewer lexical variants in BSL than older signers. An earlier study of the
Bristol and Cardiff communities suggested that the BSL colour signs brown, green,
purple, and yellow and numbers hundred and thousand used by older deaf people
were not used by younger deaf people from hearing families in Bristol (Woll 1983).
New signs had replaced these older forms, which, for the colour signs, had an identical
manual form that was differentiated solely by mouthing the equivalent English words
for ‘brown’, ‘green’, etc.
Sutton-Spence, Woll, and Allsop (1990) conducted a major investigation of sociolin-
guistic variation in fingerspelling in BSL, using a corpus of 19,450 fingerspelled items
collected from 485 interviews with BSL signers on the deaf television programme See
Hear. They analysed the use of the British manual alphabet in relation to four social
factors: sex, region, age, and communication mode used. There were no effects due to
gender on the use of fingerspelling, but age was a significant factor. Sutton-Spence and
her colleagues found that over 80 % of all clauses included a fingerspelled element in
the data from those aged 45 years or older. In comparison, fingerspelling was used in
fewer than 40 % of clauses in the data from participants aged under 45. Region was
also an important variable: the most use of fingerspelling was found in the signing of
individuals from Scotland, Northern Ireland, Wales, and central England, with the least
used by signers from the south-western region of England. Data from signers in north-
ern England and in the southeast included moderate amounts of fingerspelling. Deaf
individuals who used simultaneous communication (i.e., speaking and signing at the
same time) also used significantly more fingerspelling than those who used signed
communication alone.
A much smaller study of fingerspelling use in Auslan by Schembri and Johnston
(2007) found that deaf signers aged 51 years or over made more frequent use of
the manual alphabet than those aged 50 or younger. This was particularly true of those
aged 71 years or older.
In a short paper on the use of fingerspelling by deaf senior citizens in Baltimore,
Kelly (1991) suggested that older ASL signers appeared to make greater use of the
manual alphabet than younger signers. She also noted the use of mixed representations
in which older signers first used a sign, then a fingerspelled equivalent, and then re-
peated the sign (e.g., insult i-n-s-u-l-t insult).
Padden and Gunsauls (2003) reported that a number of sociolinguistic factors ap-
pear to be important in their data on ASL fingerspelling, although they did not provide
quantitative analyses that indicate whether such patterns were statistically significant.
They found that age and social class appeared to affect the use of fingerspelled proper
versus common nouns, with older and working-class signers much more likely to finger-
spell common nouns. They also stated that native signers fingerspelled more frequently,
with university-educated deaf native signers using the most fingerspelling. This finding
has been supported by the ASL sociolinguistic variation work by Ceil Lucas and col-
leagues, who also report that more fingerspelling is used by middle-class signers than
working-class signers (Lucas/Bayley/Valli 2001).
In ASL, Auslan, and BSL, these age-related differences in fingerspelling usage un-
doubtedly reflect the educational experiences of older deaf people, many of whom
were instructed using approaches that emphasised the use of fingerspelling. Language
attitudes may also play a role here, with older people possibly also retaining relatively
stronger negative attitudes towards sign language use, although this has not yet been
the focus of any specific empirical study. Language change is important here, too, as
many older signers appear to prefer the use of traditionally fingerspelled items rather
than the ‘new signs’ used by younger people. For example, signs such as truck, soccer,
and coffee were used by younger signers in the Schembri and Johnston (2007) dataset,
whereas only older individuals fingerspelled t-r-u-c-k, s-o-c-c-e-r, and c-o-f-f-e-e. In
NZSL, the changing status of sign language manifests itself in generational differences
in the extent of English mouthing, rather than fingerspelling, as a contact language
feature. A preliminary analysis of variation in mouthing in NZSL shows that signers
over the age of 65 years accompany an average of 84 % of manual signs with mouthing
components, compared to 66 % for signers under 40 years (McKee 2007).
The lexical variation study in ASL conducted by Lucas and her colleagues showed
that, for 24 of the 34 stimulus items, there were lexical variants unique to particular
age groups in their dataset. Older signers produced unique forms for perfume,
snow, and soon, for example, and did not use the same signs as younger signers for
dog and pizza. They specifically investigated evidence of language change in two sets
of signs. First, they looked in detail at deer, rabbit, snow, and tomato because claims
had been made in earlier work that phonological change was underway in these signs
with deer changing from two-handed to one-handed, rabbit moving down from a head
to hands location, and snow and tomato undergoing reduction and deletion of seg-
ments. Second, they were interested in the signs africa and japan because new, more
politically correct variants of these signs had recently emerged as a result of the percep-
tion that the older variants reflected stereotypes about the physical appearance of
people from these parts of the world. The picture that emerged from their analysis was
complex, however, with some evidence that language change was taking place for rab-
bit, snow, tomato, japan, and africa in some regions and in some social groups. For
instance, no signers in Maryland used the head variant of rabbit any longer, and no
younger signers from California, Maryland, and Virginia used the old form of africa
(see Figure 33.8). Contrary to what has previously been claimed, however, deer was
produced in all regions by all age groups in both one- and two-handed forms, providing
little evidence of a change in progress.

Fig. 33.8: Lexical variation due to age in ASL (Lucas et al. 2001; figures 1 and 7; illustrated by
Robert Walker). Copyright © 2001, the American Dialect Society. Reprinted by permis-
sion of the publisher, Duke University Press (www.dukeupress.edu).

Variation in the NZSL numeral signs one to twenty is also systematically condi-
tioned by social characteristics, especially age (McKee/McKee/Major 2011). Like the
sociolinguistic variation in Auslan project mentioned above, the NZSL sociolinguistic
variation project drew on a corpus of NZSL produced by filming 138 deaf people in
conversations and interviews; the sample is balanced for region (Auckland, Palmerston
North/Wellington, and Christchurch), gender, and age group. All participants acquired
NZSL before the age of 12 years, and the majority of these before the age of seven.
Multivariate analysis of this data revealed that age has the strongest effect on variation
in the number system, followed by region and gender. With respect to region, signers
from Auckland (the largest urban centre) are slightly more likely to favour less com-
mon variants than those from Wellington and Christchurch, who are more likely to
favour the more standard signs that are used in Australasian Signed English. Overall,
men are slightly more likely than women to favour less common forms, although gen-
der has the weakest effect of the three social factors.

Variation in numeral usage reveals diachronic change in NZSL, and increasing
standardisation in this subset of the lexicon: all 15⫺29 year olds produced the same
forms for numerals one to twenty, except for numbers nine, eleven, twelve, and
nineteen, which exhibited minor variation. Apart from these exceptions, they uni-
formly favoured signs introduced from Australasian Signed English. Signers over
30 years of age, and especially above 45 years, exhibited more in-group variation (using
a greater range of lexical variants), reflecting the fact that they were not exposed to a
conventional signed lexicon at school. These results confirm the powerful standardising
impact of introducing total communication approaches into deaf education in 1979.
Distinctive forms produced by the youngest and oldest age groups show that numer-
als in NZSL (particularly numbers above five) have been partly re-lexified, mostly
because Australasian Signed English forms (themselves based on Auslan signs) re-
placed older variants. For certain numbers, such as eight, the change is complete, in
that none of the youngest age group use older forms of this numeral, shown in Figure
33.9 as A, C, and D. In other cases, alternate variants still co-exist, or in some cases, a
change is apparently in progress towards a standard form.

Fig. 33.9: Variation due to age in NZSL eight (McKee/McKee/Major 2011). Copyright © 2011 by
Gallaudet University Press. Reprinted with permission.

4.3. Gender and sexuality

Although anecdotal reports suggest that a small number of Auslan lexical variants may
be used differently by women and men (e.g., the different signs hello or hi described
in Johnston/Schembri 2007), there have not yet been any empirical studies demon-
strating systematic lexical variation in any BANZSL variety due to gender. A number
of studies have suggested that gender may influence lexical variation in ASL, however.
Lucas, Bayley, and Valli (2001) report that only 8 of the 34 stimulus items they studied
did not show variants unique to either men or women, building on earlier findings by
Mansfield (1993).
Quite significant lexical variation based on gender has been the focus of research
into Irish Sign Language (Irish SL; Le Master/Dwyer 1991; Leeson/Grehan 2004). For
over a century, the Irish deaf community maintained distinct vocabularies associated
with the different traditions of sign language use in the single-sex residential deaf
schools in Dublin: St Mary’s School for Deaf Girls and St Joseph’s School for Deaf
Boys. Using a set of 153 stimuli, Le Master and Dwyer (1991) reported that 106 of the
items were distinct, although 63 % of these were related in some way.

Fig. 33.10: Examples of lexical variation in Irish SL due to gender. Copyright © Barbara
LeMaster. Reprinted with permission.

The male and
female signs for green, for example, differ in handshape, location, and movement,
whereas the men’s and women’s signs for apple and daughter share hand configura-
tion (see Figure 33.10). Although these lexical differences have lessened in contempo-
rary Irish SL, Leeson and Grehan (2004) suggest that such gender differences continue
to exist in the language.
Gender differences in the use of fingerspelling in ASL have been reported by Mul-
rooney (2002), drawing on a dataset of 1,327 fingerspelled tokens collected from inter-
views with 8 signers. She found evidence in her dataset that men were more likely to
produce non-citation forms than women (e.g., fingerspelling produced outside the usual
ipsilateral area near the shoulder and/or with some of the manual letters deleted).
A number of other aspects of language use have been reported to vary according
to gender. Wulf (1998) claimed, based on a sample of 10 native and near-native signers,
that the men in her dataset consistently demonstrated a difference in the lower bound-
ary of signing space, with the males ending their production of signs at a lower location
than the women. Coates and Sutton-Spence (2001) proposed that female BSL signers
used different styles of conversational interaction than males. In their dataset, deaf
women tended to set up a collaborative conversational floor in which multiple conver-
sations could take place simultaneously, while male signers generally took control of
the floor one at a time and used fewer supportive back-channeling strategies (see
chapter 22, Communicative Interaction, for details).

Studies conducted by Rudner and Butowsky (1981) and by Kleinfeld and Warner
(1997) compared American gay and heterosexual signers’ knowledge of ASL signs
related to gay identity. Both studies reported varied perceptions of different variants
of the signs lesbian and gay, with straight and gay individuals differing in their sign
usage and in their judgements of commonly-used signs related to sexuality. Kleinfeld
and Warner found, for example, that the fingerspelled loan sign #gay appeared to
be most acceptable to gay and lesbian signers, and that its use was spreading across
the USA.

4.4. Ethnicity and religion

Research has established the existence of a distinct African-American variety of ASL
(e.g., Aramburo 1989; Lucas et al. 2001). Like the gender differences in Irish SL, the
emergence of this lexical variation reflects the historical context of American deaf
education, with specific schools having been established for African-American deaf
children in some southern states during the period of segregation in the 19th and 20th
century. Croneberg’s lexical variation study mentioned above, for example, identified
considerable lexical differences between the signs of Black and White signers living in
the same city in North Carolina. Work by Aramburo (1989) showed that African-
American ASL signers had unique lexical variants, such as flirt, school (shown in
Figure 33.11), and boss. Further supporting evidence for lexical variation was found in
the sociolinguistic variation study conducted by Ceil Lucas and her colleagues: of the
34 stimuli, only 6 did not have uniquely African-American variants.

Fig. 33.11: Example of lexical variation due to ethnicity in ASL; the sign school: White variant
(left), African-American variant (right) (Valli/Lucas/Mulrooney 2005). Copyright ©
2005 by Gallaudet University Press. Reprinted with permission.

More recent work investigating the Black variety of ASL (McCaskill et al. 2011)
indicates that a number of other differences can be identified, in addition to use of
specific lexical variants. Findings suggest that, compared to White signers, Black signers
make greater use of two-handed variants, produce fewer lowered variants of signs in
the class of signs including know (see section 3.4), and use significantly more repetition.

Fig. 33.12: Examples of Māori signs in NZSL (Kennedy et al. 1997). Copyright © 1997 by Univer-
sity of Auckland Press/Bridget Williams Books. Reprinted with permission.

A study drawing on narratives elicited from 24 signers (12 Black, 12 White) tested the
claim that Black signers use a larger signing space than White signers, and found that
this did appear to be the case.
More work on variation due to ethnicity has been undertaken for NZSL. NZSL
exists in contact with both the dominant host language of English and Māori as the
spoken language of the indigenous people of New Zealand. There is no empirical
evidence that Māori signers’ use of NZSL varies systematically from that of non-Māori
deaf people, whose social networks and domains of NZSL use substantially overlap. It
could be expected, however, that the NZSL lexicon would reflect some degree of
contact with spoken Māori, albeit constrained by modality difference and by the minor-
ity status of both languages in society. Contact between hearing speakers of Māori and
the Māori deaf community over the last decade has led to the coinage of signs and
translations of Māori concepts that are in the process of becoming established ‘borrow-
ings’ into NZSL ⫺ used for both referential purposes and to construct Māori deaf
ethnic identity. These borrowings (locally referred to as ‘Māori signs’, see Figure 33.12),
such as whanau (extended family), marae (meeting place), and haka (a Māori dance
ritual), are constructed by several processes: semantic extension of existing NZSL signs
by mouthing Māori equivalents (e.g., whanau, which is also a widely used BANZSL
sign meaning ‘family’), loan translations of Māori word forms, and coining of neolo-
gisms (e.g., marae and haka) (McKee et al. 2007).
As is also true of New Zealand, separate schools for Catholic deaf children were
established in Britain and Australia. All of these institutions employed Irish SL as the
language of instruction until the 1950s. As a result, an older generation of signers in
some regions of the UK and Australia make some use of Irish SL signs and the Irish
manual alphabet, particularly when in the company of those who share their educa-
tional background. Some Irish SL signs have been borrowed into regional varieties of
BSL (e.g., ready, green) and Auslan (e.g., home, cousin) (Brennan 1992; Johnston/
Schembri 2007).
Generally, there are no documented distinctions in the sign language used by vari-
ous ethnic groups in the UK and Australia, partly because the education of deaf chil-
dren in these countries has, for the most part, never been segregated by ethnicity. Many
deaf people in the UK from minority ethnic backgrounds are, however, increasingly
forming social groupings which combine their deaf and ethnic identity (for example,
social groups formed by deaf people with south Asian backgrounds in London), and
thus we might expect some sociolinguistic variation reflecting these identities to be
developing, but this has not yet been the focus of any research. One exception to this
generalisation, however, is the British Jewish deaf community, many of whom were
educated in a separate Jewish deaf school that existed in London from 1866 to 1965
(Jackson 1990; Weinberg 1992). A book of BSL signs used to represent key elements
of Judaism was published in 2003 (Jewish Deaf Association 2003).

5. Grammatical variation

There has been little research into morphosyntactic variation in ASL and BANZSL
varieties, and there have not yet been empirical studies demonstrating whether there
are consistent differences between signers due to gender, age, social class, or region.
Some gender differences in the use of simultaneous constructions and topic-marking
have been reported for Irish SL, but the dataset was small and thus far from conclusive
(Leeson/Grehan 2004). In contrast, differences in grammatical skills in native and non-
native signers have been reported several times in the literature (e.g., Boudreault/
Mayberry 2006).
Observation suggests that in many contexts signers will vary in their
choice and combination of the morphological, syntactic, and discourse structures that
are described elsewhere in this volume. Schembri (2001) showed, for example, that
native signers of Auslan varied in their use of ‘classifier’ handshapes to represent the
motion of humans and vehicles (see chapter 8, Classifiers, for details). In his dataset,
both the upturned V-handshape and the upright 1-handshape may be used to represent
a person moving, and a flat handshape with the palm oriented sideways or downwards may repre-
sent vehicles. Schembri et al. (2002) and Johnston (2001) examined noun-verb pairs in
Auslan, finding that not all signers made use of the same set of subtle differences in
movement and other features sometimes used to distinguish signs referring to concrete
objects from those used to indicate actions (see chapter 5 for discussion of word
classes).
Similarly, Johnston and Schembri (2007) describe how Auslan signers have two ma-
jor strategies available to them when producing sentences with agreement/indicating
verbs. First, they may use an SVO constituent order of signs to represent actor versus
patient roles (e.g., mother ask father ‘Mother asks father’). Alternatively, they may
convey this information by spatial modifications to the verb sign, using orders other
than SVO (e.g., mother+lf father+rt lf+ask+rt ‘Mother asks father’). The linguistic,
stylistic, and social factors that influence these types of choices have not yet been the
focus of any research.
As part of the sociolinguistic variation in ASL, NZSL, and Auslan projects de-
scribed above, variation in the presence of subject noun phrases was investigated (Lu-
cas/Bayley/Valli 2001; McKee et al. 2011). Like other sign languages, ASL, NZSL, and
Auslan all exhibit significant variation in the expression of subject arguments. The ASL
study drew on a dataset of 429 clauses containing only plain verbs. The Auslan and
NZSL studies used larger datasets of 976 and 2,145 clauses, respectively. The ASL and
Auslan datasets were collected from spontaneous narratives produced by 19 deaf ASL
signers and 20 deaf Auslan signers, while the NZSL dataset included 33 deaf partici-
pants. The overall results were remarkably similar. McKee and colleagues found that
half (NZSL) to two-thirds (Auslan) of the clauses had no overt subject noun phrase,
not unlike the figure in ASL (65 %). Factors that conditioned an increased tendency
to omit subject arguments in NZSL, Auslan, and ASL included the following: use of a
subject that identified a referent that was the same as the one in the immediately
preceding clause; the subject having a non-first person referent (first person arguments
strongly favoured the retention of overt subjects in Auslan and ASL); the use of role
shift; and (for Auslan and ASL) the presence of some degree of English influence in
the clause (English not being a pro-drop language). These linguistic factors are similar
to those reported to be at work in other pro-drop languages such as Spanish (e.g.,
Bayley/Pease-Alvarez 1997) or Bislama (Meyerhoff 2000). In addition, the NZSL and
ASL studies found evidence for social factors playing a role in variable subject expres-
sion. For ASL, it was found that women and older signers (i.e., over 55 years of age)
favoured overt subjects, whereas men and younger signers (i.e., aged 15⫺54) did not.
It may be that women and older signers produce more pronouns than men and younger
signers because of a perception that the use of more English-like structures represents
a prestige variety of signed communication (certainly, this pattern with gender varia-
tion is characteristic of many spoken languages; see Labov 1990). In NZSL, age and
ethnicity were important, with middle-aged (i.e., 40⫺64 years old) and non-Māori New
Zealanders more likely to drop subjects. Unlike ASL and NZSL, however, multivariate
statistical analysis of the Auslan data suggested that social factors such as the signer’s
age and gender were not significant.
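
Before fitting a multivariate model of the kind used in these studies, candidate
conditioning factors are typically screened by cross-tabulating each factor with the
variable. The sketch below does this for two of the factors mentioned above, using
invented clause records; the coding scheme and values are assumptions for illustration.

import pandas as pd

# Invented clause records: 1 = overt subject, 0 = null subject.
clauses = pd.DataFrame([
    (0, True, "non-first"), (0, True, "non-first"), (1, False, "first"),
    (1, False, "first"), (0, True, "first"), (1, False, "non-first"),
    (0, False, "non-first"), (1, True, "first"), (0, True, "non-first"),
    (1, False, "first"),
], columns=["overt_subject", "same_referent", "person"])

# Rate of overt subjects within each level of each candidate factor;
# large differences between levels flag factors worth entering into
# the multivariate analysis.
for factor in ["same_referent", "person"]:
    print(clauses.groupby(factor)["overt_subject"].mean(), "\n")
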

6. Register and stylistic variation

A number of studies have investigated variation in discourse. Collins and Petronio
(1998), for example, investigated the ASL used by deaf-blind individuals, while Mesch
(2000) undertook similar studies for Swedish Sign Language. Amongst other things,
the ASL study found that yes/no-questions were not conveyed by the non-manual
signals (e.g., brow raise) used by sighted signers, but instead involved use of the manual
channel, either by means of an additional outward movement of the signs in an inter-
rogative clause or by the addition of a question mark sign (see chapter 23 for further
details).
The work of June Zimmer (1989) on ASL represents one of the few studies on
stylistic variation in a sign language. Until further studies are carried out, no firm
conclusions can be drawn from this study alone about what register variation looks
like in ASL, or indeed in any sign language, but we will outline Zimmer’s findings
here because they suggest issues for future research into this area.
In Zimmer’s (1989) study, data for her analysis came from three videotapes of one
deaf native ASL signer (i) giving a lecture on linguistics, (ii) presenting a talk to a
small audience, and (iii) conducting a television interview. All three texts are somewhat
formal in style because each is planned and not spontaneous, and all are performed
for an audience. However, the degree of formality in each text appears to be different.
Zimmer found that the lecture was particularly different from the talk and interview,
at all levels of structural organisation. A closer inspection also revealed that parts
of each text were different from other parts, which she referred to as intra-textual
register variation.
A number of phonological differences between the texts were noted. For example,
the signing space appeared to be much larger in the lecture, with signs being made
beyond the top of the head, centre of the chest, and shoulder width (this may simply
reflect the signer’s use of ‘loud’ signing rather than any true phonological difference;
see Crasborn 2001). Signs in the lecture also appeared to be longer in duration. Role
shifts (see chapter 17) involved shifting of the entire torso or sideways movement by
a step or two in the lecture, whereas only head movements were used in the talk and
interview. Hand-switching (in which the non-dominant hand is used as the dominant
hand) was used in all three texts, often with pronouns, but was used most frequently
in the lecture. There was less assimilation of the handshape in pronoun signs in the
lecture (e.g., fewer handshape changes in index1 from 1 to 5). Lastly, there was less
perseveration and anticipation in the lecture, that is, there were fewer instances in
which the non-dominant hand in a two-handed sign appeared before the sign started
or remained held in space after the sign had finished.
In terms of lexical and morphological differences in the three situations, Zimmer
(1989) reported that certain colloquial ASL signs, such as what-for and pea-brain,
appeared in the talk and in portions of direct speech in the lecture but did not occur
elsewhere. She also noted that conjunctions such as and and then were used more
frequently in the lecture. Exaggerated reduplication of signs to indicate that some
action was difficult and of long duration occurred more in the lecture, but similar
meanings were realised through non-manual features, such as squinting eyes and the
‘ee’ intensifier, in the informal talk.
Several differences in syntactic and discourse features were found. For example,
pseudo-cleft structures were used extensively in the lecture, but less so in the other
two texts. Topicalisation was used more in the informal talk than in the lecture. Discourse
markers appeared more often in the lecture, such as the sign now when used not to
talk about time, but to segment the lecture into smaller parts. Lastly, pointing with the
non-dominant hand at a word fingerspelled on the dominant hand (e.g., d-e-a-f, a-t-
t-i-t-u-d-e) only occurred in the lecture (see Figure 33.13).
Most intra-textual variation occurred in the lecture, where there were three types
of register variation. The body of the lecture was formal in style, but reported speech
interspersed through the lecture had features of a more casual style.

Fig. 33.13: Use of a pointing sign following fingerspelling (Zimmer 1989). Copyright © 1989 by
Academic Press. Reprinted with permission.

Some specific
examples had a metaphorical and poetic style usually associated with signed theatre
and poetry. The signer represented hearing researchers as a vehicle, for example, and
deaf researchers as a boat, and then produced a simultaneous sign construction with
two depicting verbs showing both moving along together (see chapter 22, Communica-
tive Interaction, for another type of stylistic variation, i.e. variation in the use of polite-
ness strategies depending on situational context).

7. Conclusion

In this chapter, we have explored some of the research conducted in the past few
decades on sociolinguistic variation in deaf communities, with a particular focus on
ASL, BSL, Auslan, and NZSL. We have shown how, just as in spoken language com-
munities, variation is often not random, but is conditioned by linguistic, social, and
stylistic factors. Although our understanding has grown significantly in the last decade,
Lucas and Bayley (2010) have pointed out that much work remains to be done. The
major sociolinguistic studies of ASL, Auslan, and NZSL have covered a number of
different regions in each country, but have not yet examined any particular region’s
deaf community to the same depth that is common in sociolinguistic studies of spoken
languages. In particular, the quantitative studies discussed here need to be followed up
by more detailed qualitative and ethnographic work: we need to understand how sign-
ers proactively choose from among the specific phonological, lexical, and grammatical
variants to present themselves in specific ways (i.e., as markers of specific identities).
Moreover, many regions were not included in these studies (no rural regions or small
towns were visited in the Australian study, for example). Other sociolinguistic variables
need to be investigated, and stylistic factors need to be more fully explored. The influ-
ence of immigrant communities and the impact of the many late learners and hearing
and deaf second-language users on established sign languages are also important. Pur-
suing such research questions will increase our knowledge about the sociolinguistics of
sign languages, as well as broaden our understanding of variation in language generally.

8. Literature

Apurado, Yvette/Agravante, Rommel L.
2006 The Phonology and Regional Variation of Filipino Sign Language: Considerations for
Language Policy. Paper Presented at the 9th Philippine Linguistics Congress, University
of the Philippines.
Aramburo, Anthony
1989 Sociolinguistic Aspects of the Black Deaf Community. In: Lucas, Ceil (ed.), The Socio-
linguistics of the Deaf Community. San Diego, CA: Academic Press, 103⫺121.
Bailey, Guy
2002 Real and Apparent Time. In: Chambers, Jack K./Trudgill, Peter/Schilling-Estes, Natalie
(eds.), The Handbook of Language Variation and Change. Oxford: Blackwell, 312⫺332.
Baker-Shenk, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Wash-
ington, DC: Gallaudet University Press.
Battison, Robin
1974 Phonological Deletion in American Sign Language. In: Sign Language Studies 5, 1⫺19.
Bayley, Robert/Lucas, Ceil/Rose, Mary
2000 Variation in American Sign Language: The Case of deaf. In: Journal of Sociolinguistics
4, 81⫺107.
Bayley, Robert/Pease-Alvarez, Lucinda
1997 Null Pronoun Variation in Mexican-Descent Children’s Narrative Discourse. In: Lan-
guage Variation and Change 9, 349⫺371.
Boudreault, Patrick/Mayberry, Rachel
2006 Grammatical Processing in American Sign Language: Age of First Language Acquisi-
tion Effects in Relation to Syntactic Structure. In: Language and Cognitive Processes
21, 608⫺635.
Brennan, Mary
1992 The Visual World of BSL: An Introduction. In: Brien, David (ed.), Dictionary of British
Sign Language/English. London: Faber & Faber, 1⫺133.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brien, David (ed.)
1992 Dictionary of British Sign Language/English. London: Faber & Faber.
Campos de Abreu, Antonio
1994 The Deaf Social Life in Brazil. In: Erting, Carol/Johnson, Robert C./Smith, Dorothy L./
Snider, Bruce D. (eds.), The Deaf Way: Perspectives from the International Conference
on Deaf Culture. Washington, DC: Gallaudet University Press, 114⫺116.
Chambers, Jack K.
1995 Sociolinguistic Theory: Linguistic Variation and Its Social Significance. Oxford: Black-
well.
Chambers, Jack K./Trudgill, Peter/Schilling-Estes, Natalie (eds.)
2002 The Handbook of Language Variation and Change. Oxford: Blackwell.
Coates, Jennifer/Sutton-Spence, Rachel
2001 Turn-taking Patterns in Deaf Conversation. In: Journal of Sociolinguistics 5, 507⫺529.
Crasborn, Onno
2001 Phonetic Implementation of Phonological Categories in Sign Language of the Nether-
lands. PhD Dissertation, University of Utrecht. Utrecht: LOT.
Deuchar, Margaret
1981 Variation in British Sign Language. In: Woll, Bencie/Kyle, James G./Deuchar, Margaret
(eds.), Perspectives on British Sign Language and Deafness. London: Croom Helm,
109⫺119.
Edinburgh & East of Scotland Society for the Deaf
1985 Seeing the Signs! Edinburgh: Edinburgh & East of Scotland Society for the Deaf.
Elton, Frances/Squelch, Linda
2008 British Sign Language: London and South East Regional Signs. London: LexiSigns.
Frishberg, Nancy
1975 Arbitrariness and Iconicity: Historical Change in American Sign Language. In: Lan-
guage 51, 696⫺719.
Hecke, Eline van/Weerdt, Kristof de
2004 Regional Variation in Flemish Sign Language. In: Van Herreweghe, Mieke/Vermeerber-
gen, Myriam (eds.), To the Lexicon and Beyond: Sociolinguistics in European Deaf Com-
munities. Washington, DC: Gallaudet University Press, 27⫺38.
Heine, Bernd
2002 On the Role of Context in Grammaticalization. In: Wischer, Ilse/Diewald, Gabriele
(eds.), New Reflections on Grammaticalization. Amsterdam: Benjamins, 83⫺102.
Hill, Joseph/McCaskill, Carolyn/Lucas, Ceil/Bayley, Robert
2009 Signing Outside the Box: The Size of the Signing Space in Black ASL. Paper Presented
at NWAV 38: New Ways of Analyzing Variation Conference, University of Ottawa.
Hoopes, Rob
1998 A Preliminary Examination of Pinky Extension: Suggestions Regarding Its Occurrence,
Constraints, and Function. In: Lucas, Ceil (ed.), Pinky Extension and Eye Gaze: Lan-
guage Use in Deaf Communities. Washington, DC: Gallaudet University Press, 3⫺17.
Houston, Anne
1991 A Grammatical Continuum for (ING). In: Trudgill, Peter/Chambers, Jack K. (eds.),
Dialects of English: Studies in Grammatical Variation. London: Longman, 241⫺257.
Jackson, Peter
1990 Britain’s Deaf Heritage. Haddington: Pentland Press.
Jepson, Jill
1991 Urban and Rural Sign Language in India. In: Language in Society 20, 37⫺57.
Jewish Deaf Association
2003 Sign Language in Judaism. London: Jewish Deaf Association.
Johnston, Trevor
2001 Nouns and Verbs in Auslan (Australian Sign Language): An Open or Shut Case? In:
Journal of Deaf Studies and Deaf Education 6, 235⫺257.
Johnston, Trevor
2003 BSL, Auslan and NZSL: Three Sign Languages or One? In: Baker, Anne/Bogaerde,
Beppie van den/Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign Language
Research. Selected Papers from TISLR 2000. Hamburg: Signum, 47⫺69.
Johnston, Trevor (ed.)
1998 Signs of Australia: A New Dictionary of Auslan. Sydney: North Rocks Press.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language (Auslan): An Introduction to Sign Language Linguistics.
Cambridge: Cambridge University Press.
Kelly, Arlene B.
1991 Fingerspelling Use Among the Deaf Senior Citizens of Baltimore. In: Winston, Eliza-
beth A. (ed.), Communication Forum 1991. Washington, DC: Gallaudet University
School of Communication, 90⫺98.
Kennedy, Graeme/Arnold, Richard/Dugdale, Pat/Fahey, Shaun/Moskovitz, David (eds.)
1997 A Dictionary of New Zealand Sign Language. Auckland: Auckland University Press
with Bridget Williams Books.
Kleinfeld, Mala S./Warner, Noni
1997 Lexical Variation in the Deaf Community Relating to Gay, Lesbian, and Bisexual Signs.
In: Livia, Anna/Hall, Kira (eds.), Queerly Phrased: Language, Gender, and Sexuality.
New York: Oxford University Press, 58⫺84.
Kyle, Jim/Allsop, Lorna
1982 Deaf People and the Community. Bristol: University of Bristol, Centre for Deaf Studies.
Labov, William
1972 Sociolinguistic Patterns. Oxford: Blackwell.
Labov, William
1990 The Intersection of Sex and Social Class in the Course of Language Change. In: Lan-
guage Variation and Change 2, 205⫺254.
Le Master, Barbara/Dwyer, John P.
1991 Knowing and Using Female and Male Signs in Dublin. In: Sign Language Studies 73,
361⫺369.
Leeson, Lorraine/Grehan, Carmel
2004 To the Lexicon and Beyond: The Effect of Gender on Variation in Irish Sign Language. In: Van Herreweghe, Mieke/Vermeerbergen, Myriam (eds.), To the Lexicon and Beyond: Sociolinguistics in European Deaf Communities. Washington, DC: Gallaudet University Press, 39–73.
Liddell, Scott K./Johnson, Robert E.
1989 American Sign Language: The Phonological Base. In: Sign Language Studies 64, 195–277.
Lucas, Ceil
1995 Sociolinguistic Variation in ASL: The Case of DEAF. In: Lucas, Ceil (ed.), Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University Press, 3–25.
Lucas, Ceil/Bayley, Robert
2010 Variation in American Sign Language. In: Brentari, Diane (ed.), Sign Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press, 451–476.
Lucas, Ceil/Bayley, Robert/Reed, Ruth/Wulf, Alyssa
2001 Lexical Variation in African American and White Signing. In: American Speech 76, 339–360.
Lucas, Ceil/Bayley, Robert/Rose, Mary/Wulf, Alyssa
2002 Location Variation in American Sign Language. In: Sign Language Studies 2(4), 407–440.
Lucas, Ceil/Bayley, Robert/Valli, Clayton
2001 Sociolinguistic Variation in American Sign Language. Washington, DC: Gallaudet University Press.
Lucas, Ceil/Valli, Clayton
1992 Language Contact in the American Deaf Community. San Diego, CA: Academic Press.
Mansfield, Doris
1993 Gender Differences in ASL: A Sociolinguistic Study of Sign Choices by Deaf Native Signers. In: Winston, Elizabeth (ed.), Communication Forum. Washington, DC: Gallaudet University Press, 86–98.
McCaskill, Carolyn/Lucas, Ceil/Bayley, Robert/Hill, Joseph
2011 The Hidden Treasure of Black ASL: Its History and Structure. Washington, DC: Gallaudet University Press.
McKee, David/McKee, Rachel/Major, George
2011 Numeral Variation in New Zealand Sign Language. In: Sign Language Studies 12, 72–97.
McKee, Rachel
2007 Hand to Mouth: The Role of Mouthing in New Zealand Sign Language. Paper Presented at the Australian Sign Language Interpreters Association National Conference, Macquarie University, Sydney.
McKee, Rachel/McKee, David/Smiler, Kirsten/Pointon, Karen
2007 Māori Signs: The Construction of Indigenous Deaf Identity in New Zealand Sign Language. In: Quinto-Pozos, David (ed.), Sign Languages in Contact. Washington, DC: Gallaudet University Press.
McKee, Rachel/Schembri, Adam/McKee, David/Johnston, Trevor
2011 Variable “Subject” Presence in Australian Sign Language and New Zealand Sign Language. In: Language Variation and Change 23, 1–24.
Meir, Irit/Padden, Carol/Aronoff, Mark/Sandler, Wendy
2007 Body as Subject. In: Journal of Linguistics 43, 531–563.
Meyerhoff, Miriam
2000 Constraints on Null Subjects in Bislama (Vanuatu): Social and Linguistic Factors. Canberra: Pacific Linguistics Publications.
Moody, Bill
1983 Introduction à L’Histoire et à la Grammaire de la Langue des Signes Entre les Mains des Sourds. Paris: Ellipses.
Mulrooney, Kirstin J.
2002 Variation in ASL Fingerspelling. In: Lucas, Ceil (ed.), Turn-taking, Fingerspelling, and Contact in Sign Languages. Washington, DC: Gallaudet University Press, 3–23.
Nyst, Victoria
2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Padden, Carol A./Gunsauls, Darline C.
2003 How the Alphabet Came to Be Used in a Sign Language. In: Sign Language Studies 4, 10–33.
Pagliuca, William (ed.)
1994 Perspectives on Grammaticalization. Amsterdam: Benjamins.
Patrick, Peter/Metzger, Melanie
1996 Sociolinguistic Factors in Sign Language Research. In: Arnold, Jennifer/Blake, Renee/Davidson, Brad/Mendoza-Denton, Norma/Schwenter, Scott/Solomon, Julie (eds.), Sociolinguistic Variation: Data, Theory, and Analysis. Stanford, CA: CSLI Publications, 229–242.
Penn, Claire (ed.)
1992 Dictionary of Southern African Signs for Communicating with the Deaf. Pretoria: Human Sciences Research Council.
Radutzky, Elena (ed.)
1992 Dizionario della Lingua Italiana dei Segni. Rome: Edizioni Kappa.
Romaine, Suzanne
2000 Language in Society: An Introduction to Sociolinguistics. Oxford: Oxford University Press.
Rudner, William A./Butowsky, Rochelle
1981 Signs Used in the Deaf Gay Community. In: Sign Language Studies 30, 36–38.
Schembri, Adam
2001 Issues in the Analysis of Polycomponential Verbs in Australian Sign Language (Auslan). PhD Dissertation, University of Sydney.
Schembri, Adam/Johnston, Trevor
2007 Sociolinguistic Variation in the Use of Fingerspelling in Australian Sign Language (Auslan): A Pilot Study. In: Sign Language Studies 7, 319–347.
Schembri, Adam/McKee, David/McKee, Rachel/Johnston, Trevor/Goswell, Della/Pivac, Sara
2009 Phonological Variation and Change in Australian and New Zealand Sign Languages: The Location Variable. In: Language Variation and Change 21, 193–231.
Schembri, Adam/Wigglesworth, Gillian/Johnston, Trevor/Leigh, Gregory R./Adam, Robert/Barker, Roz
2002 Issues in Development of the Test Battery for Auslan Morphology and Syntax Project. In: Journal of Deaf Studies and Deaf Education 7, 18–40.
Schermer, Trude
2004 Lexical Variation in the Netherlands. In: Van Herreweghe, Mieke/Vermeerbergen, Myriam (eds.), To the Lexicon and Beyond: Sociolinguistics in European Deaf Communities. Washington, DC: Gallaudet University Press, 91–110.
Schilling-Estes, Natalie
2002 Investigating Stylistic Variation. In: Chambers, Jack K./Trudgill, Peter/Schilling-Estes, Natalie (eds.), Handbook of Language Variation and Change. Oxford: Blackwell, 375–401.
Schneider, Edgar W.
2006 English in North America. In: Kachru, Braj B./Kachru, Yamuna/Nelson, Cecil L. (eds.), The Handbook of World Englishes. Oxford: Blackwell, 58–73.
Schroyer, Edgar H./Schroyer, Susan P.
1984 Signs Across America. Washington, DC: Gallaudet College Press.
Skinner, Robert Andrew
2007 What Counts? A Typological and Descriptive Analysis of British Sign Language Number Variations. Masters Thesis, Birkbeck College, University of London.
Stokoe, William C./Casterline, Dorothy C./Croneberg, Carl G.
1965 A Dictionary of American Sign Language on Linguistic Principles. Washington, DC: Gallaudet College Press.
Sutton-Spence, Rachel/Woll, Bencie/Allsop, Lorna
1990 Variation and Recent Change in Fingerspelling in British Sign Language. In: Language Variation and Change 2, 313–330.
Trudgill, Peter
1974 The Social Differentiation of English in Norwich. Cambridge: Cambridge University Press.
Trudgill, Peter
2000 Sociolinguistics: An Introduction to Language and Society. London: Penguin Books.
Valli, Clayton/Lucas, Ceil/Mulrooney, Kirstin J.
2005 Linguistics of American Sign Language: A Resource Text for ASL Users. Washington, DC: Gallaudet University Press.
Wardhaugh, Ronald
1992 An Introduction to Sociolinguistics. Oxford: Blackwell.
Weinberg, Joan
1992 The History of the Residential School for Jewish Deaf Children. London: Reunion of the Jewish Deaf School Committee.
Woll, Bencie
1981 Borrowing and Change in BSL. Paper Presented at the Linguistics Association of Great Britain Autumn Meeting, University of York.
Woll, Bencie
1983 Historical Change in British Sign Language. Unpublished Report, Deaf Studies Unit, University of Bristol.
Woll, Bencie
1987 Historical and Comparative Aspects of BSL. In: Kyle, James G. (ed.), Sign and School. Clevedon, UK: Multilingual Matters, 12–34.
Woll, Bencie
1994 The Influence of Television on the Deaf Community in Britain. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Usage: Papers from the Fifth International Symposium on Sign Language Research. Durham, UK: ISLA, 293–301.
Woodward, James C.
1993 The Relationship of Sign Language Varieties in India, Pakistan, and Nepal. In: Sign Language Studies 78, 15–22.
Woodward, James C./DeSantis, Susan
1977 Two to One It Happens: Dynamic Phonology in Two Sign Languages. In: Sign Language Studies 17, 329–346.
Woodward, James C./Erting, Carol J./Oliver, Susanna
1976 Facing and Hand(l)ing Variation in American Sign Language. In: Sign Language Studies 10, 43–52.
Wulf, Alyssa
1998 Gender Related Variation in ASL Signing Space. Manuscript, Gallaudet University, Washington, DC.
Zeshan, Ulrike
2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Benjamins.
Zimmer, June
1989 Toward a Description of Register Variation in American Sign Language. In: Lucas, Ceil (ed.), The Sociolinguistics of the Deaf Community. San Diego, CA: Academic Press, 253–272.

Adam Schembri, Melbourne (Australia)


Trevor Johnston, Sydney (Australia)

34. Lexicalization and grammaticalization


1. Introduction
2. Lexicalization and grammaticalization as universal processes
3. Lexicalization in sign languages
4. Grammaticalization in sign languages
5. The relationship between lexicalization and grammaticalization:
Some issues for sign languages
6. Conclusions
7. Literature

Abstract
Lexicalization refers broadly to the process of word formation in language, while gram-
maticalization is the process wherein items that are either lexical or somewhat grammati-
cal in nature take on increased grammatical function. In both lexicalization and gram-
maticalization, the process takes place over time, there may be significant variation in
usage before the process is complete, and the result is that new lexical words and new
grammatical morphemes both become entrenched in the language across the community
of language users. In this chapter, we examine how lexicalization and grammaticalization
occur in sign languages as recent research has shown. While the principles of both proc-
esses apply equally across spoken and sign languages, there are some challenges for
investigating lexicalization and, especially, grammaticalization in sign languages, par-
tially because historical records are scarce, but also because in many cases, sign languages
are rather young. Nonetheless, there are many instances where these processes are appar-
ent. One interesting aspect of this examination is that gestural sources for both lexicalized
words and grammatical items are often observable, which distinguishes the investigation
from that for spoken languages.

1. Introduction
Without exception, languages change in numerous ways over time. By now this is indisputable for sign languages as for spoken languages, as evidence from newly forming
languages such as Nicaraguan Sign Language (Senghas/Kita/Özyürek 2004) has shown
along with continuing changes in better established languages (e.g., the emergence of
a case marker in Israeli Sign Language (Israeli SL) (Meir 2003)) and early and continu-
ing change in languages that are well-established such as American Sign Language
(ASL) (Frishberg 1975). In some views, certain mechanisms of language change are
considered as language universals (Bybee 2001; Haspelmath 2004). Two such processes
are referred to as lexicalization and grammaticalization (alternatively known as ‘grammaticization’).
In this chapter, we explore how both lexicalization and grammaticalization have
taken place in sign language, looking at general principles of these processes along
with specific instances of change as some items enter the lexicon of the language and
as others emerge as grammatical constructions. However, not every instance of either
lexicalization or grammaticalization in a sign language is clearly observable, which may
be because of a number of factors. First, numerous lexical and grammatical structures
appear to stem from gestural roots. Second, our understanding of morphologically
complex constructions in a visual modality is developing, but is still incomplete. Third,
structures undergoing lexicalization or grammaticalization are frequently still undergo-
ing this development. And fourth, we are simply still uncovering what the processes
of lexicalization and grammaticalization are all about generally.
In section 2, we look at lexicalization and grammaticalization processes from a theo-
retical perspective, for which the majority of work has been focused on spoken lan-
guages. The examination of these processes in sign languages is relatively recent, so
that the general literature on these topics is not typically informed by accounts of sign
language phenomena, although Heine and Kuteva (2007) acknowledge this fact as well
as the potential for new research on grammaticalization in sign language to impact our
overall knowledge of the area. In sections 3 and 4, we examine the research on lexicali-
zation and grammaticalization in sign languages, along with some detailed examples of
each. This discussion is not intended to be exhaustive, given the space available, but
includes some key examples that illustrate these principles of language evolution in
sign languages. In section 5, we first discuss the relationship between lexicalization and
grammaticalization. These processes are not, as is sometimes thought, opposite in na-
ture. That is, lexicalization is not the reverse of grammaticalization, and vice versa. Yet,
they are both mechanisms of language change, and do share some common features. As
well, they may in a sense be complementary, both contributing to the evolution of
some linguistic item, such as the rise of a lexical word that subsequently participates in
a grammaticalizing construction. We then look briefly at instances that are particularly
illustrative of the type of questions that remain for how these processes unfold in sign
language. In particular, we consider lexicalized forms that appear to emerge from so-
called classifier constructions, especially in terms of whether or not this could be con-
sidered as an example of “degrammaticalization”, a controversial notion in the spoken
language literature on grammaticalization. Finally, some conclusions are drawn.

2. Lexicalization and grammaticalization as universal processes


Lexicalization and grammaticalization have come to be understood as large-scale cross-
linguistic processes that build lexicon on the one hand, and grammar on the other. As
we have learned about these principles of change, we can better approach the analysis
of both lexical and grammatical structure synchronically in individual languages, apply-
ing general principles of change, as must often be the case when historical records are
limited. But as Haspelmath (2004) points out, it is a mistake to refer to ‘grammaticiza-
tion theory’ as a single definable theory because grammaticization encompasses a
collection of theoretical premises that together help us understand what a grammatical-
ized element has undergone. Certain points concerning this process are (not surpris-
ingly) presently under debate, such as whether or not unidirectionality – the notion
that grammatical development is always in the direction of lexical > grammatical or
from less grammatical (e.g., a modal verb) > more grammatical (e.g., an auxiliary) –
can be considered universal, such that grammatical items do not reverse their pathways
of change to ultimately emerge as lexemes. This topic will be taken up once again in
section 5 below, but for the present, we will begin by stating that grammaticalization
refers to the process of the emergence of grammatical items (that then participate in
grammatical categories such as tense or aspect marking, case marking, and the like)
and not simply to the fact that something exists in the grammar of a language. Enough
is understood from diachronic studies of grammaticalization for us to conclude that if
something exists in the grammar of a language, even without clear diachronic evidence,
we can presume that it got there somehow through a diachronic process of change,
and has not appeared suddenly as a fully functional grammatical item (see Wilcox
2007). Lexicalization, on the other hand, refers generally to the process of the emer-
gence of lexemes, or items listed in the lexicon of a language. Lexicalized items are
regularized as institutionalized (community-wide) usages with particular lexical class
features and constraints (Brinton and Traugott 2005; see also Haiman 1994). Word
formation processes such as compounding and conversion are seen as inputs to lexicali-
zation. Thus, lexicalization as a process of change equally does not mean simply that
a word is lexical but rather that it is undergoing, or has undergone, such change in a
principled way.
For the purpose of the present discussion, definitions of both lexicalization and
grammaticalization are taken from Brinton and Traugott (2005). Although there are
differences in opinion on definitional specifics, these theoretical debates will not be
undertaken here in the interest of space and of pointing the discussion in the direction
of sign language evidence, but the reader is referred to such seminal work as Brinton
and Traugott (2005), Bybee (2003), Bybee et al. (1994), Heine et al. (1991), Heine and
Kuteva (2007), Heine and Reh (1984), Hopper (1991), Hopper and Traugott (2003),
and others for detailed accounts of theoretical principles and language examples.

2.1. Lexicalization

Lexicalization is defined as follows (Brinton/Traugott 2005, 96):

Lexicalization is the change whereby in certain linguistic contexts speakers use a syntactic
construction or word formation as a new contentful form with formal and semantic proper-
ties that are not completely derivable or predictable from the constituents of the construc-
tion or the word formation pattern. Over time there may be further loss of internal constit-
uency and the item may become more lexical.

In a synchronic sense, Brinton and Traugott note, lexicalization has been taken to mean
the coding of conceptual categories, but in a diachronic sense, lexicalization is the
adoption of an item into the lexicon following a progression of change. Further, we
may consider lexicalizations that are in fact innovations created for a particular, or
local, discourse event, but which are neither institutionalized (i.e., conventionalized
usages throughout the language community) nor listable in the lexicon. Such produc-
tive innovations are widely reported in the sign language literature, but here we will
focus on the diachronic and institutional senses of lexicalization. Traugott and Dasher
(2002, 283) define lexicalization as “a change in the syntactic category of a lexeme
given certain argument structure constraints, e.g. use of the nouns calendar or window
as verbs or […] the formation of a new member of a major category by the combination
of more than one meaningful element, e.g. by derivational morphology or com-
pounding”.
Various word formation processes lead to lexicalization, including compounding
and blending, derivation and conversion. A reanalysis that involves the weakening or
loss of the boundary between words or morphemes leading to compounding is a type
of lexicalization (Hopper/Traugott 2003), meaning that while reanalysis has often been
thought of as a grammaticalization process, it does not take place solely within that
domain. Brinton and Traugott (2005) refer to this as “fusion”, wherein individually
definable features of compositionality are decreased in favour of the new whole. While
the component parts contributing to a new lexical item lose their individual autonomy,
the new lexical word gains an autonomy of its own. This fusion has also been referred
to as “univerbation”, the “unification of two or more autonomous words to form a
third; univerbation is also involved in lexicalizations of phrases into lexemes […] or of
complex into simple lexemes” (Brinton/Traugott 2005, 68).

2.2. Grammaticalization

Grammaticalization is the process whereby functional categories come into being,
either when lexical items take on a grammatical function in certain constructions, or
when items that are already grammatical in nature develop into further grammatical
categories.
Here, we adopt the definition of grammaticalization given by Brinton and Traugott
(2005, 99):

Grammaticalization is the change whereby in certain linguistic contexts speakers use parts
of a construction with a grammatical function. Over time the resulting grammatical item
may become more grammatical by acquiring more grammatical functions and expanding
its host-classes.

Grammaticalization is thus a process of change in language whereby grammatical el-
ements emerge and continue to evolve over time. Bybee (2003, 146) sees grammaticali-
zation as “the process by which a lexical item or a sequence of items becomes a gram-
matical morpheme, changing in distribution and function in the process”. For example,
when a verb of motion or of desire (e.g., meaning ‘go’ or ‘wish’) evolves into a future
marker, it loses verb features (i.e., it is “decategorized” (Hopper 1991)) and emerges
with an entirely different distribution in the syntax of the language as a grammatical
marker. Grammaticalization is a gradual process wherein an item that is lexical in
nature (or, as we have come to learn about sign languages, could be gestural in nature)
participates in a construction that becomes increasingly grammatical in function, along
with significant changes in meaning and potentially, but not necessarily, in form. Form
changes are always in the direction of phonetic reduction and loss (e.g., I’m going to >
I’m gonna > I’menna in spoken English). Meaning changes are from concrete and
literal meanings to those more abstract and general (Brinton/Traugott 2005), some-
times referred to as “bleaching”, perhaps intended to mean semantic loss, but as Brin-
ton and Traugott point out, this term is not particularly descriptive. Instead, they sug-
gest that lexical content meaning is replaced by abstract, grammatical meaning.
In the process of grammaticalization, older forms of the lexical source often remain
viable in the language, with newer grammaticalizing usages “layering”, to use Hopper’s
(1991) term. Older forms may or may not disappear. This usually results in a great
deal of both formal and functional variation in usage. If we consider gestural sources
apparent in sign languages, it is thus not coincidental that some items seem at times
linguistic and at times gestural, but within the framework of grammaticalization, this
is neither surprising nor problematic.
Grammaticalization and lexicalization are not processes opposite from or in opposi-
tion to one another, however; rather, they are two different sorts of developmental processes in language evolution. Lexicalization, on the one hand, is responsible for the
creation of new words (“adoption into the lexicon”: Brinton/Traugott 2005, 20) or of
words used in new ways, such as a change in syntactic category. Grammaticalization,
on the other hand, leads to the creation of new grammatical items (or constructions:
Bybee 2003) from either lexical words or “intermediate” grammatical items or, as is
the case for sign languages, from gestural sources without an intervening lexical word
stage (Janzen/Shaffer 2002; Wilcox 2007). In fact, the two processes may frequently
work in tandem, beginning with the creation of a new lexical word, which may itself
have a truly gestural source, and from that, the later development of a grammatical
morpheme.
To date, most work on grammaticalization has looked at the evolution of spoken
language grammar. These studies have revealed a number of theoretical principles that
are thought to be universal (Bybee et al. 1994; Heine et al. 1991; Heine/Reh 1984;
Hopper 1991; Hopper/Traugott 1993), leading Bybee (2001) to suggest that universals
of change may in fact be more robust than synchronic language universals generally.
If so, we might assume that the same grammaticalization processes take place in sign
languages, which the work on ASL finish as a perfective and completion marker (Jan-
zen 1995, 2003), topic constructions (Janzen 1998, 1999; Janzen/Shaffer 2002), a case-
marker in Israeli SL (Meir 2003), negation in German Sign Language (DGS, Pfau/
Steinbach 2006), modals in several sign languages (Shaffer 2000, 2002, 2004; Wilcox/
Shaffer 2006; Wilcox/Wilcox 1995), and discourse markers in ASL (Wilcox 1998),
among others, has begun to demonstrate.
Whereas early work on grammaticalization suggested that metaphor was perhaps
the most productive mechanism behind grammaticalizing elements (see Bybee et al.
2004), it is now evident that metonymy plays a crucial role as well. Both lexicalization
and grammaticalization involve semantic change, but what takes place here is not quite
the same for each process. Lexicalization commonly involves innovation, in which the
new item appears rather abruptly (although some changes, for example the radical phonological change in some compounds, may not be abrupt). In grammaticalization,
however, change occurs slowly over time, characterized by overlapping forms and
variation until, most probably motivated by pragmatic inferencing (Traugott/Dasher
2002), new grammatical constructions arise in which older meanings have generalized
and new – often dramatically reduced – phonological forms solidify.

3. Lexicalization in sign languages


What it means to be “lexicalized” in sign languages is not a simple matter, for a number of reasons described in more detail below; primarily, the issue has to do with the high level of productivity found in many construction types. Johnston and Schembri (1999) offer a definition of lexemes in Australian Sign
Language (Auslan) that is helpful in our attempt to distinguish a form that is lexicalized
from one that is not (Johnston/Schembri 1999, 126):

A lexeme in Auslan is defined as a sign that has a clearly identifiable and replicable citation
form which is regularly and strongly associated with a meaning which is (a) unpredictable
and/or somewhat more specific than the sign’s componential meaning potential, even when
cited out of context, and/or (b) quite unrelated to its componential meaning potential (i.e.,
lexemes may have arbitrary links between form and meaning).

According to Sutton-Spence and Woll (1999), in British Sign Language (BSL), the
number of lexical signs is relatively small. Sutton-Spence and Woll claim that lexical
signs are those signs that can be listed in citation form where the meaning is clear out
of context, or which are in the signer’s mental lexicon. They cite The Dictionary of
British Sign Language/English (Brien 1992) as containing just 1,789 entries, which they
suggest is misleading in terms of the overall lexicon of BSL because the productivity
of signs not found in the core lexicon is the more important source of vocabulary.
Johnston and Schembri contrast lexemes with other signs of Auslan that maintain
at least some accessible componentiality in that meaningful component parts are still
identifiable and contribute meaning to the whole. They suggest that handshape, loca-
tion, movement, and orientation are “phonomorphemes” (Johnston/Schembri 1999,
118) that can individually be meaningful (e.g., a flat handshape identifying a flat sur-
face) and that contribute to a vast productivity in formational/meaning constructions
which are not fully lexicalized and thus are not in the lexicon proper of the language
(see also Zeshan’s (2003) discussion of lexicalization processes in Indo-Pakistani Sign
Language (IPSL), based largely on Johnston and Schembri’s criteria).
So-called ‘classifier’ handshapes that participate in productivity and the creation of
novel forms have been noted in numerous sign languages (see chapter 8). Supalla
(1986), in one of the first descriptions of a classifier system in a sign language (ASL),
states that these handshapes participate in the ASL class of verbs of motion and loca-
tion. Supalla claims that signers can manipulate the participating handshape morpheme
in ways that suggest that they recognize that these handshapes are forms that are
independent within the construction, and thus, under Johnston and Schembri’s defini-
tion, would not qualify as lexicalized. In these productive forms that depict entities
and events (see Liddell 2003; Dudis 2004), handshapes, positioning and locations of
the hands (and body), and movement are all dynamic, which means that they can be
manipulated to reflect any number of shapes, movements, and interactions. Nonethe-
less, classifier forms have been seen as sources leading to lexicalization (Aronoff et
al. 2003).
Lexicalization takes place when a single form begins to take on specific meaning
which, as Johnston and Schembri note, may not necessarily be predictable from the
component parts. They list sister in Auslan as an example (Johnston/Schembri 1999,
129), articulated with an upright but hooked index finger tapping the nose; yet, the
meaning of ‘sister’ is not seen as related to any act of tapping the nose with the finger
or another hooked object. In form, lexicalized signs become more or less invariable.
Slight differences in articulation do not alter the meaning. Further examples from Aus-
lan are given in Figure 34.1, from Johnston and Schembri (1999).

picture ‘square shaped in vertical plane’; lock ‘turn small object in vertical surface’; meet ‘two people approach each other’
Fig. 34.1: In Auslan, picture is a lexicalized tracing pattern, lock is a lexicalized handling classifier
form, and meet is a lexicalized proform (Johnston/Schembri 1999, 128, their Figures 7–9).
Copyright © 1999 by John Benjamins. Reprinted with permission.

Sign languages appear to allow for a wide range of related forms that stretch from
those that are highly productive, fully componential, meaningful complexes to lexical
forms as described above. This means that even though a lexicalized sign may exist,
less lexicalized forms are possible that may take advantage of distinguishable compo-
nent parts. Regarding this, Johnston and Schembri (1999, 129 f.) point out that “most
sign forms which are lexicalized may still be used or performed in context in such a
way as to foreground the meaning potential of one or more of the component aspects”.
This potential appears to be greater for signed than for spoken languages. This is no
doubt at least in part due to the iconic manipulability of the hands as articulators
moving in space and the conceptual ability to represent things other than actual hands.
Dudis (2004) refers to this as an aspect of body partitioning, which leads to a plethora
of meaningful constructions at the signer’s disposal, and is one of the defining charac-
teristics of sign languages.
3.1. meet in ASL, IPSL, and Auslan

In a number of sign languages, for example, ASL, IPSL, and Auslan, a lexicalized sign
meaning ‘to meet’ (see Figure 34.1 above) is articulated similarly, clearly lexicalized
from a classifier form (the recent controversies concerning sign language classifiers are
not discussed here, but see, for example, Schembri (2003) and Sallandre (2007)). The
upright extended index finger as a classifier handshape is often called a “person classi-
fier” (e.g., Zeshan 2003) but this may be over-ascribing semantic properties to it, even
though prototypically it may represent a person, if only because we so frequently ob-
serve and discuss humans interacting. In the present context, Frishberg (1975, 715)
prefers “one self-moving object with a dominant vertical dimension meets one self-
moving object with a dominant vertical dimension” when two such extended index
fingers are brought together in space. A classifier construction such as this is highly
productive, as Frishberg also notes, meaning that the actions of two individu-
als, whatever they might be (e.g., approaching, not approaching, one turning away
from the other, etc.) can be articulated. However, in some contexts, this productivity
is significantly diminished. Frishberg glosses her classifier description as meet. But since
the form is in fact highly productive, the notion of ‘meeting’ would only apply in some
contexts; thus the gloss is not appropriate for this classifier overall, and is reserved in
the present discussion for the lexicalized form meet (‘to meet’).
In the case of lexicalized meet, at least for ASL, the resulting meaning has little to
do with the physical event of two people approaching one another, and more to do
with initial awareness of the person, for example, in the context of ‘I met him in the
sixties’. The lexicalized form has lost compositional importance as well, such that the
path movements of the two hands do not align with located referents, that is, they are
spatially arbitrary.
Problematic, however, is that this lexicalized version glossed as meet appears to be
the very end point of a continuum of articulation possibilities from fully compositional
to fully lexicalized and non-productive. In ASL, for example, if the signer has the
lexicalized meaning in mind, but it is at least somewhat tied to the physical event, the
articulated path movement may not be fully arbitrary. This illustrates that productive
and lexical categories may not be discrete, and thus explains why it is sometimes diffi-
cult to determine how lexicalized forms should be characterized. For Johnston and
Schembri, there is the resulting practical dilemma of what should and should not be
included as lexemes in a dictionary, which may contribute to a seemingly low number
of lexemes in sign languages altogether.

3.2. Gesture and lexicon

Sign language and gesture have long had an uneasy alliance, with much early formal
analysis working to show that sign language is not just elaborate gesturing, such that
the question of whether signers gesture at all has even been asked (Emmorey 1999;
see also chapter 27 on gesture). More recently, some researchers have looked for po-
tential links between gesture and sign language, partly due to a renewed interest in
gestural sources of all human language in an evolutionary sense (e.g., Armstrong/
Fig. 34.2: $ukriya: ‘thanks’ in IPSL (Zeshan 2000, 147). Copyright © 2000 by John Benjamins. Reprinted with permission.
Fig. 34.3: paisa: ‘money’ in IPSL (Zeshan 2000, 166). Copyright © 2000 by John Benjamins. Reprinted with permission.

Stokoe/Wilcox 1995; Armstrong/Wilcox 2007; Stokoe 2001; see also chapter 23, Manual
Communication Systems: Evolution and Variation). One area of investigation has con-
cerned the role that gesture plays in grammaticalization in sign languages as illustrated
in section 4 below.
Gesture has been noted as the source for signs in the lexicon of sign languages as
well, although once again much work has attempted to show that gestural sources for
lexical items give way to formal, arbitrary properties, especially in terms of iconicity
(Frishberg 1975). However, it is undeniable that gestures are frequently such sources,
even though no comprehensive study of this phenomenon has been undertaken. Here
we illustrate the link between gesture and lexicon from one source, IPSL (Zeshan
2000), but others are noted in section 3.3 below.
Zeshan (2000) cites examples of IPSL signs that are identical in form and meaning
to gestures found among hearing people, but where usage by signers differs from usage
by hearing gesturers in some way, for example, the gestures/signs for ‘thanks’ (Fig-
ure 34.2) and ‘money’ (Figure 34.3). Zeshan found that the gesture for ‘thanks’ is
restricted in use to beggars, whereas the IPSL sign $ukriya: (‘thanks’) is unrestricted
and used by anyone in any context. The gesture for ‘money’, once adopted into IPSL
(labeled paisa: ‘money’ in Zeshan 2000) participates in signed complexes such as
paisa:^dena: (‘give money’) when the sign is moved in a direction away from the signer
(Zeshan 2000, 39). In contrast, the gestural form is not combinable, nor can its form
be altered by movement.
Gestures such as these are likely widespread as sources for lexical signs, but as has
been demonstrated for IPSL, as signers co-opt these gestures and incorporate them
into the conventionalized language system, they conform to existing patterning within
that language in terms of phonetic, morphological, and syntactic constraints and the
properties of the categories with which they become associated.

3.3. Common word formation processes: compounding, conversion, and fingerspelling

New word formation in sign languages is often innovative: signers combine iconic parts
(handshapes, movements, etc.) into some new structure that, if the innovation is useful
Fig. 34.4: email in ASL. The dominant hand index finger moves away from the signer several
times (adapted from Signing Savvy, http://www.signingsavvy.com/index.php; retrieved
August 9, 2009). Image copyright © 2009, 2010 Signing Savvy, LLC. All rights reserved.
Reprinted with permission.

throughout the language community, may be institutionalized and thus lexicalized. This
may take place in response to changing technology, social and cultural changes, and
education. As discussed at the beginning of section 3, such innovations are usually
compositional, using metonymic representations of referent characteristics or proper-
ties. The item referred to may be quite abstract, thus the resulting representation may
also be metaphoric in nature. While lexicalization is in progress, forms used in the
community may be quite variable for a period of time until one form emerges as an
institutionalized lexicalized sign. One recent ASL example is the sign email, shown in
Figure 34.4. Other means of creating lexicon are also common, such as compounding
and blending, conversion, derivation, etc. Borrowing may also contribute to the lexicon
of a language, and in sign languages, this may be borrowing from another sign language
or from a surrounding spoken language primarily through fingerspelled forms.

3.3.1. Compounding

Compounding, as mentioned above, is a frequent source of new words in spoken lan-
guages, and this is no less true for sign languages (also see chapter 5, Word Classes
and Word Formation). Johnston and Schembri (1999) draw a distinction between pro-
ductive compounding, whereby two nominal lexemes are articulated phonologically as
a compound that fits a more local discourse purpose, and lexicalized compounds which
become operational as new, distinct lexical items, often with phonological structure
that differs radically from the simple combination of the two source lexemes, and a
meaning that may or may not reflect the source lexeme meanings and may be quite
arbitrary. Drawing on the work of Klima and Bellugi (1979) and others, Sutton-Spence
and Woll (1999, 102) state that in lexicalized BSL compounds (e.g., ‘blood’ from
red^flow; ‘people’ from man^woman; ‘check’ from see^maybe):
– the initial hold of the first sign is lost;
– any repeated movement in the second sign is lost;
– the base hand of the second sign is established at the point in time when the first sign starts;
– there is rapid transition between the first and second sign;
– the first sign is noticeably shorter than the second.

Although there are differences cross-linguistically, these observations are fairly indica-
tive of lexicalized compounding across sign languages. Johnston and Schembri suggest,
however, that the resulting forms in lexicalized compounding in Auslan may best be
referred to as blends.

3.3.2. Conversion and derivation

Changes in lexical category by conversion or derivation in sign languages are most
often noted in relation to noun-verb pairs, discussed extensively for ASL by Supalla
and Newport (1978) but also noted in a number of other sign languages such as Auslan
(Johnston/Schembri 1999), in which the category of noun or verb is indicated by differ-
ent movement patterns (see chapter 5, Word Classes and Word Formation, for further
discussion). But Johnston and Schembri also point out that a noun and a verb within
a given pair, no matter whether they are examples of conversion or whether one cat-
egory is derived from the other, cannot be considered as independent words and thus
are not entered into the lexicon separately.

3.3.3. Lexicalized fingerspelling

In a number of sign languages, some very commonly fingerspelled words have become
stylized and often considerably reduced in complexity so as to be considered as lexical-
ized signs. Battison (1978) shows that the list of lexicalized fingerspellings in ASL ⫺
‘fingerspelled loan signs’ in his terminology ⫺ is quite extensive. Lexicalized fingerspel-
lings are typically quite short, between two and five letters, and are often reduced to
essentially the first and last letters; for example, b-a-c-k in ASL becomes b-k. Evidence
that these forms are not simply reduced fingerspellings comes from the observation
that they can take on features of other lexemes and participate in grammatical con-
structions. b-k, for example, can move in the direction of a goal. Brentari (1998) ob-
serves that frequently such lexicalized fingerspellings reduce in form to conform to a
general constraint on handshape aperture change within monomorphemic signs, that
is, there will be only one opening or one closing aperture change. The lexicalization of
the fingerspelled b-u-t in ASL, for instance, involves the reduction of the overall item
to a B (u) handshape (often characterized by a lack of tenseness such that the hand-
shape appears as a slightly lax 5 (<) handshape oriented with the palm toward the
addressee) closing to a T (5) handshape. Thus the aperture change in the resulting
form conforms to a single movement and consequently, the resulting form appears to
be a lexicalized sign composed of a single syllable.
Lexicalized fingerspellings have also been noted in sign languages that have two-
handed fingerspelling systems such as BSL (Brennan 2001) and Auslan (Johnston/
Schembri 1999, 2007). For Auslan, Johnston and Schembri (1999, 135) state that simi-
larly to lexicalization patterns generally, “lexicalized fingerspelling appears to exist on
a continuum, with some items both phonologically and semantically lexicalized, others
only partially phonologically or semantically lexicalized, and yet others being examples
of nonce (‘one-off’) borrowings which undergo only local lexicalization for the duration
of a particular signed exchange”.

4. Grammaticalization in sign languages

The next part of this chapter examines how grammaticalization is evident in sign lan-
guages. This is a relatively new field in sign language research, but it is helped by the
growing body of research on spoken language grammaticalization which, as mentioned
above, has demonstrated principles of language change that appear to be universal.
One issue in grammaticalization studies in sign language is that detailed historical
records are scarce, and change can only be documented in detail when usage can be
compared at different times in the history of the language. Nonetheless, some research-
ers have been able to piece together information on the relationships among construc-
tions diachronically, and otherwise have relied on principles that have emerged in this
field based on languages for which such historical comparison can be made. As in the
discussion of lexicalization above, space permits the discussion of only a sampling of
grammaticalization findings, even though this work is expanding to more grammatical
categories in more sign languages (for discussion of further examples, see Pfau/Stein-
bach (2006, 2011)).
An important consideration is that in work on sign language grammaticalization,
gestural sources have frequently been shown to exist for grammatical elements. Wilcox
(2007) demonstrates that grammaticalization in sign languages moves along two path-
way types characterized by a crucial difference. Based on his own work and the work
of others, Wilcox shows that some items have grammaticalized from gestural sources
through a lexical stage, that is, along a gesture > lexical item > grammatical item path-
way, while others have bypassed a lexical stage, instead taking the route of gesture >
grammatical item, without an intervening lexical stage whatsoever. As mentioned
above, the ability to recognize gestural sources has recently been acknowledged as an
important insight in our understanding of grammaticalization generally (Heine/Kut-
eva 2007).
In some (but not all) contexts, grammatical markers have appeared as affixes in
spoken languages, as is frequently the case with tense and aspect markers. In
contrast, clearly definable affixation has not been reported often for sign languages,
suggesting that this is not automatically the place to look for grammatical material.
Issues surrounding the lack of affixation in sign languages will not be taken up here,
partly because such affixation may not yet be very well understood (should, for exam-
ple, as Wilcox (2004) suggests, co-occurring grammatical items articulated as facial
gestures be considered a kind of affixation because they appear to be bound mor-
phemes dependent on what is articulated with the hands?), and because developing
grammar does not necessarily depend on affixation even in a traditional sense. How-
ever, one account of affixing in Israeli SL is found in Meir and Sandler (2008, 49 f.),
and Zeshan (2004) reports that affixation occurs in her typological survey of negation
in sign languages.
Below we look at examples from each of the two routes of grammaticalization as
outlined by Wilcox (2007).

4.1. finish in ASL

finish in ASL has been shown to have developed from a fully functioning verb to a
number of grammatical usages, including a completive marker and a perfective marker,
which may in fact qualify as an affix (Janzen 1995). The fully articulated two-handed
form of finish is shown in Figure 34.5. As a perfective marker, indicating that some-
thing has taken place in the past, the form is reduced phonologically to a one-handed
sign which is articulated with a very slight flick of the wrist. When used as a perfective
marker, finish always appears pre-verbally. In its completive reading, the sign may be
equally reduced, but it is positioned either post-verbally or clause-finally. Interestingly,
ASL signers report that finish as a full verb is nowadays rare in signers’ discourse.
The grammatical use of finish in ASL has not reached inflectional status in that it
does not appear to be obligatory. Also, Janzen (1995) does not report a gestural source
for this grammaticalized item, but it is possible that such an iconic gestural element
does exist, in which case finish would demonstrate Wilcox’s gesture > lexical item > grammatical item route of development.
An additional grammaticalized use of finish in ASL is that of a conjunction (Janzen
2003), as illustrated in (1) (note that ‘(2h)’ indicates a two-handed version of a normally
one-handed sign; ‘//’ signifies a pause; ‘+++’ indicates multiple movements).

top
(1) go(2h) restaurant // eat+++ finish take-advantage see train arrive [ASL]
‘(We) went to a restaurant and ate and then got a chance to go and see a
train arrive.’

Fig. 34.5: finish in its fully articulated form, with (a) as the beginning point and (b) as the end
point. The reduced form employs only one hand and has a much reduced wrist rotation
(Janzen 2007, 176). Copyright © 2007 by Mouton de Gruyter. Reprinted with permis-
sion.
In this case, finish is topic-marked (see section 4.5 for further discussion of topic
marking), functioning neither as a completive marker on the first clause, nor as an
informational topic. Rather, in this example, the manual sign and the non-manual
marker combine to enable finish to function as a linker meaning ‘and then’. Additional
such topic-marked conjunctions are discussed in Janzen, Shaffer, and Wilcox (1999).

4.2. Completive aspect in IPSL

Zeshan (2000) reports a completive marker in IPSL, labeled ho_gaya: (see Figure
34.6), which appears rather consistently in sentence-final position, and which may even
accompany other lexical signs that themselves mean ‘to end’, as in (2) from Zeshan
(2000, 63).

Fig. 34.6: The IPSL completive aspect marker ho_gaya: (Zeshan 2000, 39). Copyright © 2000 by
John Benjamins. Reprinted with permission.

(2) xatam(a) ho_gaya: [IPSL]
end compl
‘(The affair) ended (without result).’

Therefore Zeshan claims that ho_gaya: has only a grammatical and no lexical function.
In contrast to ASL finish, ho_gaya: has a gestural source that is identical in form and
means ‘go away’ or ‘leave it’ (Zeshan 2000, 40). Since there is no evidence that a
lexical sign based on this gesture ever existed in IPSL, this may be considered as an
example of Wilcox’s second route to grammar, with no intervening lexical stage (for
discussion of aspectual markers, see also chapter 9).

4.3. future in LSF and ASL

The marker of futurity in both modern French Sign Language (LSF) and modern ASL
has been shown to have developed from a gestural source which has been in use from
Fig. 34.7: (a) The French gesture meaning ‘to depart’ (Wylie 1977, 17); (b) Old LSF depart (Brou-
land 1855).

at least classical antiquity onward and still in use today in a number of countries around
the Mediterranean (Shaffer 2000; Janzen/Shaffer 2002; Wilcox 2007). Bybee et al.
(1994) note that it is common for future markers in languages to develop out of move-
ment verb constructions (as in be going to > gonna in English), verbs of desire, and
verbs of obligation. De Jorio (2000 [1832]) describes a gesture in use at least 2000 years
ago in which the palm of one hand is held edgewise moving out from underneath the
palm of the other hand to indicate departure. This gesture is shown in Figure 34.7a,
from Wylie (1977), a volume on modern French gestures.
An identical form is listed for the Old LSF sign depart in Brouland (1855); see
Figure 34.7b. Shaffer (2000) demonstrates that shortly after the beginning of the 20th
century, a similar form – although with the dominant, edgewise hand moving outward
in an elongated path – was in use in ASL to mean both ‘to go’ and ‘future’. Because
this historical form of the lexical verb go and the form future (perhaps at this stage
also a lexical form) co-existed in signers’ discourse, this represents what Hopper (1991)
calls ‘layering’, that is, the co-existence of forms with similar shapes but with different
meanings and differing in lexical/grammatical status. The elongated movement suggests
movement along a path. At some point in time, then, two changes took place. First,
the lexical verb go having this form was replaced by an unrelated verb form ‘to go’
and second, the sign future moved up to the level of the cheek, perhaps as an instance
of analogy in that it aligned with other existing temporal signs articulated in the same
region. Analogy in this respect has been considered as a motivating force in grammati-
calization (Fischer 2008; Itkonen 2005; Krug 2001). This change to a higher place of
articulation may have been gradual: Shaffer found examples of usages in LSF at an
intermediate height. Once at cheek-level, only a future reading is present; this form
cannot be used as a verb of motion, a change that took place both in ASL (Figure
34.8a) and LSF (Figure 34.8b). The future marker has thus undergone decategorializa-
tion as it moved along a pathway from full verb to future marker, which in modern
ASL appears pre-verbally. The forms illustrated in Figure 34.8 also represent a degree
of phonological reduction. Brentari (1998) states that the articulation of signs is phono-
Fig. 34.8: (a) future in modern ASL; (b) future in modern LSF (both from Shaffer 2000, 185 f.).
Copyright © 2000 by Barbara Shaffer. Reprinted with permission.

logically reduced when the fulcrum of movement is distalized. In the LSF and ASL
future marker, the fulcrum has shifted from the shoulder to the elbow, and in the most
reduced forms, to the wrist.
As is frequently the case with grammaticalizing items, multiple forms can co-exist,
often with accompanying variation in phonological form. The form illustrated in Fig-
ure 34.8a can appear clause-finally in ASL as a free morpheme with both future and
intentionality meanings. The path movement can vary according to the perceived dis-
tance in future time: a short path for an event in the near future, a longer path for
something in the distant future (note that deictic facial gestures usually accompany
these variants, but these are not discussed here). In addition, the movement can also
vary in tenseness depending on the degree of intentionality or determination. In ASL,
this future marker can also occur pre-verbally, although much of the variation in form
seen in the clause-final marker does not take place. The movement path is shortened,
perhaps with just a slight rotation of the wrist. The most highly reduced form appears
prefix-like, with the thumb contacting the cheek briefly followed by handshape and
location assimilation to that of the verb. In this case, the outward movement path of
future is lost altogether. The grammaticalization pathway of the future marker in LSF
and ASL is one of the clearest examples we find of the pathway gesture > lexical item
> grammatical item, based on evidence of usage at each stage of development.

4.4. Negative headshakes as grammatical negation

A negating headshake is reported as a facial/head gesture in a number of sign lan-
guages such as DGS (Pfau 2002; Pfau/Quer 2002; Pfau/Steinbach 2006) and many oth-
ers (see Zeshan (2004) for a typological survey). Negative headshakes occur commonly
across many cultures, either as co-speech or freestanding gestures, but as grammatical-
ized items in sign languages, they become regularized as part of specific constructions.
Pfau and his colleagues indicate that a negative headshake can be the sole negator of
the clause, and that there are language-specific constraints concerning its co-occurrence
with manual signs (see also chapter 15 on negation). In ASL, the negating headshake
may occur either with or without a negative particle articulated on the hands. In
Pfau’s (2002) examples of negation in DGS, the negative headshake (hs) occurs along
with the verb alone or with verb plus negative particle, as in (3) (Pfau 2002, 273).
Optionally, the headshake may spread onto the direct object.

hs hs
(3) mutter blume kauf (nicht) [DGS]
mother flower buy.neg (not)
‘Mother does not buy a flower.’

The grammaticalization pathway of negative headshake gesture > grammatical nega-
tive marker again illustrates Wilcox’s route in which a lexical item does not intervene.
Pfau and Steinbach (2006) refer to McClave (2001) and Kendon (2002) for a survey of
the headshake as a gestural source in language use.

4.5. Topic constructions in ASL

The use of topic-comment structure has been reported in numerous sign languages.
For ASL, Janzen (1998, 1999, 2007) and his colleagues (Janzen/Shaffer 2002; Janzen/
Shaffer/Wilcox 1999) have shown that topic marking developed along the pathway
given in (4):

(4) generalized questioning gesture > yes/no question marking > topic marking

In a widespread gesture used to enquire about something, the eyebrows are typically raised and the eyes wide open, and the hands may also be outstretched with palms up. It
is important to note that this gesture is typically used when the focus is identifiable to
both interlocutors, such as a bartender pointing at a bar patron’s empty glass and using
the facial gesture to enquire about another drink. Yes/no-questions in ASL (and many
other sign languages) are articulated with the same facial gesture, possibly along with
a forward head-tilt, which likely has a gestural source as well. A head-tilt forward in
interlocution signals attentiveness or interactional intent: the questioner is inviting a
response. Note that in a yes/no-question, too, the basic information typically being
asked about is something identifiable to the addressee, who is asked to respond either
positively or negatively (e.g., Is this your book?).
When accompanying a topic-marked phrase in ASL, the facial gesture may still
appear very much like a yes/no-question, but in this case, it does not function interac-
tively, but rather marks grounding information upon which to base some comment as
new information (also see chapter 21, Information Structure). Raised eyebrows mark
the topic phrase as well, although the head-tilt may be slightly backward or to the side,
rather than forward. Janzen (1998) found that topic phrases could be noun phrases,
temporal adverbial and locative phrases, or whole clauses (which may consist of a verb
only, since subjects and objects may not be overt). Topics appear sentence-initially and
are followed by one or more comments (but note the further grammaticalized topic-
marked finish described in section 4.1 above that links preceding and following
clauses). Topic constructions contain shared or identifiable information and, even
though they pattern like yes/no-questions, do not invite a response; thus the interactive
function of the yes/no-question has been lost (which may also explain the loss of the
forward head-tilt). An example of a simple topic-comment structure in ASL is given
in (5) (Janzen 2007, 181), with the facial (non-manual) topic marker shown in Figure
34.9. Once again, no lexical stage intervenes between the gestural source and the gram-
maticalized item.

top
(5) tomorrow night work [ASL]
‘I work tomorrow night.’

4.6. Evidentials in Catalan Sign Language (LSC)


Evidentials represent another area that illustrates Wilcox’s (2007) first type of gram-
maticalization pathway, where a gesture first acts as a source for a lexical item, which
subsequently evolves into a grammatical usage. Wilcox and Wilcox (1995) report on
evidentials in ASL such as mirror, which has as its gestural source the representation
of holding a mirror to one’s face, and which then has evolved into a modal function
often glossed as seem. Wilcox (2007) illustrates the same pathway with an interesting
and elaborate set of evidentials in LSC such as that given in Figure 34.10. Here, a
gesture indicating the face has evolved in LSC as the item remble (‘resemble’), but
with the grammatical meaning of subjective belief that something is the case based on
some sort of evidence, as illustrated in (6) (Wilcox 2007, 114; slightly adapted).

(6) resemble index3 today come no [LSC]
‘It seems that she’s not coming today.’

Increased subjective stance is one well-documented marker of grammaticalized forms (Traugott 1989; Traugott/König 1991; Brinton/Traugott 2005).

Fig. 34.9: Facial gestures marking the topic phrase tomorrow night in the ASL example (5)
(Janzen 2007, 180). Copyright © 2007 by Mouton de Gruyter. Reprinted with permis-
sion.

Fig. 34.10: resemble (remble) in LSC (Wilcox 2007, 113). Copyright © 2007 by Mouton de Gruy-
ter. Reprinted with permission.

5. The relationship between lexicalization and grammaticalization: Some issues for sign languages

Both processes of lexicalization and grammaticalization involve some change in mean-
ing, but in different directions. Brinton and Traugott (2005, 108) suggest that items
“that can undergo grammaticalization tend to have quite general meanings (e.g., terms
for ‘thing,’ ‘go,’ ‘come,’ ‘behind’), while items that lexicalize often have highly special-
ized meaning (e.g., black market)”. In grammaticalization but not in lexicalization,
lexical meaning loses ground and an operational meaning emerges or is “fore-
grounded” (Wischer 2000, 365).
Lexicalization and grammaticalization are seen as processes that differ in their di-
rection of change: lexicalization is a change toward the establishment of lexical entries
while grammaticalization is a change toward the emergence of items within grammati-
cal categories. Still, these processes do share some properties. Brinton and Traugott
(2005, 101), for example, give evidence that both grammaticalization and lexicalization

[…] are subtypes of language change subject to general constraints on language use and
acquisition. Lexicalization involves processes that combine or modify existing forms to
serve as members of a major class, while grammaticalization involves decategorialization
of forms from major to minor word class and/or from independent to bound element to
serve as functional forms. Both changes may involve a decrease in formal or semantic
compositionality and an increase in fusion.

Although lexicalization and grammaticalization differ significantly, there remain some questions regarding a possible relationship between the two processes. It was men-
tioned in section 1, for example, that an item may emerge through lexicalization which
then participates in a grammaticalizing construction. Lexicalization is not the “reverse”
of grammaticalization, however, as is sometimes suggested (see for example, Zeshan
2003, 132), but it is not clear that the principle of unidirectionality in grammaticaliza-
tion always holds, as suggested in 5.1 below. Further, once certain signs have lexical-
ized, signers may still have access to their “parts”, in effect “de-lexicalizing” them, as
discussed in section 5.2.

5.1. The case of classifier predicates and the problem of directionality

Johnston and Schembri (1999) suggest that some lexicalized forms may easily give way
to more productive usages, as discussed in section 3 above. But what is the evolutionary
relationship between the variable classifier forms (if classifiers are understood as gram-
matical categories) and invariable lexical forms? That is, which came first? The princi-
ple of unidirectionality tells us that grammatical change takes place in the direction of
lexical > grammatical, but we might consider that lexemes such as meet (discussed in
section 3.1 above) and chair (as in the chair/sit noun/verb pair; see Supalla/Newport
1978), among numerous other examples, solidify out of more productive possibilities
that include classifier handshapes or property markers. Items such as meet and chair
may be cases of what Haspelmath (2004, 28) terms “antigrammaticalization”, a type
of change that goes in the opposite direction of grammaticalization, that is, from dis-
course to syntax to morphology. Haspelmath makes clear that this does not mean
grammaticalization in reverse, in that a grammaticalized element progressively de-
volves back to its lexical source form (which presumably would have at one time been
lost). Rather, we are dealing with a process where a form more lexical in nature devel-
ops from a more grammatical source. Thus we might conclude that lexical items like
meet and chair have emerged from a wide range of variable classifier verb forms as
specific, morphologically restricted signs because they encode prototypical event sche-
mas. This may be more plausible than concluding that the variable classifier forms have
grammaticalized from the lexical sources meet and chair, as would normally be ex-
pected in a grammaticalization pathway, but further work in this area is needed.

5.2. Can lexicalized signs be “de-lexicalized”?

Even though lexicalized signs are not meaningfully dependent on their componential
parts, it seems that signers may evoke these parts in novel ways. The ASL lexeme tree,
for example, is fully lexicalized in that the upright forearm and the open handshape
with spread fingers together form a highly schematized articulation because the actual
referent may have none of the features suggested by the form: there is no requirement
that the referent tree labeled by the noun phrase has a straight, vertical trunk, nor that
it has five equally spaced branches. Neither the signer nor the addressee will be con-
cerned with discrepancies between the sign and the actual features of the referent tree
because of the schematic and symbolic nature of the lexeme. And yet, should a signer
wish to profile a certain part of the referent tree, the sign may be decomposed at least
to some extent, say by pointing to the forearm in reference to the trunk only, or by
referring to one of the fingers as representing one of the actual tree’s branches. Thus
the whole may be schematized, but unlike monomorphemic words of spoken lan-
guages, the parts are still evocable if needed. Johnston and Schembri (1999, 130) sug-
gest that this is a “de-lexicalization” process, and Brennan (1990) describes it as dor-
mant iconic features becoming revitalized. Helpful to this discussion is the suggestion
(Eve Sweetser, personal communication) that in a semantic analysis involving “mental
spaces” (see, for example, Fauconnier 1985) the visible parts of articulated signs such
as tree in a sign language are cognitively mapped onto the interlocutors’ mental image
of the referent, a mapping that is not available to speakers and hearers of a spoken
language; thus the components within the lexeme are more easily available to users of
sign languages. For signs that are claimed to have lost any connection to their iconic
motivation, such as Auslan sister (Johnston/Schembri 1999, 129), decomposition is
less available.
However, in Johnston and Schembri’s sense, de-lexicalization must refer to individual
instantiations of decomposition, and not to a change that affects the lexical item in an
institutionalized way across the language community. Furthermore, we could not suggest
that, just because such decompositional referencing is possible for some lexeme, the
signer’s mental representation of the lexeme or its inclusion in the lexicon has weakened.

6. Conclusions

Despite the material difference between sign and spoken languages due to differences
in articulation, or modality, we see evidence that sign languages change over time along
principles similar to those governing changes in spoken languages. Language change
along these lines in sign languages is beginning to be explored, which will undoubtedly
tell us much about sign language typology and about the processes of change in lan-
guage generally, no matter the modality of use. The domains of lexicon and grammar
in sign languages are still not well understood, but as more sign languages are de-
scribed, more information about how these domains are formed will emerge. There
remain some challenges, however.
For example, the vast productivity and variation in form in sign languages, together with the relatively small numbers of lexemes (at least as these are usually counted as dictionary entries), make it difficult to know at what stage lexicalization takes place and how stable lexicalized forms are. It is not certain whether this productivity is apparent simply because sign languages tend to be relatively young languages whose lexicons will expand as they evolve, or whether the principles behind compositionality and productivity pull word formation in a direction away from a solidified or ‘frozen’ lexicon.
Then, too, how extensive might grammaticalization be in sign languages?
As Johnston and Schembri (2007) point out, grammaticalization often takes centuries
to unfold, and the youth of most sign languages may mean that many aspects of gram-
maticalization are newly underway. If this is the case, however, we might expect to
find that many grammatical categories are at or near the beginning stages of their
development, but this has not been established as fact. The visual nature of language
structure has given us a different sense of what combinatorial features in both lexicon
and grammar might be like, and research on both areas often reveals a vast complexity
in both structure and function.
A new surge of interest in the relationship between gesture and language suggests that much can be learned from examining gestural sources in both lexicalization and grammaticalization in sign languages (Wilcox 2007). Such gestures are not ‘hearing people’s’ gestures; they belong to deaf people, too, and evidence is mounting that they are integral to both lexicalization and grammaticalization patterns in sign languages.

7. Literature
Armstrong, David F./Stokoe, William C./Wilcox, Sherman E.
1995 Gesture and the Nature of Language. Cambridge: Cambridge University Press.
Armstrong, David F./Wilcox, Sherman E.
2007 The Gestural Origin of Language. Oxford: Oxford University Press.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy
2003 Classifier Constructions and Morphology in Two Sign Languages. In: Emmorey, Karen
(ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Law-
rence Erlbaum, 53⫺84.
Battison, Robbin
1978 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Brennan, Mary
1990 Word-formation in British Sign Language. Stockholm: Stockholm University Press.
Brennan, Mary
2001 Making Borrowings Work in British Sign Language. In: Brentari, Diane (ed.), Foreign
Vocabulary in Sign Languages: A Cross-Linguistic Investigation of Word Formation.
Mahwah, NJ: Lawrence Erlbaum, 49⫺85.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brien, David (ed.)
1992 The Dictionary of British Sign Language/English. London: Faber & Faber.
Brinton, Laurel J./Traugott, Elizabeth Closs
2005 Lexicalization and Language Change. Cambridge: Cambridge University Press.
Brouland, Joséphine
1855 Langage Mimique: Spécimen d’un Dictionaire des Signes. Washington, DC: Gallaudet
University Archives.
Bybee, Joan
2001 Phonology and Language Use. Cambridge: Cambridge University Press.
Bybee, Joan
2003 Cognitive Processes in Grammaticalization. In: Tomasello, Michael (ed.), The New Psy-
chology of Language, Volume 2: Cognitive and Functional Approaches to Language
Structure. Mahwah, NJ: Lawrence Erlbaum, 145⫺167.
Bybee, Joan/Perkins, Revere/Pagliuca, William
1994 The Evolution of Grammar: Tense, Aspect, and Modality in the Languages of the World.
Chicago: The University of Chicago Press.
Dudis, Paul G.
2004 Body Partitioning and Real-space Blends. In: Cognitive Linguistics 15(2), 223⫺238.
Emmorey, Karen
1999 Do Signers Gesture? In: Messing, Lynn/Campbell, Ruth (eds.), Gesture, Speech, and
Sign. New York: Oxford University Press, 133⫺159.
Fischer, Olga
2008 On Analogy as the Motivation for Grammaticalization. In: Studies in Language 32(2),
336⫺382.
Fauconnier, Gilles
1985 Mental Spaces. Cambridge, MA: MIT Press.
Frishberg, Nancy
1975 Arbitrariness and Iconicity: Historical Change in American Sign Language. In: Lan-
guage 51, 696⫺719.
Haiman, John
1994 Ritualization and the Development of Language. In: Pagliuca, William (ed.), Perspec-
tives on Grammaticalization. Amsterdam: Benjamins, 3⫺28.
Haspelmath, Martin
2004 On Directionality in Language Change with Particular Reference to Grammaticaliza-
tion. In: Fischer, Olga/Norde, Muriel/Perridon, Harry (eds.), Up and down the Cline ⫺
The Nature of Grammaticalization. Amsterdam: Benjamins, 17⫺44.
Heine, Bernd/Claudi, Ulrike/Hünnemeyer, Friederike
1991 Grammaticalization: A Conceptual Framework. Chicago: University of Chicago Press.
Heine, Bernd/Kuteva, Tania
2007 The Genesis of Grammar: A Reconstruction. Oxford: Oxford University Press.
Heine, Bernd/Reh, Mechthild
1984 Grammaticalization and Reanalysis in African Languages. Hamburg: Helmut Buske
Verlag.
Hopper, Paul
1991 On Some Principles of Grammaticization. In: Traugott, Elizabeth Closs/Heine, Bernd
(eds.), Approaches to Grammaticalization, Volume I: Focus on Theoretical and Method-
ological Issues. Amsterdam: Benjamins, 149⫺187.
Hopper, Paul/Traugott, Elizabeth Closs
2003 Grammaticalization (2nd Edition). Cambridge: Cambridge University Press.
Itkonen, Esa
2005 Analogy as Structure and Process. Amsterdam: Benjamins.
Janzen, Terry
1995 The Polygrammaticalization of FINISH in ASL. MA Thesis, University of Manitoba,
Winnipeg.
Janzen, Terry
1998 Topicality in ASL: Information Ordering, Constituent Structure, and the Function of
Topic Marking. PhD Dissertation, University of New Mexico, Albuquerque.
Janzen, Terry
1999 The Grammaticization of Topics in American Sign Language. In: Studies in Language
23(2), 271⫺306.
Janzen, Terry
2003 finish as an ASL Conjunction: Conceptualization and Syntactic Tightening. Paper Pre-
sented at the Eighth International Cognitive Linguistics Conference, July 20⫺25, 2003,
Logroño, Spain.
Janzen, Terry
2007 The Expression of Grammatical Categories in Signed Languages. In: Pizzuto, Elena/
Pietrandrea, Paola/Simone, Raffaele (eds.), Verbal and Signed Languages: Comparing
Structures, Constructs and Methodologies. Berlin: Mouton de Gruyter, 171⫺197.
Janzen, Terry/Shaffer, Barbara
2002 Gesture as the Substrate in the Process of ASL Grammaticization. In: Meier, Richard/
Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spo-
ken Languages. Cambridge: Cambridge University Press, 199⫺223.
Janzen, Terry/Shaffer, Barbara/Wilcox, Sherman
1999 Signed Language Pragmatics. In: Verschueren, Jef/Östman, Jan-Ola/Blommaert, Jan/
Bulcaen, Chris (eds.), Handbook of Pragmatics, Installment 1999. Amsterdam: Benja-
mins, 1⫺20.
Johnston, Trevor/Schembri, Adam
1999 On Defining Lexeme in a Signed Language. In: Sign Language & Linguistics 2(2),
115⫺185.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge:
Cambridge University Press.
Jorio, Andrea de
2000 [1832] Gesture in Naples and Gesture in Classical Antiquity: A Translation of La mimica
degli antichi investigata nel gestire napoletano, Gestural Expression of the Ancients in
the Light of Neapolitan Gesturing, and with an Introduction and Notes by Adam Kendon
(translated by Adam Kendon). Bloomington, IN: Indiana University Press.
Kendon, Adam
2002 Some Uses of the Headshake. In: Gesture 2(2), 147⫺182.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Krug, Manfred G.
2001 Frequency, Iconicity, Categorization: Evidence from Emerging Modals. In: Bybee, Joan/
Hopper, Paul (eds.), Frequency and the Emergence of Linguistic Structure. Amsterdam:
Benjamins, 309⫺335.
Lehmann, Christian
2002 New Reflections on Grammaticalization and Lexicalization. In: Wischer, Ilse/Diewald,
Gabriele (eds), New Reflections on Grammaticalization: Proceedings from the Interna-
tional Symposium on Grammaticalization 1999. Amsterdam: Benjamins, 1⫺18.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
McClave, Evelyn Z.
2001 The Relationship Between Spontaneous Gestures of the Hearing and American Sign
Language. In: Gesture 1, 51⫺72.
Meir, Irit
2003 Modality and Grammaticalization: The Emergence of a Case-marked Pronoun in ISL.
In: Journal of Linguistics 39(1), 109⫺140.
Meir, Irit/Sandler, Wendy
2008 A Language in Space: The Story of Israeli Sign Language. New York: Lawrence Erl-
baum.
Pfau, Roland
2002 Applying Morphosyntactic and Phonological Readjustment Rules in Natural Language
Negation. In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality
and Structure in Signed and Spoken Languages. Cambridge: Cambridge University
Press, 263⫺295.
Pfau, Roland/Quer, Josep
2002 V-to-Neg Raising and Negative Concord in Three Sign Languages. In: Rivista di Grammat-
ica Generativa 27, 73⫺86.
Pfau, Roland/Steinbach, Markus
2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign
Languages. In: Linguistics in Potsdam 24, 3⫺98.
Pfau, Roland/Steinbach, Markus
2011 Grammaticalization in Sign Languages. In: Narrog, Heiko/Heine, Bernd (eds.), The
Oxford Handbook of Grammaticalization. Oxford: Oxford University Press, 683⫺695.
Sallandre, Marie-Anne
2007 Simultaneity in French Sign Language Discourse. In: Vermeerbergen, Myriam/Leeson,
Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed Languages: Form and Function.
Amsterdam: Benjamins, 103⫺125.
Schembri, Adam
2003 Rethinking “Classifiers” in Signed Languages. In: Emmorey, Karen (ed.), Perspectives
on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 3⫺34.
Senghas, Ann/Kita, Sotaro/Özyürek, Aslı
2004 Children Creating Core Properties of Language: Evidence from an Emerging Sign Lan-
guage in Nicaragua. In: Science 305, 1779⫺1782.
Shaffer, Barbara
2000 A Syntactic, Pragmatic Analysis of the Expression of Necessity and Possibility in Ameri-
can Sign Language. PhD Dissertation, University of New Mexico, Albuquerque.
Shaffer, Barbara
2002 can’t: The Negation of Modal Notions in ASL. In: Sign Language Studies 3(1), 34⫺53.
Shaffer, Barbara
2004 Information Ordering and Speaker Subjectivity: Modality in ASL. In: Cognitive Lin-
guistics 15(2), 175⫺195.
Stokoe, William C.
2001 Language in Hand: Why Sign Came Before Speech. Washington, DC: Gallaudet Univer-
sity Press.
Supalla, Ted
1986 The Classifier System in American Sign Language. In: Craig, Colette (ed.), Noun
Classes and Categorization. Amsterdam: Benjamins, 181⫺214.
Supalla, Ted/Newport, Elissa
1978 How Many Seats in a Chair? The Derivation of Nouns and Verbs in American Sign
Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language
Research. New York: Academic Press, 91⫺132.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Traugott, Elizabeth Closs
1989 On the Rise of Epistemic Meanings in English. In: Language 65(1), 31⫺55.
Traugott, Elizabeth Closs/Dasher, Richard B.
2002 Regularity in Semantic Change. Cambridge: Cambridge University Press.
Traugott, Elizabeth Closs/König, Ekkehard
1991 The Semantics-pragmatics of Grammaticalization Revisited. In: Traugott, Elizabeth
Closs/Heine, Bernd (eds.), Approaches to Grammaticalization (1). Amsterdam: Benja-
mins, 189⫺218.
Wilcox, Phyllis
1998 give: Acts of Giving in American Sign Language. In: Newman, John (ed.), The Linguis-
tics of Giving. Amsterdam: Benjamins, 175⫺207.
Wilcox, Sherman
2004 Cognitive Iconicity: Conceptual Spaces, Meaning, and Gesture in Signed Language. In:
Cognitive Linguistics 15(2), 119⫺147.
Wilcox, Sherman
2007 Routes from Gesture to Language. In: Pizzuto, Elena/Pietrandrea, Paola/Simone, Raf-
faele (eds.), Verbal and Signed Languages: Comparing Structures, Constructs and Meth-
odologies. Berlin: Mouton de Gruyter, 107⫺131.
Wilcox, Sherman/Shaffer, Barbara
2006 Modality in ASL. In: Frawley, William (ed.), The Expression of Modality. Berlin: Mou-
ton de Gruyter, 207⫺238.
Wilcox, Sherman/Wilcox, Phyllis
1995 The Gestural Expression of Modality in ASL. In: Bybee, Joan/Fleischman, Suzanne
(eds.), Modality in Grammar and Discourse. Amsterdam: Benjamins, 135⫺162.
Wischer, Ilse
2000 Grammaticalization Versus Lexicalization: ‘Methinks’ There Is Some Confusion. In:
Fischer, Olga/Rosenbach, Anette/Stein, Dieter (eds.), Pathways of Change: Grammati-
calization in English. Amsterdam: Benjamins, 355⫺370.
Wylie, Laurence William
1977 Beaux Gestes: A Guide to French Body Talk. Cambridge, MA: The Undergraduate
Press.
Zeshan, Ulrike
2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Ben-
jamins.
Zeshan, Ulrike
2003 ‘Classificatory’ Constructions in Indo-Pakistani Sign Language: Grammaticalization
and Lexicalization Processes. In: Emmorey, Karen (ed.), Perspectives on Classifier Con-
structions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 113⫺141.
Zeshan, Ulrike
2004 Hand, Head, and Face: Negative Constructions in Sign Languages. In: Linguistic Typol-
ogy 8, 1⫺58.

Terry Janzen, Winnipeg (Canada)

35. Language contact and borrowing


1. Introduction
2. Language contact and the bilingual situation of sign languages
3. Contact between spoken languages and sign languages
4. Contact between sign languages
5. Language attrition and death
6. Conclusion
7. Literature

Abstract
This chapter is concerned with contact between sign languages and spoken languages,
contact between sign languages, and the outcomes of this contact. Earlier approaches focusing on diglossia and pidginization are reviewed, as are more recent studies of bilingualism and modality, including code-switching, code-mixing, and code-blending and their features. The consequences of sign language contact with spoken languages, including mouthing and fingerspelling, will be detailed, as will the outcomes of contact between sign languages, such as lexical borrowing and International Sign. Contact resulting in language attrition and language death will also be briefly discussed.

1. Introduction
The focus in this review will be on bilingualism and externally triggered change in sign
language as a result of language contact and borrowing. Contact can occur between
two sign languages or between a sign language and a spoken language, and both unimo-
dal (two sign languages) and cross-modal (sign language/spoken language) bilingualism
can be found in Deaf communities. However, because of the minority language situa-
tion of most sign languages, contact between spoken and sign languages, and cross-
modal bilingualism have been relatively well-researched, with more limited research on
contact between sign languages and almost no research on sign language/sign language
bilingualism.
Using the contrast drawn by Hamers and Blanc (2003) between bilingualism (com-
munity-level use of more than one language) and bilinguality (an individual’s use of
more than one language), it can be said that Deaf communities exhibit bilingualism,
while individuals in Deaf communities exhibit variable degrees of bilinguality in a
signed and spoken/written language. Of particular interest in relation to societal cross-
modal bilingualism are those communities where there is widespread cross-modal bilin-
gualism among both hearing and deaf people (see Woll/Adam (2012) for a review),
but in all Deaf communities there are influences from spoken languages, resulting from
code-blending as well as the more familiar code-mixing and code-switching. Borrowing
can also be extensive, primarily from the dominant spoken/written language to the sign
language, or where two sign languages are in contact, between sign languages. As in
all language contact situations, as minority language speakers become more fluent in
the majority language, their first language loses linguistic features which are not re-
placed; when transmission to children is interrupted, the second generation become
semi-speakers (Dorian 1982). The final section, therefore, will focus on language shift,
including an exploration of language attrition in terms of the individual, and language
death in relation to the community.

2. Language contact and the bilingual situation of sign languages

Deaf communities form minority language communities within dominant spoken lan-
guage communities. The effects of language contact in such settings can be seen across
a range of linguistic phenomena, including borrowings and loans, interference, conver-
gence, transference, bilingualism, code switching, foreigner talk, language shift, lan-
guage attrition, language decline, and language death (Thomason 2001). The effects of contact between sign languages and spoken languages parallel those of contact between spoken languages in similar sociolinguistic contexts. Additionally, sign languages can be in
contact with other sign languages, and the same power asymmetries can often be seen
in the outcomes of contact.
Language contact can result in bilingualism (Grosjean 1982); in addition, contact between languages can result in phonological, lexical, and grammatical change in either or both languages. As Sankoff (2001) notes, languages used by bilin-
guals may undergo additional changes that are different from those that are found in
monolingual communities, as additional factors may drive change. With respect to sign
languages, Lucas and Valli (1989, 1991, 1992) report that the major outcomes of lan-
guage contact, such as lexical influence from one language on the other, foreigner talk,
interference (Weinreich 1968), and the creation of pidgins, creoles, and mixed systems,
are also found in signed-spoken language contact. Johnston and Schembri (2007) adapt
Lucas and Valli’s (1992) model of the different varieties of signing to describe the
differences between contact and artificial varieties of signing in relation to the situation
of Australian Sign Language (Auslan). In contact signing between Auslan and English,
a simplified English word order, reduced use of space and non-manual features, as well
as some idiosyncratic patterns are found, whereas in artificially created varieties de-
signed to represent English such as Australasian Signed English, the syntax follows the
syntax of English.
In some situations, contact results in the creation of pidgins and creoles.
Cross-modal societal bilingualism has been reported in many communities in which
Deaf people live. Different types of language contact and social structure in communi-
ties such as those, for example, of Martha’s Vineyard, Bali, and Yucatan, are described
and contrasted by Woll and Ladd (2003). In most of these communities, there is a high
incidence of deafness and a high proportion of hearing people are fluent in both a
spoken language and a sign language (see chapter 24, Shared Sign Languages, for
further discussion).

3. Contact between spoken languages and sign languages

Sign language researchers in the 1970s and 1980s, noting how people’s signing changed
in different contexts, drew on the sociolinguistic literature to explain this phenomenon.
Having observed influence from English on American Sign Language (ASL) to varying
degrees, Stokoe (1969) proposed that this be characterised as a form of diglossia. Clas-
sically, diglossia refers to communities where there are High and Low varieties of a
single language, used in different settings, for example in Switzerland, where Swiss
German (Low) and Standard German (High) are both in use. In such communities,
the Low variety is used for everyday communication, while the High variety is used in
literature and formal education (Ferguson 1959). Fishman (1967) extended this to ad-
dress the relationship between diglossia and bilingualism. Woodward (1973) described
a ‘deaf diglossic continuum’ to reflect the variable mix of ASL and English found in
the American Deaf community, using the term ‘Pidgin Signed English’ to refer to the
variety found in the middle of the continuum. Deuchar (1984) applied Woodward’s
deaf diglossic continuum to the British Deaf community but contended that this is an
oversimplification of the language contact phenomena.
Contemporary with Woodward and Stokoe’s work, Tervoort (1973) argued that un-
der the classic definition of diglossia, the High and Low forms had to be varieties of
the same spoken language. Since ASL and English were two different languages in
contact, it would be more appropriate to describe the Deaf community as a bilingual
community. Therefore if diglossia existed, it was not between ASL and English, but
rather between ASL and manual varieties of English, sometimes called Manually
Coded English (MCE), which seek to represent the grammar of English in manual
form. The modality differences between signed and spoken languages also render the
diglossia model problematic in this contact situation. Cokely (1983) moved on from
the diglossia model and described how interaction between fluent Deaf signers and
hearing learners of sign language results in ‘foreigner talk’. Lucas and Valli (1992)
proposed the term ‘contact signing’, and this is now generally used to refer to mixes
between a signed and spoken language. The prevailing view nowadays is that the Deaf
community is a bilingual community with individual Deaf people having varying de-
grees of fluency in the signed and spoken languages of the community.

3.1. Pidgins

A pidgin is a simplified language which arises from contact between two languages,
and which is not a stable variety of language. A creole is formed when a pidgin is
nativized, that is, acquired by children as a first language. Creoles often have grammar
different from the languages that they are derived from, as well as some evidence of
phonological and semantic shift (Hall 1966). Fischer (1978) pointed out a number of
linguistic and socioeconomic similarities between pidgin forms resulting from contact
between sign language and spoken language and pidgins and creoles resulting from
contact between spoken languages (see chapter 36 for further discussion of creolisa-
tion). Woodward (1973, 1996) proposed the concept of a Pidgin Signed English which
included grammatical structures which were reduced and mixed from ASL and English,
along with new structures which did not originate from either ASL or English. Because
pidgins are the result of language contact and creoles are learnt as a first language, the
age of acquisition and the context of language use can influence whether a Deaf person
uses a pidgin form of sign language or a sign language (Mayberry/Fischer/Hatfield
1983).
However, there are significant differences between the contexts in which spoken
language pidgins arise, and those described for sign language-spoken language contact:
for example, the people who mix signed and spoken languages regularly tend to be
fluent users of both a signed and spoken language (Johnston/Schembri 2007). Varieties
arising spontaneously are now referred to as contact signing (Lucas/Valli 1992), while
terms such as Pidgin Signed English and Manually Coded English (Bornstein 1990;
Schick 2003) are used for manual representations of spoken English which often use
additional signs created to represent English function words.

3.2. Code-switching and code-mixing

Of all the possible forms of interference between two languages, code-switching and
code-mixing are the most studied (Thomason 2001) and refer to the use of material
(including vocabulary and grammar) from more than one language within a conversa-
tion. With respect to contact between sign language and spoken language, code-mixing
and code-switching are seen as context- and content-dependent (Ann 2001; Kuntze
2000; Lucas/Valli 1992). Code switching occurs inter-sententially (switching at a sen-
tence boundary) while code-mixing occurs intra-sententially. However, Ann (2001)
points out that code-switching and code-mixing in sign language-spoken language con-
tact would require a person to stop signing and start speaking or vice versa. This hardly
ever occurs in communication between individuals who are bilingual in both a spoken
and sign language (Emmorey et al. 2008).

3.3. Code-blending

Myers-Scotton (1993) proposed the matrix language-frame model which takes into
account the languages that play a part in code-switching and code-mixing; the most
dominant language in the sentence is the matrix language (ML) while the other lan-
guage is called the embedded language (EL). Romaine (1995) describes how in intense
language contact a third language system may emerge which shows properties not
found in either of the input languages. In relation to sign languages, Lucas and Valli
(1992) discuss the existence of a third system, which is neither ASL nor English and
in which phonological, morphological, syntactic, lexical, and pragmatic features are
produced simultaneously. In this system, stretches of discourse cannot be assigned
either to ASL or to English, as they combine elements of both languages and also
include some idiosyncratic characteristics. This is known as code-blending.
Code-blending in sign language-spoken language contact has unique properties be-
cause of the different language modalities involved. Because the articulators for spoken
languages and sign languages are different, it is possible to use both types of articula-
tors at the same time. This is not only found in contact between spoken language-
dominant and sign language-dominant signers, but also between native signers who are
also fluent in a spoken language. Emmorey, Borinstein, and Thompson (2005) discuss
the presence of ‘code-blending’ in bimodal bilingual interactions. Van den Bogaerde
(2000) also found this phenomenon in interactions between deaf adults and hearing
children. Emmorey et al. (2008) report that full switches between languages in ASL-
English bilinguals are exceptional because the different modalities allow for the simul-
taneous production of elements of both languages. In a study designed to elicit
language mixing from hearing native signers, the predominant form of mixing was
code-blends (English words and ASL signs produced at the same time). They also
found that where ASL was the matrix language, no single-word code-blends were pro-
duced.
Baker and van den Bogaerde (2008), in an investigation of language choice in Dutch
families with Deaf parents and deaf or hearing children, found that code-blending
varies, depending on which is the matrix (or base) language. Both the Emmorey et al.
study and Baker and van den Bogaerde’s research contrast with Lucas and Valli’s
(1992) claim that code-blending is a third system and that there is no matrix language.
The examples in (1) to (4) illustrate the various types of code-blending occurring
with different matrix languages. In (1), Dutch is the matrix language: the utterance is
articulated fully in Dutch, but the verb vallen (‘fall’) is accompanied by the correspond-
ing sign from Sign Language of the Netherlands (NGT). Example (2) shows the reverse
pattern; the utterance is expressed in NGT, but the final sign blauw (‘blue’) is accom-
panied by the corresponding Dutch word (Baker/van den Bogaerde 2008, 7 f.).

(1) Dutch matrix language [Dutch/NGT]
Signed vallen
Spoken die gaat vallen
that goes fall
‘That [doll] is going to fall.’

(2) NGT matrix language
Signed index jas blauw
Spoken blauw
coat blue
‘He has a blue coat.’

Example (3) is different from (1) and (2) in that both spoken and signed elements
contribute to the meaning of the utterance; note the Dutch verb doodmaken (‘kill’)
accompanying the sign schieten (‘shoot’). Thus the sign specifies the meaning of the
verb (Baker/van den Bogaerde 2008, 9). It is also noteworthy that in the spoken utter-
ance, the verb does not occupy the position it would usually occupy in Dutch (the
Dutch string would be (De) politie maakt andere mensen dood); rather, it appears
sentence-finally, as is common in NGT. Baker and van den Bogaerde (2008) refer to
this type as ‘mixed’ because there is no clearly identifiable matrix language. The same
is true for (4), but in this example, a full blend, all sentence elements are signed and
spoken (van den Bogaerde/Baker 2002, 191).

(3) Mixed (no matrix language) [Dutch/NGT]
Signed politie ander mensen schieten
Spoken politie andere mensen doodmaken
police other people shoot/kill
‘The police shot the other people.’
(4) Full blending of Dutch and NGT
Signed boek pakken
Spoken boek pakken
book fetch
‘I will fetch the book.’

Code-blends were found both in mothers’ input to their children and in the children’s
output. NGT predominated as the matrix language when Deaf mothers communicated
with their deaf children, while spoken Dutch was more often used as the matrix lan-
guage with hearing children. The hearing children used all four types of code-blending
whereas the deaf children tended to use NGT as a matrix language (van den Bogaerde/
Baker 2005). Code-blending took place more often with nouns than with verbs.
Bishop and Hicks (2008) investigated bimodal bilingualism in hearing native signers,
describing features in their English that are characteristic of sign languages but are not otherwise found in English. These hearing native signers also combined features of both ASL
and English, illustrating their fluent bilingualism and shared cultural and linguistic
background.
As mentioned before, Emmorey et al. (2008) found that adult bimodal bilinguals
produced code-blending much more frequently than code-switching. Where code-
blending occurred, semantically equivalent information was provided in the two lan-
guages. They argue that this challenges current psycholinguistic models of bilingualism,
because it shows that the language production system does not require just one single
lexical representation at the word level.
Although there are independent articulators (hands for ASL and mouth for Eng-
lish), two different messages are not produced simultaneously ⫺ in line with Levelt’s
(1989, 19) constraints which prevent the production or interpretation of two concurrent
propositions. However, there are disagreements about whether mouthing (unvoiced
articulation of a spoken word with or without a manual sign; see section 3.4.2 below)
is a case of code-blending or whether code-blending only occurs when both English
and ASL become highly active (see Vinson et al. 2010).
There are many linguistic and social factors that trigger code-blending, with code-
blending having the same social and discourse function for bimodal bilinguals that
code-switching has for unimodal bilinguals (Emmorey et al. 2008). Triggers previ-
ously identified for code-switching include discourse and social functions, such as
identity, linguistic proficiency, signaling topic changes, and creating emphasis (Ro-
maine 1995). Nouns generally switch more easily than verbs; however, Emmorey et
al. (2008) found that in a single sign code-blend or code-mix, it was more likely that
ASL verbs were produced. They explain this by noting that it is possible to articulate
an ASL verb and to produce the corresponding English verb with tense inflection at
the same time.

3.4. Borrowing from spoken language to sign language

Thomason and Kaufman (1988) define ‘borrowing’ as the incorporation of foreign
features into a group’s native language. Lexical borrowing generally occurs when
speakers in contact with another more dominant language perceive a gap or a need
for reference to new or foreign concepts in their first language; the outcome is to
expand the lexicon, or to create substitutes for existing words.
Battison (1978) is the first major study of lexical borrowing into ASL from English.
He describes how fingerspelled words are restructured and borrowed and argues that
this restructuring and borrowing is no different from that which occurs between spoken
languages. McKee et al. (2007) describe how ‘semantic importation’ of spoken lexical
items into sign languages has specific features arising from their modality difference:
borrowing generally occurs through mechanisms such as fingerspelling, mouthing, ini-
tialized sign formations, and loan translation. Foreign forms that combine structural
elements from two languages may be described as hybrids: ‘Māoridom’, which refers
to the Māori people, their language, and culture, is an example in New Zealand Eng-
lish, while initialized signs and the co-articulation of a manual sign with a mouthing
specifying the meaning of the sign are forms of hybrid loans commonly found in sign
languages including New Zealand Sign Language (NZSL).
Two social preconditions for borrowing between languages are extended social con-
tact and a degree of bilinguality in speakers (Thomason/Kaufman 1988). In language
contact settings, bilingual individuals are instrumental in introducing new usages and
coinages from a second language to the community, which are then transmitted to
monolingual speakers who would not otherwise have access to them. As for the New
Zealand situation, an important factor in contact between Te Reo Māori and NZSL is
the emergence of bilingual individuals and of domains where the two languages are in
use by Deaf and hearing participants. Māori sign language interpreters, and other hear-
ing Māori with NZSL skills, have in some instances been key agents of motivating,
coining, and disseminating contact forms (McKee et al. 2007). Exposure to a second
language, resulting in indirect experience of that language, rather than actual bilingual-
ism, can be sufficient to prompt lexical borrowing. This describes the circumstances of
Māori Deaf themselves, who have created contact sign forms as a result of indirect
exposure to Te Reo Māori, rather than through direct use of it as bilinguals.

3.4.1. Fingerspelling

Fingerspelling is the use of a set of manual symbols which represent letters in a written
language (Sutton-Spence 1998). There are many different manual alphabets in use
around the world, some of which are two-handed (e.g. the system used in the UK) and
others which are one-handed (e.g. the systems used in the US and the Netherlands)
(Carmel 1982). Fingerspelling is treated differently by different researchers; some con-
sider it as part of the sign language, while others see it as a foreign element coming
from outside the core lexicon. Battison’s (1978) study of loan forms from fingerspelling
was based on the premise that fingerspelled events were English events. Other re-
searchers, such as Davis (1989), have argued that fingerspelling is not English. Davis
goes on to argue that fingerspelling is an ASL phonological event because ASL mor-
phemes are never borrowed from the orthographic English event; they are simply used
to represent the orthographic event. Loans from fingerspelling are restructured (Lucas/
Valli 1992, 41) to fit the phonology of the sign language. Sutton-Spence (1994) discusses
fingerspellings and single manual letter signs (SMLS) as loans from English, whatever
their form or degree of integration into British Sign Language (BSL). The articulatory
characteristics of the fingerspelled word, the phonological and orthographic character-
istics of the spoken and written word, and the phonological characteristics of the sign
language all influence how words are borrowed and in what form.
Quinto-Pozos (2007) views fingerspelling as one of the points of contact between a
signed and a spoken language, with fingerspelling available as a way of code-mixing.
Waters et al. (2007, 1287) investigated the cortical organization of written words, pic-
tures, signs, and fingerspelling, and whether fingerspelling was processed like signing
or like writing. They found that fingerspelling was processed in areas in the brain
similar to those used for sign language, and distinct from the neural correlates involved
in the processing of written text.
Although the written form of spoken language can be a source for borrowing of
vocabulary through fingerspelling, Padden and LeMaster (1985), Akamatsu (1985),
and Blumenthal-Kelly (1995) have all found that children recognize fingerspelled
words in context long before the acquisition of fingerspelling, and so those finger-
spelled words are considered signs. Additionally, lexical items can be created, according
to Brentari and Padden (2001) and Sutton-Spence (1994), through the compounding of fingerspelling and signs, for example, fingerspelled -p- + mouth for Portsmouth.
The American manual alphabet is one-handed; the British manual alphabet is two-
handed. In both sign languages, fingerspelling can be used to create loan signs. How-
ever, there is an influence of the use of two hands for fingerspelling on loan formation.
In a corpus of 19,450 fingerspelled BSL items, Sutton-Spence (1998) found that very
few were verbs and most were nouns. There are various possible reasons, including the
influence of word class size on borrowing frequency: nouns make up 60 % and verbs
make up 14 % of the vocabulary. However, she also suggests that the difference might
be due to phonotactic reasons. In order to add inflection, fingerspelled loan verbs
would have to move through space while simultaneously changing handshapes; this,
however, would violate phonotactic rules of BSL relating to the movement of two
hands in contact with each other.
There is a process of nativization of fingerspelling (Kyle/Woll 1985; Sutton-Spence
1994; Cormier/Tyrone/Schembri 2008), whereby a fingerspelled event becomes a sign.
This occurs when (i) forms adhere to phonological constraints of the native lexicon,
(ii) parameters of the forms occur in the native lexicon, (iii) native elements are added,
(iv) non-native elements are reduced (e.g. letters lost), and (v) native elements are
integrated with non-native elements (Cormier/Tyrone/Schembri 2008).
Brennan, Colville, and Lawson (1984) discuss the borrowing of Irish Sign Language
(Irish SL) fingerspelling into BSL by Catholic signers in the west of Scotland. Johnston
and Schembri (2007) also mention signs with initialization from the Irish manual alpha-
bet, borrowed into Auslan, although this is no longer a productive process. Initializa-
tion is widely seen in sign languages with a one-handed manual alphabet and refers to
a process by which a sign’s handshape is replaced by a handshape associated with (the
first letter of) a written word. For example, in ASL, signs such as group, class, family,
etc., all involve the same circular movement executed by both hands in neutral space,
but the handshapes differ and are the corresponding handshapes from the manual
alphabet: -g-, -c-, and -f-, respectively. Machabée (1995) noted the pres-
ence of initialized signs in Quebec Sign Language (LSQ), which she categorized into
two groups: those realized in fingerspelling space or neutral space, accompanied by no
movement or only a hand-internal movement, and those which are realized as natural
LSQ signs, created on the basis of another existing but non-initialized sign, through a
morphological process. Initialized signs are rare in sign languages using a two-handed
alphabet; instead SMLS are found. In contrast to initialized signs, SMLS are not based
on existing signs; rather, they only consist of the hand configuration representing the
first letter of the corresponding English word to which a movement may be added
(Sutton-Spence 1994).
Loans from ideographic characters are reported, for example in Taiwanese Sign
Language (TSL) (Ann 2001, 52). These are either signed in the air or on the signer’s
palm. Interestingly, these loans sometimes include phonotactic violations, and hand-
shapes which do not exist in TSL appear in some character loan signs.
Parallels may be seen in the ‘aerial fingerspelling’ used by some signers in New
Zealand. With aerial fingerspelling, signers trace written letters in the air with their
index finger, although this is only used by older people and does not appear in the
data which formed the basis of the NZSL dictionary (Dugdale et al. 2003, 494).

3.4.2. Mouthing

In the literature, two types of mouth actions co-occurring with manual signs are usually
distinguished: (silent) mouthings of spoken language words and mouth gestures, which
are unrelated to spoken languages (Boyes-Braem/Sutton-Spence 2001). Mouthing
plays a significant role in contact signing (Lucas/Valli 1989; Schermer 1990). There is,
however, disagreement about the role of mouthing in sign languages: whether it is a
part of sign language or whether it is coincidental to sign language and reflects bilin-
gualism (Boyes-Braem/Sutton-Spence 2001; Vinson et al. 2010).

Schermer (1990) is the earliest study of mouthing, investigating features of the rela-
tionship between NGT and spoken Dutch. Her findings indicate that the mouthing of
words (called ‘spoken components’ in her study) has two roles: to disambiguate mini-
mal pairs and to specify the meaning of a sign. She found differences between signers,
with age of acquisition of a sign language having a strong influence on the amount
of mouthing.
Schermer described three types of spoken components: (i) complete Dutch lexical
items unaccompanied by a manual sign; these are mostly Dutch prepositions, function
words, and adverbs, (ii) reduced Dutch lexical items that cannot be identified without
the accompanying manual sign, and (iii) complete Dutch lexical items accompanying
a sign, which have the dual role of disambiguating and specifying the meaning of signs.
She also mentions a fourth group of mouthings which are both semantically and syntactically redundant. Example (5a) illustrates type (ii); here the mouthing is reduplicated (koko) in
order to be synchronized with the repeated movement of the sign koken (‘to cook’).
A mouthing of type (iii) is shown in (5b). This example is interesting because the sign
koningin (‘queen’) has a double movement and is accompanied by the corresponding
Dutch word koningin, which, however, is not articulated in the same way as it would
usually be in spoken Dutch; there are three syllables in the Dutch word and the second
syllable is less stressed so that the last syllable coincides with the second movement of
the sign (Schermer 2001, 276).

/koko/ /koningin/
(5) a. koken b. koningin [NGT]
‘to cook’ ‘queen’

Sutton-Spence and Woll (1999, 83) and Johnston and Schembri (2007, 185) also refer
to mouthing as providing a means of disambiguating between SMLS ⫺ in Auslan, the
signs geography and garage, for example, can be disambiguated by mouthing. In an-
other study, Schembri et al. (2002) found that more noun signs had mouthed compo-
nents than verb signs, and this was also reported for German Sign Language (DGS,
Ebbinghaus/Hessmann 2001), Swiss-German Sign Language (SGSL, Boyes-Braem
2001), and Sign Language of the Netherlands (NGT, Schermer 2001). Moreover, Ho-
henberger and Happ (2001) report differences between signers: some signers used ‘full
mouthings’, where strings of signs are accompanied by mouthings, while others used
‘restricted mouthings’, where mouth gestures predominate and signs are only select-
ively accompanied by mouthings. Bergman and Wallin (2001) suggest that mouthings
follow a hierarchical structure similar to other components of spoken and sign lan-
guages.

3.4.3. Loan translations and calques

Sign languages borrow extensively from spoken languages (Johnston/Schembri 2007),
creating calques such as support+group and sports+car. In some cases, a loan transla-
tion in BSL such as break+down exists alongside a native sign breakdown. Brentari
and Padden (2001) discuss ASL examples such as dead+line and time+line. Calques
can include semantically incongruous but widely used forms such as baby+sit. Loan
translations can also include compounds composed of a native sign and a fingerspelled
form such as dead+-e-n-d-.

3.5. Borrowing from the gestures of hearing communities

Gesture is universally used within hearing communities. Co-speech gestures include
deictic gestures (pointing), referential gestures (iconically motivated gestures), and em-
blems (gestures highly conventionalised within a community (Kendon 2004); see chap-
ter 27 for further discussion), and all of these often find their way into sign languages,
becoming linguistic elements in the process (see chapter 34, Lexicalisation and Gram-
maticalisation). Elements borrowed into sign languages include both manual and non-
manual gestures. One major group consists of manual emblems, that is, conventional,
culture-specific gestures. Manual emblems can be lexicalized (e.g. good (thumb up) in
BSL and many other sign languages; hungry in Italian Sign Language (LIS); yummy
in NGT). Deictic manual gestures such as points can be grammaticalized (for example,
in pronominal forms); and non-manual gestures may become markers with a linguistic
function. Pyers and Emmorey (2008), for instance, found that non-linguistic facial ex-
pressions such as brow movement, commonly used by hearing people in questions and
“if-then” sentences, appear with linguistic function in sign languages. Antzakas (2006)
suggests that the backwards head tilt used by hearing communities in the eastern Medi-
terranean area as a gesture meaning ‘No’ ⫺ contrasting with the headshake gesture
used in Northern Europe (Antzakas/Woll 2002) ⫺ has been borrowed and grammati-
calized as a negation marker in Greek Sign Language (GSL).
Cultural factors in the relationship between the gestures of the hearing community
and the signs of the Deaf community were explored by Boyes-Braem, Pizzuto, and
Volterra (2002). They found that Italian non-signers were better than signers and
non-signers from other countries at guessing the meanings of signs rooted in Italian
culture, that is, manual forms which also occurred as referential gestures and emblems
in Italian co-speech gesture (also see Pizzuto/Volterra 2000).

4. Contact between sign languages


Contact between two sign languages results in similar phenomena to those that occur
when two spoken languages come into contact, particularly with respect to interference
and code-switching (Lucas/Valli 1989, 1992; Quinto-Pozos 2008). Many of the processes
discussed above relating to the outcomes of contact between a sign language and a spo-
ken language are also found in sign language to sign language contact. However, there
are also some interesting differences. Code-switching between a sign language and a
spoken language (Quinto-Pozos 2008, 2009) raises the issue of modality differences (see
section 3.3), an issue that does not arise in code-switching between two sign languages.
To date, only a few studies have focussed on contact between two sign languages.
This may be due to the fact that in order to investigate contact between two sign
languages, a detailed description of each of the sign languages is necessary, that is,
a description of their individual phonetic, phonological, morphological, and syntactic
structures as well as the extent to which these differ between the two languages. How-
ever, a few studies of borrowing between two sign languages exist. Meir and Sandler
(2008), for instance, note how signs in Israeli Sign Language (Israeli SL) are borrowed
from other sign languages, brought by immigrants (e.g. from Germany and Russia).
Using Muysken’s (2000) typology, Adam (2012) examined the contact between dialects
of BSL (including Auslan) and dialects of Irish SL (including Australian Irish Sign
Language) and found examples of all three types of code-mixing in this typology,
although congruent lexicalisation was the most common form of code-mixing. Valli and
Lucas (2000) discuss how contact between two sign languages can result not only in
lexical borrowing, but also in code-switching, foreigner talk, and interference, as well
as in pidgins, creoles, and mixed systems.

4.1. International Sign


Deaf people in the Western and Middle Eastern world have gathered together using
sign language for at least 2,000 years (Woll/Ladd 2003). The international Deaf com-
munity is highly mobile and in the 21st century there are regular international events,
including the World Federation of the Deaf Congresses, the Deaflympics, and other
international and regional events.
Cross-national signed communication was first reported in the early 19th century
(Laffon de Ladébat 1815; Murray 2009). Laffon de Ladébat describes the meeting of
Laurent Clerc with the deaf children at the Braidwood school in London:

As soon as Clerc beheld this sight [the children at dinner] his face became animated; he
was as agitated as a traveller of sensibility would be on meeting all of a sudden in distant
regions, a colony of his own countrymen. […] Clerc approached them. He made signs
and they answered him by signs. This unexpected communication caused a most delicious
sensation in them and for us was a scene of expression and sensibility that gave us the
most heartfelt satisfaction. (Laffon de Ladébat 1815, 33)

This type of contact was not uncommon within Europe. The Paris banquets for deaf-
mutes (sic) in the 19th century are another example of the coming together of Deaf
people in a transnational context:

There were always foreign deaf-mutes in attendance, right from the first banquet. At the
third, there were deaf-mutes from Italy, England, and Germany. […] It seems that many
of these foreign visitors […] were painters drawn to Paris to learn or to perfect their art,
and even to stay on as residents. Several decades later, deaf American artists […] and the
painter J. A. Terry (father of the Argentinean deaf movement) probably all participated
in the banquets. (Mottez 1993, 32)

Deaf-mute foreigners, in their toasts, never missed a chance to emphasize the universal
nature of signs, claiming that “it easily wins out over all the separate limiting languages of
speaking humanity, packed into a more or less limited territory. Our language encompasses
all nations, the entire globe.” (Mottez 1993, 36)

Such cross-linguistic communication can be regarded as a pidgin. In a sign pidgin,
Deaf people from different communities communicate by exploiting their awareness
of iconicity and their access to visual-spatial expression. Such pidgins, however, cannot
easily be used to convey complex meanings, especially to Deaf people who have had
little exposure to or practice with cross-linguistic communication. The description of
Clerc at the deaf school suggests a situational pidgin created between a Deaf adult
using French Sign Language (LSF) and BSL-using Deaf children. In the case of the
Paris banquets, it is not known whether a situational pidgin was used or whether, due
to the length of stay of the banqueters, LSF was the language of interaction.
Most of what is known about pidgins is based on contact between spoken
languages (Supalla/Webb 1995), and there has been relatively little research on the
linguistic outcome of contact between sign languages. However, there has been some
research on International Sign (IS), a contact variety which results from contact be-
tween sign languages. Use of the term International Sign, rather than International
Sign Language, emphasises that IS is not recognised as having full linguistic status.
Although used for communication across language boundaries, it is not comparable to
Esperanto in that it is not a planned language with a fixed lexicon and a fixed set of
grammatical rules. In the 1970s, Gestuno: International Sign Language of the Deaf was
an attempt by the World Federation of the Deaf to create a standardised artificial
international sign language, but this attempt was not successful (Murray 2009).
IS is a pidgin with no native signers or extended continuous usage (Moody 1994;
Supalla/Webb 1995). However, the structure of IS is much more complex than that
usually found in pidgins. In their study on the grammar of IS, Supalla and Webb report
finding SVO word order, five types of negation, and verb agreement, all used with
consistency and structural regularity (Supalla/Webb 1995, 348). This complexity is most
likely the result of the similarity of the grammatical and morphological structures of
the sign languages in contact ⫺ to the extent that IS has been considered a koine or
universal dialect. However, as Supalla and Webb also point out, studies of IS have
largely been concerned with contact among European sign languages (including ASL,
which is of European origin) and this may provide a misleading picture.
Unlike sign languages, IS does not have its own lexicon (Allsop/Woll/Brauti 1995).
Signers therefore have to decide whether to use signs from their own language, or
from another sign language, or whether to use mime, gesture, referents in the environ-
ment, or one of the few signs recognised as conventional in IS. Consequently, signers
of IS often chain together strings of signs and gestures to represent a single referent.
Thus signers of IS combine a relatively rich and structured grammar with a severely
impoverished lexicon (Allsop/Woll/Brauti 1995). This pattern is very different from
that found in spoken language pidgins, where the grammar is relatively more impover-
ished than the lexicon. Allsop, Woll, and Brauti also found that IS texts were longer
in duration and slower in production. This has implications for those seeking to provide
interpretation in IS at international meetings (McKee/Napier 2002; for issues in sign
language interpreting, also see chapter 36).
In fact, IS shares many features with foreigner talk: it incorporates the same types
of language modification native signers use when interacting with non-native signers,
such as slower rate of production, louder speech (or in the case of sign languages,
larger signs), longer pauses, common vocabulary, few idioms, greater use of gesture,
more repetition, more summaries of preceding utterances, shorter utterances, and more
deliberate articulation (Alatis 1990, 195).
The increasing mobility of deaf people within some transnational regions (e.g. Eu-
rope) has resulted in greater opportunities for contact with Deaf people from other
countries within those regions, greater knowledge of the lexicons of other sign languages,
and more frequent use of IS strategies. The effectiveness of IS is undoubtedly enhanced
by the historical relationships that many European sign languages have with each other.
It is unknown how effective IS is for signers from Asia and Africa or for users of village
sign languages. IS is, however, an effective mode of communication for many Deaf peo-
ple in transnational contexts and has been used as a ‘lingua franca’ at international
events such as the Deaflympics since their beginning with the first ‘Silent Games’ in 1924,
in which nine European countries took part. IS is also used by the World Federation of
the Deaf (WFD), a global lobbying organisation of Deaf communities, where interpreta-
tion into IS has been provided since 1977 (Scott-Gibson/Ojala 1994).
When two Deaf individuals meet, with similar experiences of interacting gesturally
with non-signers and with experience of using a language in the visual modality, a
situational pidgin can be created effectively. The more experience signers have in com-
municating with users of other sign languages, the greater their exposure to different
visually-motivated lexicons will be. This in turn will result in an increased number of
strategies and resources to create a situational pidgin. Strings of actions and descrip-
tions are presented from an experiential perspective for interlocutors to understand
context-specific meanings. This communication also relies heavily on the inferential
processes of the receiver to understand semantic narrowing or broadening.

4.2. Education and colonisation

The travels of Deaf people are not the only form of transnational contact within the
Deaf community. The history of deaf education and of the training of teachers of the
deaf is often linked with sign language contact. As McCagg (1993) notes, teachers of
the deaf for the Habsburg empire were trained in Germany. In Ireland, the education
system and Irish SL were originally influenced by BSL and later by LSF when Deaf
nuns came from France to establish a school for the deaf in Dublin (Burns 1998; Woll/
Elton/Sutton-Spence 2001). All three of these languages ⫺ BSL, Irish SL, and LSF ⫺
have influenced or been the progenitors of other sign languages. These influences have
spread around the world from Europe to the Americas and to the Antipodes.
The colonial influence on sign languages via educational establishments has in all
likelihood influenced IS. European sign languages were brought to many countries
across the globe. LSF, for instance, has had a profound influence on many sign lan-
guages, including ASL (Lane 1984) and Russian Sign Language (Mathur/Rathmann
1998), and its footprint spreads across Central Asia and Transcaucasia in the area of
the old Soviet empire (Ojala-Signell/Komarova 2006). Other colonial powers in Europe
influenced the education systems of the Americas (such as the influences of LIS and
Spanish Sign Language (LSE) on the sign language in Argentina), and DGS has had
an influence on Israeli SL as a result of post-war immigration (Namir et al. 1979).
Moreover, Irish SL and ASL have been brought to many countries in the southern
hemisphere through education and religious missionary work (e.g. use of ASL in deaf
education in Ghana). As well as lexical influences, European sign languages may also
influence the types of linguistic structures that we see in IS, including the metaphoric
use of space (for example, timelines).

5. Language attrition and death

All languages may undergo attrition and death. For many sign languages, death has
been and continues to be likely, given the status of sign languages around the world,
the history of oppression of Deaf communities, and technological advances (including
cochlear implants and genetic screening; Arnos 2002). Brenzinger and Dimmendaal
(1992) note that language death is always accompanied by language shift, which occurs
when a language community stops using one language and shifts to using another
language, although language shift does not always result in language death.
Language death is influenced by two factors:

(i) the environment, consisting of political, historical, economic, and linguistic reali-
ties;
(ii) the community with its patterns of language use, attitudes, and strategies.

Brenzinger and Dimmendaal (1992) observe that every case of language death is em-
bedded in a bilingual situation, which involves two languages, one of which is dying
and one of which continues. Sign languages are always under threat from the dominant
spoken language community, particularly in relation to education and intervention for
deaf children, contexts in which, over a long period of time, sign language has not been
seen to have a place. Besides direct pressures to abandon bilingualism in a sign lan-
guage and spoken language, in some countries communities have shifted from using
one sign language to another. For example, ASL has replaced indigenous sign lan-
guages in some African, Asian, and Caribbean countries (Schmaling 2001).
There is a limited literature on sign language attrition. Yoel (2007) identified a set
of linguistic changes in a study of Russian Sign Language (RSL) users who had immi-
grated to Israel. She found that all parameters of a sign underwent phonological inter-
ference. Errors made by the signers she studied were mainly miscues and temporary
production errors, which are explained as language interference between RSL and
Israeli SL. These changes can be seen as precursors of language attrition. In a study
of Maritime Sign Language in Canada, a sign language historically descended from
BSL, Yoel (2009) found that as a result of language contact with ASL, a shift of lan-
guage use had taken place, and that Maritime Sign Language is now moribund, with
only a few elderly users remaining.

6. Conclusion

The political and historical aspects of language use and their influence cannot be sepa-
rated from studies of languages in contact. In contact with spoken languages, the
favouring of forms of communication other than a sign language and the view that
sign language is not appropriate for some situations are the direct results of a sociolinguistic situation
in which sign languages have been ignored and devalued, and in which the focus has
traditionally been on the instruction and use of spoken languages. It is only if sign
languages become more highly valued, formally and fully recognised, and used in a
wide range of contexts of communication, that the outcomes of language contact in
the Deaf community will change.
The impact of contact with another language on a sign language also needs to be
addressed in terms of modality: cross-modal contact involving contact between a sign
language and a spoken language versus unimodal contact between two sign languages.
There is a larger body of research into the first type of contact, with new understand-
ings beginning to emerge. From earlier explorations of diglossia and pidginization,
researchers have moved towards the study of bimodal language contact and code-
blending, as well as other features of cross-modal language contact. With respect to
contact between two sign languages, further research is needed to fully understand
whether it parallels contact between two spoken languages, exhibiting features such
as code-switching, borrowing, language transfer, and interference. This new area of
research will contribute both to sociolinguistic theory and language processing re-
search.

7. Literature

Adam, Robert
2012 Unimodal Bilingualism in the Deaf Community: Contact Between Dialects of BSL and
ISL in Australia and the United Kingdom. PhD Dissertation, University College
London.
Akamatsu, C. Tane
1985 Fingerspelling Formulae: A Word is More or Less than the Sum of Its Letters. In:
Stokoe, William/Volterra, Virginia (eds.), Sign Language Research ’83. Silver Spring,
MD: Linstok Press, 126⫺132.
Alatis, James E.
1990 Linguistics, Language Teaching, and Language Acquisition: The Interdependence. In:
Georgetown University Round Table on Language and Linguistics (GURT) 1990. Wash-
ington, DC: Georgetown University Press.
Allsop, Lorna/Woll, Bencie/Brauti, Jon-Martin
1995 International Sign: The Creation of an International Deaf Community and Sign Lan-
guage. In: Bos, Heleen/Schermer, Trude (eds.), Sign Language Research 1994. Hamburg:
Signum, 171⫺188.
Ann, Jean
2001 Bilingualism and Language Contact. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign
Languages. New York: Cambridge University Press, 33⫺60.
Antzakas, Klimis
2006 The Use of Negative Head Movements in Greek Sign Language. In: Zeshan, Ulrike
(ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara
Press, 258⫺269.
Antzakas, Klimis/Woll, Bencie
2002 Head Movements and Negation in Greek Sign Language. In: Wachsmuth, Ipke/Sowa,
Timo (eds.), Gesture and Sign Language in Human-computer Interaction Berlin:
Springer, 193⫺196.
Arnos, Kathleen S.
2002 Genetics and Deafness: Impacts on the Deaf Community. In: Sign Language Studies
2(2), 150⫺168.
Baker, Anne/Bogaerde, Beppie van den
2008 Code-mixing in Signs and Words in Input to and Output from Children. In: Plaza-Pust,
Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism. Amsterdam: Benjamins,
1⫺27.
Battison, Robin
1978 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Bergman, Brita/Wallin, Lars
2001 A Preliminary Analysis of Visual Mouth Segments in Swedish Sign Language. In:
Boyes-Braem, Penny/Sutton-Spence, Rachel (eds.), The Hands are the Head of the
Mouth: The Mouth as Articulator in Sign Languages. Hamburg: Signum, 51⫺68.
Bishop, Michele/Hicks, Sherry
2008 Coda Talk: Bimodal Discourse Among Hearing, Native Signers. In: Bishop, Michele/
Hicks, Sherry (eds.), Hearing, Mother Father Deaf: Hearing People in Deaf Families.
Washington, DC: Gallaudet University Press, 54⫺98.
Blumenthal-Kelly, Arlene
1995 Fingerspelling Interaction: A Set of Deaf Parents and Their Deaf Daughter. In: Lucas,
Ceil (ed.), Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University
Press, 62⫺73.
Bogaerde, Beppie van den
2000 Input and Interaction in Deaf Families. PhD Dissertation, University of Amsterdam.
Utrecht: LOT.
Bogaerde, Beppie van den/Baker, Anne
2002 Are Young Deaf Children Bilingual? In: Morgan, Gary/Woll, Bencie (eds.), Directions
in Sign Language Acquisition, Amsterdam: Benjamins, 183⫺206.
Bogaerde, Beppie van den/Baker, Anne
2005 Code Mixing in Mother-Child Interaction in Deaf Families. In: Baker, Anne/Woll, Ben-
cie (eds.), Sign Language Acquisition. Amsterdam: Benjamins, 141⫺163.
Bornstein, Harry (ed.)
1990 Manual Communication: Implications for Education. Washington, DC: Gallaudet Uni-
versity Press.
Boyes-Braem, Penny/Pizzuto, Elena/Volterra, Virginia
2002 The Interpretation of Signs by (Hearing and Deaf) Members of Different Cultures. In:
Schulmeister, Ralf/Reinitzer, Heimo (eds.), Progress in Sign Language Research. In
Honor of Siegmund Prillwitz. Hamburg: Signum, 187⫺219.
Boyes-Braem, Penny/Sutton-Spence, Rachel (eds.)
2001 The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Languages.
Hamburg: Signum.
Brennan, Mary/Colville, Martin/Lawson, Lilian
1984 Words in Hand: A Structural Analysis of the Signs of British Sign Language. Edinburgh:
Edinburgh British Sign Language Research Project, Moray House College of Educa-
tion.
Brentari, Diane/Padden, Carol
2001 Native and Foreign Vocabulary in American Sign Language: A Lexicon with Multiple
Origins. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages: A Cross-
Linguistic Investigation of Word Formation. Mahwah, NJ: Lawrence Erlbaum, 87⫺119.
Brenzinger, Matthias/Dimmendaal, Gerrit
1992 Social Contexts of Language Death. In: Brenzinger, Matthias (ed.), Language Death.
Factual and Theoretical Explorations with Special Reference to East Africa. Berlin: Mou-
ton de Gruyter, 3⫺6.
Burns, Sarah E.
1998 Irish Sign Language: Ireland’s Second Minority Language. In: Lucas, Ceil (ed.), Pinky
Extension and Eye Gaze: Language Use in Deaf Communities. Washington, DC: Gallau-
det University Press, 233⫺274.
Carmel, Simon J.
1982 International Hand Alphabet Charts. Silver Spring, MD: National Association of the
Deaf.
Cokely, Dennis
1983 When Is a Pidgin Not a Pidgin? In: Sign Language Studies 38, 1⫺24.
Cormier, Kearsy/Tyrone, Martha/Schembri, Adam
2008 One Hand or Two? Nativisation of Fingerspelling in ASL and BANZSL. In: Sign Lan-
guage and Linguistics 11, 3⫺44.
Davis, Jeffrey
1989 Distinguishing Language Contact Phenomena in ASL Interpretation. In: Lucas, Ceil
(ed.), The Sociolinguistics of the Deaf Community. San Diego, CA: Academic Press,
85⫺102.
Deuchar, Margaret
1977 Sign Language Diglossia in a British Deaf Community. In: Sign Language Studies 17,
347⫺356.
Dorian, Nancy
1982 Defining the Speech Community in Terms of Its Working Margins. In: Romaine, Su-
zanne (ed.), Sociolinguistic Variation in Speech Communities. London: Edward Arnold,
25⫺33.
Dugdale, Patricia/Kennedy, Graeme/McKee, David/McKee, Rachel
2003 Aerial Spelling and NZSL: A Response to Forman (2003). In: Journal of Deaf Studies
and Deaf Education 8, 494⫺497.
Emmorey, Karen/Borinstein, Helsa/Thompson, Robin
2005 Bimodal Bilingualism: Code-blending between Spoken English and American Sign
Language. In: Cohen, James/McAlister, Tara/Rolstad, Kellie/MacSwan, Jeff (eds.), ISB4:
Proceedings of the 4th International Symposium on Bilingualism. Somerville, MA: Cas-
cadilla Press, 663⫺673.
Emmorey, Karen/Borinstein, Helsa/Thompson, Robin/Gollan, Tamar
2008 Bimodal Bilingualism. In: Bilingualism: Language and Cognition 11, 43⫺61.
Ferguson, Charles A.
1959 Diglossia. In: Word 15, 325⫺340.
Fischer, Susan
1978 Sign Language and Creoles. In: Siple, Patricia (ed.), Understanding Language through
Sign Language Research. New York: Academic Press, 309⫺331.
Fischer, Susan
1996 By the Numbers: Language-internal Evidence for Creolization. In: International Review
of Sign Linguistics 1, 1⫺22.
Fishman, Joshua
1967 Bilingualism with and Without Diglossia; Diglossia with and Without Bilingualism. In:
Journal of Social Issues 23(2), 29⫺38.
Grosjean, François
1982 Life with Two Languages: An Introduction to Bilingualism. Cambridge, MA: Harvard
University Press.
Hall, Robert A.
1966 Pidgin and Creole Languages. Ithaca, NY: Cornell University Press.
Hamers, Josiane/Blanc, Michel
2003 Bilinguality and Bilingualism. Cambridge: Cambridge University Press.
Hohenberger, Annette/Happ, Daniela
2001 The Linguistic Primacy of Signs and Mouth Gestures Over Mouthing: Evidence from
Language Production in German Sign Language (DGS). In: Boyes-Braem, Penny/Sut-
ton-Spence, Rachel (eds.), The Hands Are the Head of the Mouth: The Mouth as Articu-
lator in Sign Languages. Hamburg: Signum, 153⫺189.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge:
Cambridge University Press.
Kendon, Adam
2004 Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kuntze, Marlon
2000 Codeswitching in ASL and Written English Contact. In: Emmorey, Karen/Lane, Harlan
(eds.), The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and
Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 287⫺302.
Kyle, Jim/Woll, Bencie
1985 Sign Language: The Study of Deaf People and Their Language. Cambridge: Cambridge
University Press.
Laffon de Ladébat, Andre-Daniel
1815 Recueil des Définitions et Réponses les plus Remarquables de Massieu et Clerc, Sourds-
Muets, aux Diverses Questions qui leur ont été Faites dans les Séances Publiques de M.
l’Abbé Sicard à Londres [A Collection of the Most Remarkable Definitions and Answers
of Massieu and Clerc]. London: Cox and Baylis.
Lane, Harlan
1984 When the Mind Hears. New York: Random House.
Levelt, Willem J. M.
1989 Speaking: From Intention to Articulation. Cambridge, MA: MIT Press.
Lucas, Ceil/Valli, Clayton
1989 Language Contact in the American Deaf Community. In: Lucas, Ceil (ed.), The Socio-
linguistics of the Deaf Community. San Diego: Academic Press, 11⫺40.
Lucas, Ceil/Valli, Clayton
1991 ASL or Contact Signing: Issues of Judgment. In: Language in Society 20, 201⫺216.
Lucas, Ceil/Valli, Clayton
1992 Language Contact in the American Deaf Community. San Diego, CA: Academic Press.
Machabée, Dominique
1995 Description and Status of Initialized Signs in Quebec Sign Language. In: Lucas, Ceil
(ed.), Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University
Press, 29⫺61.
Mathur, Gaurav/Rathmann, Christian
1998 Why not “give-us”: an Articulatory Constraint in Signed Languages. In: Dively, Valerie/
Metzger, Melanie/Taub, Sarah/Baer, Anne-Marie (eds.), Signed Languages: Discoveries
from International Research. Washington, DC: Gallaudet University Press, 1⫺25.
Mayberry, Rachel/Fischer, Susan/Hatfield, Nancy
1983 Sentence Repetition in American Sign Language. In: Kyle, Jim/Woll, Bencie (eds.),
Language in Sign: An International Perspective. London: Croom Helm, 206⫺215.
McCagg, William
1993 Some Problems in the History of Deaf Hungarians. In: Vickery van Cleve, John (ed.),
Deaf History Unveiled. Washington, DC: Gallaudet University Press, 252⫺271.
McKee, Rachel/McKee, David/Smiler, Kirsten/Pointon, Karen
2007 Maori Signs: The Construction of Indigenous Deaf Identity in New Zealand Sign Lan-
guage. In: Quinto-Pozos, David (ed.), Sign Languages in Contact. Washington, DC:
Gallaudet University Press, 31⫺81.
McKee, Rachel/Napier, Jemina
2002 Interpreting into International Sign Pidgin: An Analysis. In: Sign Language & Linguis-
tics 5, 27⫺54.
Meir, Irit/Sandler, Wendy
2008 A Language in Space: The Story of Israeli Sign Language. New York: Lawrence Erl-
baum.
Mottez, Bernard
1993 The Deaf Mute Banquets and the Birth of the Deaf Movement. In: Fischer, Renate/
Lane, Harlan (eds.), Looking Back: A Reader on the History of Deaf Communities and
Their Sign Languages. Hamburg: Signum, 143⫺156.
Murray, Joseph
2009 Sign Languages. In: Iriye, Akira/Saunier, Pierre-Yves (eds.), The Palgrave Dictionary
of Transnational History. Basingstoke: Palgrave Macmillan, 947⫺948.
Muysken, Pieter
2000 Bilingual Speech. A Typology of Code-Mixing. Cambridge: Cambridge University Press.
Myers-Scotton, Carol
1993 Duelling Languages: Grammatical Structure in Codeswitching. Oxford: Oxford Univer-
sity Press.
Namir, Lila/Sela, Israel/Rimor, Mordecai/Schlesinger, Israel M.
1979 Dictionary of Sign Language of the Deaf in Israel. Jerusalem: Ministry of Social Welfare.
Ojala-Signell, Raili/Komarova, Anna
2006 International Development Cooperation Work with Sign Language Interpreters. In:
McKee, Rachel (ed.), Proceedings of the Inaugural Conference of the World Association
of Sign Language Interpreters, Worcester, South Africa, 31 October⫺2 November 2005.
Coleford, Gloucestershire: Douglas McLean, 115⫺122.
Padden, Carol/LeMaster, Barbara
1985 An Alphabet on Hand: The Acquisition of Fingerspelling in Deaf Children. In: Sign
Language Studies 47, 161⫺172.
Pizzuto, Elena/Volterra, Virginia
2000 Iconicity and Transparency in Sign Languages: A Cross-Linguistic Cross-Cultural View.
In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthol-
ogy to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum,
261⫺286.
Pyers, Jennie/Emmorey, Karen
2008 The Face of Bimodal Bilingualism: Grammatical Markers in American Sign Language
are Produced When Bilinguals Speak to English Monolinguals. In: Psychological Sci-
ence 19(6), 531⫺536.
Quinto-Pozos, David
2007 Outlining Considerations for the Study of Sign Language Contact. In: Quinto-Pozos,
David (ed.), Sign Languages in Contact. Washington, DC: Gallaudet University Press,
1⫺28.
Quinto-Pozos, David
2008 Sign Language Contact and Interference: ASL and LSM. In: Language in Society 37,
161⫺189.
Quinto-Pozos, David
2009 Code-Switching Between Sign Languages. In: Bullock, Barbara/Toribio, Jacqueline
(eds.), The Handbook of Code-Switching. Cambridge: Cambridge University Press,
221⫺237.
Romaine, Suzanne
1995 Bilingualism. Oxford: Blackwell.
Sankoff, Gillian
2001 The Linguistic Outcome of Language Contact. In: Trudgill, Peter/Chambers, Jack/Schil-
ling-Estes, Natalie (eds.), The Handbook of Sociolinguistics. Oxford: Blackwell, 638⫺
668.
Schembri, Adam/Wigglesworth, Gillian/Johnston, Trevor/Leigh, Greg/Adam, Robert/Barker, Roz
2002 Issues in the Development of the Test Battery for Australian Sign Language Morphol-
ogy and Syntax. In: Journal of Deaf Studies and Deaf Education 7, 18⫺40.
Schermer, Trude
1990 In Search of a Language. Delft: Eburon.
Schermer, Trude
2001 The Role of Mouthings in Sign Language of the Netherlands: Some Implications for
the Production of Sign Language Dictionaries. In: Boyes-Braem, Penny/Sutton-Spence,
Rachel (eds.), The Hands are the Head of the Mouth: The Mouth as Articulator in Sign
Languages. Hamburg: Signum, 273⫺284.
Schick, Brenda
2003 The Development of ASL and Manually-Coded English Systems. In: Marschark, Marc/
Spencer, Patricia (eds.), Oxford Handbook of Deaf Studies, Language, and Education.
New York: Oxford University Press, 219⫺231.
Schmaling, Constanze
2001 ASL in Northern Nigeria: Will Hausa Sign Language Survive? In: Dively, Valerie/Metz-
ger, Melanie/Taub, Sarah/Baer, Anne-Marie (eds.), Signed Languages: Discoveries from
International Research. Washington, DC: Gallaudet University Press, 180⫺196.
Scott-Gibson, Elizabeth/Ojala, Raili
1994 International Sign Interpreting. Paper Presented at the Fourth East and South African
Sign Language Seminar, Uganda.
Stokoe, William
1969 Sign Language Diglossia. In: Studies in Linguistics 21, 27⫺41.
Supalla, Ted/Webb, Rebecca
1995 The Grammar of International Sign: A New Look at Pidgin Languages. In: Emmorey,
Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erl-
baum, 333⫺352.
Sutton-Spence, Rachel
1994 The Role of the Manual Alphabet and Fingerspelling in British Sign Language. PhD
Dissertation, University of Bristol.
Sutton-Spence, Rachel
1998 Grammatical Constraints on Fingerspelled English Verb Loans in BSL. In: Lucas, Ceil
(ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities. Washington,
DC: Gallaudet University Press, 41⫺58.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Tervoort, Bernard
1973 Could There Be a Human Sign Language? In: Semiotica 9, 347⫺382.
Thomason, Sarah
2001 Language Contact: An Introduction. Washington, DC: Georgetown University Press.
Thomason, Sarah/Kaufman, Terrence
1988 Language Contact, Creolization, and Genetic Linguistics. Berkeley, CA: University of
California Press.
Valli, Clayton/Lucas, Ceil
2000 Linguistics of American Sign Language. Washington, DC: Gallaudet University Press.
Vinson, David/Thompson, Robin/Skinner, Robert/Fox, Neil/Vigliocco, Gabriella
2010 The Hands and Mouth Do Not Always Slip Together in British Sign Language: Dissoci-
ating Articulatory Channels in the Lexicon. In: Psychological Science 21(8), 1158⫺1167.
Waters, Dafydd/Campbell, Ruth/Capek, Cheryl/Woll, Bencie/David, Anthony/McGuire, Philip/
Brammer, Michael/MacSweeney, Mairead
2007 Fingerspelling, Signed Language, Text and Picture Processing in Deaf Native Signers:
The Role of the Mid-Fusiform Gyrus. In: Neuroimage 35, 1287⫺1302.
Weinreich, Uriel
1968 Languages in Contact: Findings and Problems. The Hague: Mouton.
Woll, Bencie/Adam, Robert
2012 Sign Language and the Politics of Deafness. In: Martin-Jones, Marilyn/Blackledge,
Adrian/Creese, Angela (eds.), The Routledge Handbook of Multilingualism. London:
Routledge, 100⫺116.
Woll, Bencie/Elton, Frances/Sutton-Spence, Rachel
2001 Multilingualism: The Global Approach to Sign Languages. In: Lucas, Ceil (ed.), The
Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press, 8⫺32.
Woll, Bencie/Ladd, Paddy
2003 Deaf Communities. In: Marschark, Marc/Spencer, Patricia (eds.), The Handbook of
Deaf Studies, Language and Education. Oxford: Oxford University Press, 151⫺163.
Woodward, James
1973 Some Characteristics of Pidgin Sign English. In: Sign Language Studies 3, 39⫺46.
Yoel, Judith
2007 Evidence for First-language Attrition of Russian Sign Language Among Immigrants
to Israel. In: Quinto-Pozos, David (ed.), Sign Languages in Contact. Washington, DC:
Gallaudet University Press, 153⫺191.
Yoel, Judith
2009 Canada’s Maritime Sign Language. PhD Dissertation, University of Manitoba.

Robert Adam, London (United Kingdom)

36. Language emergence and creolisation


1. Introduction
2. Creolisation: state of the art
3. Emerging sign languages
4. Structural similarities between creole and sign languages
5. Similarities in acquisition conditions
6. Creolisation and recreolisation revisited
7. Conclusion
8. Literature

Abstract

It has been argued that there are numerous interesting similarities between sign and
creole languages. Traditionally, the term ‘creolisation’ has been used to refer to the devel-
opment of a pidgin into a creole language. In this chapter, I take creolisation to apply
when children create a new language because they do not have access to a conventional
language model during acquisition. In this light, creolisation equals nativisation. Sign
and creole languages can be compared to each other because of certain structural similar-
ities as well as similarities in acquisition conditions. Crucial to the discussion here is the
role of children in language acquisition when there is no conventional language model.

1. Introduction
In the recent past, there has been renewed interest in the phenomenon of creolisation
in sign language circles (Kegl 2002; Aronoff/Meir/Sandler 2005). The term ‘creolisation’
has been traditionally used in the field of creole studies to refer to the development
of a pidgin into a creole language (Todd 1990; Bickerton 1977, 1981; Andersen 1983;
Hymes 1971), a definition which implies that as soon as a pidgin functions as a first
language for its speakers, it has become a creole. This link between pidgin and creole,
especially the question whether a creole always develops from a pidgin, has been one
of the central issues in the field of creole studies for decades. Several proposals were
put forward to account for the emergence of creole languages. Mühlhäusler (1986) and
Bickerton (1981), among others, proposed different scenarios for the genesis of creole
languages. What these proposals had in common was that they analysed creolisation from
a sociolinguistic perspective.
Within the field of sign linguistics, several researchers have pointed out a number
of interesting similarities between sign and creole languages. Early comparisons be-
tween them were based on studies investigating American Sign Language (ASL) (Fis-
cher 1978; Woodward 1978; Gee/Goodhart 1988). These scholars have argued that sign
languages have creole structures and that the structural properties shared by sign and
creole languages are not accidental. It is clear that there is no genetic affiliation be-
tween these two groups of languages given that they belong to two different modalities.
Language contact between these two groups of languages is also excluded as a possible
explanation for the observed similarities since most sign languages do not coexist
with creole languages. Hence, there is a need for an adequate explanation. In this
chapter, I discuss the similarities described so far between these two language groups
which can be explained by acquisition conditions. Compelling evidence comes from
different areas: homesigns, young sign languages (e.g. Nicaraguan Sign Language), and
acquisition studies of creole languages (Adone 2001b, 2008a; Adone/Vainikka 1999).
A few scholars have argued that creolisation takes place in the formation of sign lan-
guages (Kegl/Senghas/Coppola 1999; Adone 2001b; Aronoff/Meir/Sandler 2005) ⫺ an
issue that will be addressed in this chapter in more detail. Thus, the coupling of studies
on creole and sign languages, especially young sign languages, can be expected to pro-
vide a unique perspective on the early stages of language genesis and development.
This chapter has two goals. First, it analyses creolisation as a process observed in
the genesis of creole languages. Second, it discusses the development of sign languages
as a case of creolisation that bears similarities to the development of creole languages.
Here, I take a psycholinguistic stand on creolisation, thus allowing for cross-modal
comparison. Creolisation in the broader sense can be regarded as a process that takes
place under certain specific circumstances of acquisition, that is, when children are not
exposed to a conventional language model. Studies conducted by Newport (1999) and
others have shown that, in the absence of a conventional language model, children use
some of the other abilities they are equipped with to learn a language. They are even
capable of surpassing the inconsistent language model. They are also able to regularise
their input as seen in the case of children acquiring creole languages today (Adone
forthcoming). As the case of homesigners reveals, children are also capable of invent-
ing their own linguistic systems (Goldin-Meadow 2003). Taken together, these studies
bring to light the significant contribution of children to language acquisition.
This chapter is organised as follows: section 2 examines the term creolisation in
depth as it is understood in the field of creole studies. In section 3, we will introduce
two types of emerging sign languages, Deaf community sign languages and young vil-
lage sign languages. In section 4, the most important structural similarities between
creole languages and sign languages are presented while in section 5, the acquisition
conditions of creole languages and sign languages are compared. The emphasis here is
on the role that children play in the acquisition of these languages. Section 6 discusses
the implications of the findings with respect to creolisation and recreolisation. This
section highlights that humans have ‘language-ready’ brains. In the absence of ad-
equate input, humans ⫺ that is, children ⫺ still create a language system. Section 7 summarises
the main ideas presented in this chapter and concludes that creolisation takes place
across modalities.

2. Creolisation: state of the art

2.1. Creolisation within the field of creole studies

Without doubt, creolisation has been one of the most controversial issues within the
field of creole studies. In this section, we will address two central issues in the discus-
sion on creolisation that have influenced the field of creole studies, namely, the genesis
of creole languages per se, and the ‘exceptional’ status of creole languages.

2.1.1. The genesis of creole languages

Several accounts of creolisation have been articulated so far. In some of the earliest
ones, it is assumed that a pidgin precedes a creole, whereas according to other accounts,
a pidgin is not necessary for a creole to emerge.
The classical view that creolisation is a process that takes place when a pidgin be-
comes the mother tongue of its speakers has been supported by several scholars (Hall
1966; Todd 1990; among others). According to this view, a pidgin is a structurally and
lexically simplified system which emerges in a language contact situation, and eventu-
ally develops into a fully-fledged language, that is, a creole. As a simplified system, a
pidgin typically has the following characteristics: a very restricted lexicon, no inflec-
tional morphology, no functional categories, and a highly variable word order. In con-
trast, a creole system typically shows an elaborate lexicon, derivational and some in-
flectional morphology, functional categories, and an underlying word order. The creole
system, as compared to the pidgin one, is less variable. The creolisation process is
assumed to take place as soon as the first generation of children acquires the pidgin
as a first language.
Another view is that creolisation takes place when pidgins expand into creole lan-
guages without nativisation. Scholars such as Sankoff (1979), Chaudenson (1992), Sin-
gler (1992, 1996), Arends (1993), and McWhorter (1997) have argued against the nativ-
isation-based view of creolisation. Based on a detailed historical reconstruction,
scholars have argued that creolisation can be a gradual process taking place over sev-
eral generations of speakers (Arends (1993), Plag (1993), and Roberts (1995) for Ha-
waiian Creole; Baptista (2002) for Cape Verde Creole; Bollée (2007) for Reunion Cre-
ole). Under this view, creolisation equates to language change and the development of
grammatical structures in the formation of creoles can be accounted for by universal
principles of grammaticalisation (e.g. Plag 1993; Mufwene 1996). This view of creolisa-
tion can be assumed to account for the emergence of some creoles. More recently,
some scholars have discussed grammaticalisation and creolisation as processes that are
not mutually exclusive (Plag 1998; Adone 2009; among others).
Taking a universalist perspective on creolisation, Bickerton (1981, and subsequent
work) rejected this view and proposed that there is a break in the transmission between
the lexifier languages and the creoles. This has led him to argue that creolisation must
be abrupt if there is a breakdown in transmission of language. In his Language Biopro-
gram Hypothesis, Bickerton (1984) argues that adult pidgin speakers pass on their
pidgin to their children. These children, that is, the first generation of creole speakers,
are thus exposed to deficient input. As a result, they have to rely on their ‘Language
Bioprogram’ to invent language. The basic idea here is that creolisation is an instance
of first language acquisition in the absence of input. It is nativisation which takes place
as soon as a pidgin becomes the first language for its speakers (cf. Bickerton 1974;
Thomason/Kaufman 1988; Adone 1994, 2001b, 2003; Mufwene 1999).
On the basis of a series of well-documented socio-historical facts, Arends (1993),
Singler (1993, 1996), and others questioned the plausibility of Bickerton’s claim. Since
then the role of children and adults in the process of creolisation has become a subject
of considerable debate within the field. In the current debate, most scholars
adhere to the view that adults rather than children must have been the ones creolising
the system (e.g. Lumsden 1999; Lefebvre 1998; Siegel 1999; Veenstra 2003; Singler
1992). For other scholars (e.g. Bickerton 1984, 1990; Adone/Vainikka 1999; Adone
2001b; Bruyn/Muysken/Verrips 1999; Mufwene 1999), children were the ones mainly
responsible for creolisation. Following DeGraff (1999), nowadays many scholars within
the field assume that both adults and children must have contributed to the process of
creolisation (cf. Plag 1998; Baptista 2002; among others). However, little research has
been undertaken to provide evidence for either view. One reason for this is that historical
records on the development of most creole languages are insufficient, especially for
the early stages of formation within colonial plantation communities in the seventeenth
and eighteenth centuries. Most creole languages emerged in the context of European
colonial expansion, which took place from the sixteenth century onwards and was
characterised by rigid social stratification, master-slave relationships, and
plantation environments ⫺ all important socio-historical components typically present
in creolisation (Arends/Muysken/Smith 1995). It is this socio-historical dimension that
distinguishes creole languages from non-creole languages (see DeGraff 2003).

2.1.2. On the ‘exceptional’ status of creoles

The second question, which has been the subject of much controversy, concerns the exceptional
status of creoles. Muysken (1988) had already argued that creole languages are not excep-
tional. However, the debate peaked with McWhorter’s (2001) proposal in which he
presented arguments for a distinction between creole and non-creole languages.
McWhorter argues that creole grammars can be regarded as “the world’s simplest
grammars”, a view that has evoked much controversy among scholars in the field of
creole studies. Behind McWhorter’s view is the widespread assumption that
creole languages are unique in the sense that they form a distinct and fairly homoge-
neous group of languages with special features that set them apart from other lan-
guages. This view has been referred to in the literature as “creole exceptionalism”.
Numerous scholars within the field of creole studies have argued against creole excep-
tionalism or uniqueness (DeGraff 2003; Mufwene 2000; Ansaldo/Matthews/Lim 2007).
According to them, in terms of structure, creole languages are neither simple nor infe-
rior as compared to other languages. The only relevant difference between creole and
non-creole languages can be explained in terms of age. Creole languages, like sign
languages, are ‘young’ languages. Many of the creole languages emerged in the eight-
eenth century, and some are about 200 years of age. The recent emergence of these
languages has fortunately enabled us to observe some of the developmental stages
languages go through, and thus to gain insights into language emergence and develop-
ment.
As we will see in the following sections, creole languages are not structurally excep-
tional. In fact, the similarities between creole and sign languages are so striking that it
is unlikely that they are coincidental. One last point needs to be mentioned here.
Studies that have focussed on the socio-historical/cultural factors involved in creolisa-
tion have clarified what E-creolisation is. This process takes place on the societal level
within a specific time frame (i.e. colonisation) and within a specific type of society
(master-slave relation). E-creolisation will not be discussed further in this chapter be-
cause it is not relevant in the present context. However, we note that the deaf commu-
nities in Europe, for instance, went through periods of societal suppression due to
the widespread belief in oral education following the International Congress on the
Education of the Deaf held in Milan in 1880 (see chapter 38 for details).

2.2. Creolisation within the field of sign language research

As briefly mentioned in section 1, a number of scholars within the field of sign lan-
guage research have drawn attention to similarities between sign and creole languages
(Deuchar 1987; Fischer 1978; Woodward 1978; Gee/Goodhart 1988). Both Woodward
(1978) and Fischer (1978) have argued that ASL is the outcome of creolisation of
indigenous American gestures and sign systems and French Sign Language brought
to the United States by Laurent Clerc in the early nineteenth century. Fischer, for
example, compares ASL to creole languages and argues that ASL looks structurally
similar to creole languages (see section 4 for details). ASL, like Jamaican Creole, also
shows a three-‘lect’ distinction (acrolect, mesolect, and basilect). While Fischer focuses
on the syntactic similarities between ASL and creole languages as well as on the paral-
lels in the social situation, Woodward discusses the lexical change in ASL and creole
languages.
More recently, Fischer (1996) presents interesting evidence for creolisation in the
number system of present-day ASL. She argues that the ASL number system is based
on “a hybridisation of American and French numbers” (1996, 1). A closer look at ASL
number signs for 6⫺9 shows an innovation of the kind typically seen in creole languages,
in that these signs go beyond the languages that provide the lexical bases. Further evidence
for creolisation is seen in the randomness of the mixing between American and French
number forms.
Gee/Goodhart (1988) point out striking grammatical similarities between ASL and
creole languages such as (i) the use of topic-comment word order, (ii) lack of tense
marking, but a rich aspectual system, (iii) use of postverbal free morphemes for com-
pletive aspect, and (iv) absence of pleonastic subjects and passive constructions. Some
of these features will be discussed in section 4 (also see Kegl/Senghas/Coppola 1999;
Kegl 2002; Meier 1984).

3. Emerging sign languages


Meir et al. (2010a) make a distinction between two types of emerging sign languages:
Deaf community sign languages and village sign languages. According to this distinc-
tion, Deaf community sign languages arise when signers of different backgrounds are
brought together in cities or schools. Nicaraguan Sign Language, Israeli Sign Language,
and Mauritian Sign Language are typical examples of this type of sign language. Vil-
lage sign languages, on the other hand, emerge in small communities or villages in
which a number of deaf children are born. Transmission of these sign languages takes
place within and between families. Socially speaking, these villages are more or less
insular. In section 3.2, we will briefly address one recently described young village sign
language, Al-Sayyid Bedouin Sign Language (see chapter 24 for discussion of other
village sign languages).
Yet another type of sign language worth mentioning here is the alternate (or second-
ary) sign language. These sign languages have been described in the literature as
linguistic systems that are used by both deaf and hearing people. For the hearing com-
munity, these sign languages function as second languages and are mostly used for
cultural reasons (e.g. for ceremonies, when silence is requested in the presence of sa-
cred objects, or in the case of a death), thus serving a secondary purpose (Adone/
Maypilama 2012; Cooke/Adone 1994; Kendon 1988; Maypilama/Adone 2012). Because
of their apparently restricted use, alternate sign languages are generally not regarded as
full-fledged languages (see chapter 23, Manual Communication Systems: Evolution and
Variation, for further discussion).

3.1. Deaf community sign languages

Recent studies have documented the genesis of a few sign languages. Senghas (1995)
and Kegl, Senghas, and Coppola (1999) report one of the first cases of the birth of a
natural language, namely Nicaraguan Sign Language (ISN) in Nicaragua. Senghas
(1995) tracks the historical development of ISN, which started in the late 1970s in
Nicaragua when the government established special education programs for deaf chil-
dren in the capital Managua. Deaf children from scattered villages were sent to this
school and brought with them their individual homesign systems. Although teachers
insisted on an oral language approach, that is, the development of oral language skills
as well as lip-reading ability in Spanish, children used gestures and signs with each
other. In this environment of intense contact, the deaf children developed a common
system to communicate with each other, and within only a couple of years, a new sign
language emerged. These signs formed the input for new groups of deaf children enter-
ing school every year. Current work on ISN reveals the gradual development of gram-
matical features such as argument structure, use of space, and grammatical markings,
among others, across different cohorts of learners (Senghas 2000; Senghas et al. 1997;
Coppola/So 2005).
Adone (2007) investigated another interesting case of recent language genesis in
the Indian Ocean on the island of Mauritius. In Mauritius, the first school for the deaf
opened in September 1969 in Beau Bassin, one of the major cities on the island. Ac-
cording to information disclosed by the Society for the Welfare of the Deaf (Joonas,
p.c.), in the early seventies deaf children were recruited across the island and sent to
school in Beau Bassin. Children stayed in dormitories at school during the week and
were sent back to their villages to spend the weekends with their families. In 2004,
Adone and Gébert found several generations of Mauritian Sign Language (MSL) users.
In the same year, I discovered a small group of children in the north of
the island, in Goodlands, who were using a sign system different from that of the deaf
population in Beau Bassin. Given that these children did not have contact with the
deaf community (children and adults) in Beau Bassin, it seemed worthwhile to take a
closer look at them. There were around 30 children of different ages. The older chil-
dren between 6 and 7 years of age were taught to lip-read, read, and write by a teacher
who had no training in deaf education. The younger ones were allowed to play and
interact freely with each other as communication with the teachers was extremely diffi-
cult. Based on first-hand observations of the Mauritian situation, I proposed that this
system could easily be regarded structurally as a homesign system and that it provided
us with insights into the earliest stages in the formation of a sign language (Adone
2007, 2009). Extrapolating results from work done so far, it becomes clear that MSL,
in contrast to other established sign languages, has developed little morphology. This
is evidenced by the distribution of plain, spatial, and agreement verbs: there are fewer
agreement verbs than plain and directional verbs. Native signers use SVO order quite
frequently, but they do show variability.

3.2. Young village sign languages

Another extremely interesting case of an emerging sign language is seen in the devel-
opment of Al-Sayyid Bedouin Sign Language (ABSL). This sign language emerged in
the Al-Sayyid Bedouin community (Negev region, Israel), “a small, insular, endoga-
mous community with a high incidence of nonsyndromic, genetically recessive, pro-
found prelingual neurosensory deafness” (Aronoff et al. 2010, 134). According to re-
searchers, this sign language is approximately 70 years old (Sandler et al. 2005). The
sign language is remarkable for a number of reasons. First, while the two examples
discussed in the previous section illustrate clearly the case of children deprived of
exposure to language, who as a result invent a new system, ABSL exemplifies language
emergence within a village community with little influence from sign languages in the
environment. Second, ABSL appears to have a regular syntax ⫺ SOV and head-modi-
fier order (Sandler et al. 2005) ⫺ and regular compounding (Meir et al. 2010b), but
has no spatial morphology and has also been claimed to lack duality of patterning
(Aronoff et al. 2010).
Thus, these three studies on ISN, MSL, and ABSL, while tracking individual linguis-
tic developments, bring to light the various stages and mechanisms involved in the
creation of a new sign language.

4. Structural similarities between creole and sign languages

In this section, I focus on some key structural similarities between sign and creole
languages reported in the literature. A complete overview of the attested parallels
would go beyond the scope of this study. Therefore, I will address only five aspects:
word order, aspectual marking, reduplication, serial verb constructions, and morpho-
logical structure. At this point, it is important to establish a distinction that will be
relevant throughout the whole discussion, namely the distinction between ‘mature, es-
tablished’ and ‘young’ sign languages. This distinction is crucial because not only age
but also the degree of conventionalisation plays a role in the development and
life-cycles of languages.
The term ‘mature, established’ language is used mainly to refer to sign languages
which are reported to have a history of roughly 200 years, such as ASL, British
Sign Language (BSL), German Sign Language (DGS), and others. There are other
sign languages such as Adamorobe Sign Language (AdaSL) in Ghana (Nyst 2007) or
Yolngu Sign Language (YSL), an alternate sign language used in the Top End of the
Northern Territory of Australia, which, according to this criterion, are also mature, but
these languages seem to be linguistically different from the mature established sign
languages mentioned above. In contrast, the term ‘young’ sign language refers to sign
languages such as ISN, ABSL, Israeli Sign Language, and MSL, which all have a rela-
tively short history and are not yet (fully) established. As shown in section 3.1, in both
ISN and MSL, we are dealing with similar conditions playing a role in the genesis of
these languages.
Another crucial aspect in the discussion below is the non-relatedness of creole and
sign languages. Languages from these two groups are not genetically related to each
other and they generally do not have any close contact with each other to the extent
that they could influence each other directly. As a result, it is safe to assume that the
similarities found between these two language groups cannot be explained in terms
of genetic affiliation or language contact. Furthermore, creole languages are spoken
languages, thus belonging to the auditory-oral modality, whereas sign languages use the
visual-manual modality.

4.1. Word order

Many of the earlier studies on word order in both creole and sign languages concen-
trated on discourse notions such as topic and comment. Researchers in the early seven-
ties proposed that creole languages have no basic word order, and that the order of
sentence elements instead depended on discourse ⫺ an issue heavily debated in the
eighties. Over the years, an increasing body of studies on individual creoles has shown
that creole languages seem to have a basic word order in matrix and embedded senten-
ces, i.e. SVO, as well as hierarchical structure (Adone 1994; Baptista 2002; Veenstra
1996; Syea 1993). Interestingly, a similar debate took place in sign language linguistics
circles: while some early studies stressed the importance of discourse notions (e.g.
Ingram 1978), later research demonstrated for various sign languages that they do have
a basic word order ⫺ be it SVO or SOV (see chapter 12, Word Order, for discussion
and methodological problems).
Clearly, the issue of word order is closely related to information packaging, that is,
the organisation of sentences to convey information structure, such as topic and focus.
Topic and focus are relative terms used to refer to old and new information, respec-
tively (see chapter 21, Information Structure, for details). It has been claimed inde-
pendently that both creole and sign languages, even though they have basic word order,
make heavy use of topic-comment structures. Also, it has been observed for various
sign languages that elements may be repeated in sentence-final position in order to
foreground (i.e. focus) these elements. In ASL, for instance, wh-signs (1a), verbs (1b),
and quantifiers may be doubled (Petronio 1993; in Sandler/Lillo-Martin 2006, 417 f.).

wh
(1) a. who buy c-a-r who [ASL]
‘Who bought the car?’
hn
b. anne like ice-cream like
‘Anne likes ice cream.’

Similar doubling structures are also attested in various creole languages ⫺ however,
there is a restriction on the elements that can be doubled as well as on the position of
the doubled element. Generally, the doubled element appears sentence-initially. Inves-
tigating the phenomenon for Isle de France Creole (a French-based creole), Corne
(1999) refers to it as ‘double predication’. A look at the Mauritian Creole examples in
(2) shows that in this language, verbs can be doubled (2a) but wh-words cannot (2b).

(2) a. Galupe ki mo fin galupe [Mauritian Creole]
run that I asp run
‘I ran a lot.’
b. * kisana in aste loto la kisana?
who asp buy car det who
‘Who bought the car?’

These surface similarities between creole and sign languages, I believe, are best ex-
plained as resulting from their discourse-oriented character. These languages use pro-
sodic prominence when elements are focused.

4.2. Aspectual marking

The use of aspectual marking is another area of similarity between creole and sign
languages. Over the years, it has become clear that the majority of creole languages
have both tense (anteriority) and aspect (completion) markers. Furthermore, in the
past few decades, several studies have shown that, across creole languages, tense and
aspect markers were attested in the early stages of creolisation (Arends 1994; Bicker-
ton 1981; Bollée 1977, 1982; Baker/Fon Sing 2007; Corne 1999). Both tense and aspect
markers can be used with verbs and predicative adjectives in creole languages. The
prominence of aspectual marking in the creole TAM-system has led Bickerton (1981)
and others to argue that the system is primarily aspect-oriented.
Studies on sign languages also reveal that verbs and predicative adjectives may
inflect for aspect. Inflection for tense, however, is not attested. Klima and Bellugi
(1979) provide an overview of aspectual inflections in ASL, such as iterative, habitual,
and continuative (see also Rathmann 2005). Similar aspectual markings with the same
functions have been reported for other ‘mature’ as well as ‘young’ sign languages,
for example, BSL (Sutton-Spence/Woll 1999) and MSL (Adone 2007). Another very
interesting phenomenon is the use of the sign finish in ASL and other sign languages
to mark completive aspect (Fischer 1978; Rathmann 2005; see chapter 9, Tense, Aspect,
and Modality, for further discussion). Interestingly, we find a parallel development in
creole languages. Most creole languages have developed a completion marker for as-
pect which derives from the superstrate/lexifier languages involved in their genesis. In
the case of Mauritian Creole, French is the lexifier language and the aspectual marker
fin derives from the French verb finir (‘finish’; see example (3b) below). This develop-
ment is interesting for two reasons. First, in one of the emergent sign languages studied,
MSL, the sign finish is also used to mark completion across generations of signers.
Second, Adone (2008a) found that young homesigners overgeneralise the gesture ‘GO/
END/FINISH’ to end sentences in narratives. Taken together, this provides empirical
support for the view that aspectual marking, i.e. completion, is part of the basic set of
features/markings found in the initial stages of language genesis and development.
Further evidence comes from studies on first language acquisition of spoken languages
which indicate that cross-linguistically children seem to mark aspect first (Slobin 1985).

4.3. Reduplication

The next two structures to be discussed, namely reduplication and serial verb construc-
tions, have been selected because of their relevance in the current theoretical discus-
sion on recursion as a defining property of UG (Roeper 2007; Hauser/Fitch 2003). A
closer look at reduplication reveals that both language groups commonly make use of
reduplication, as seen in the following examples from spoken creoles (3a⫺c) and DGS
(4a⫺b) (in the sign language examples, reduplication is marked by ‘CC’).

(3) a. Olabat bin wokwok oldei [Ngukurr Kriol]
‘They were walking all day.’
b. … lapli lapli ki fin tonbe … [Mauritian Creole]
‘… there was a lot of rain …’
c. Ai bin luk munanga olmenolmen [Ngukurr Kriol]
‘I saw many white men.’
(4) a. night index1 driveCC [DGS]
‘I drove the whole night.’
b. garden childCC play
‘The children are playing in the garden.’

In both creole and sign languages, verb reduplication fulfils (at least) two functions:
(i) realisation of aspectual meaning ‘to V habitually, repeatedly, or continuously’, as in
(3a) and (4a); (ii) expression of intensive or augmentative meaning in the sense of
‘V a lot’, as in (3b). Such patterns are attested in various creoles including French-
based, Portuguese-based, and English-based creoles (Bakker/Parkvall 2005) as well as
in established and emerging sign languages (Fischer 1973; Senghas 2000).
Adone (2003) drew a first sketch of the similarities between creoles and sign lan-
guages with respect to reduplication to mark plurality, collectivity, and distribution
in nominals. Interestingly, among these three distinct functions, plurality (3c, 4b) and
collectivity are found to be widespread in both creole and sign languages (cf. Bakker/
Parkvall 2005; Pfau/Steinbach 2006). Given the extensive use of reduplication in these
two language groups, it is plausible to argue that (full) reduplication is a syntactic
structure that emerges early in language genesis because it is part of the basic set of
principles available for organising human linguistic behaviour, a view that has been
previously taken by several scholars (Bickerton 1981; Goldin-Meadow 2003; Myers-
Scotton p.c.).

4.4. Serial verb constructions

Serial verb constructions (SVCs) are typically defined as complex predicates containing
at least two verbs within a single clause. Classical examples come from Kwa languages
of West Africa, and the Austronesian languages of New Guinea and Malagasy. In the
field of creole studies, SVCs have long been regarded as evidence ‘par excellence’
for the substrate hypothesis according to which creole grammars reflect the substrate
languages that have been involved in the genesis of creole languages (e.g. Lefebvre
1986, 1991). While the role played by substrate languages cannot be denied, I believe
that universal principles operating in language acquisition are likely to offer a better
explanation of the formation of creole languages.
A look at SVCs shows that some properties can be regarded as core properties;
these include: (i) only one subject; (ii) no intervening marker of co-ordination or subor-
dination; (iii) only one negation with scope over all verbs; (iv) TAM-markers on either
one verb or all verbs; (v) no pause; and (vi) optional argument sharing (Veenstra 1996;
Muysken/Veenstra 1995).
Several types of SVCs have been distinguished in the literature. Here, however, I
will discuss only two types, which are found in both creole and sign languages (see
Adone 2008a): (i) directional SVCs involving verbs such as ‘run’, ‘go’, and ‘get’, as
illustrated by the Seychelles Creole example in (5a); (ii) benefactive SVCs involving
the verb ‘give’, as in the Saramaccan example in (5b) (Byrne 1990; in Aikhenvald
2006, 26):

(5) a. Zan pe tay Praslin al sers son marmay komela [Seychelles Creole]
Zan asp run Praslin go get 3poss child now
‘Zan is getting his child from Praslin now.’
b. Kófi bi bái dí búku dá dí muyé [Saramaccan]
Kofi tns buy the book give the woman
‘Kofi had bought the woman the book.’
(6) a. person limp-cllegs move-in-circle [ASL]
‘A person limping in a circle.’
/betalen/
b. please index1 pay index1 1give2 index2 pu [NGT]
‘I want to pay you (for it).’

Supalla (1990) discusses ASL constructions involving serial verbs of motion. In (6a),
the first verb expresses manner, the second one path (adapted from Supalla 1990, 134).
The NGT example in (6b), from Bos (1996), is similar to (5b). It is interesting to note
that the mouthing betalen (‘to pay’) stretches over both verbs as well as an intervening
index, which is evidence that the two verbs really form a unit (pu stands for ‘palm-up’).
Senghas (1995) reports on the existence of such constructions in her study on ISN.
Senghas, Coppola, Newport, and Supalla (1997) examine the development of word order
in ISN and show that first-generation signers have a rigid word order, with the two verbs
and the two arguments consistently interleaved in an N1V1N2V2 pattern (e.g. man push
woman fall). In contrast, the second generation of signers introduces patterns such as
N1N2V1V2 (man woman push fall) or N1V1V2N2 (man push fall woman). These
patterns illustrate that signers in the first generation have SVSV (not an SVC) while
second-generation signers prefer both SOVV and SVVO patterns. These latter struc-
tures display the defining core properties of SVCs, syntactically and prosodically.

4.5. Morphological structure − an apparent difference

Aronoff, Meir, and Sandler (2005) have argued that many sign languages, despite the
fact that they are young languages, paradoxically show complex morphology. Interest-
ingly, most of the attested morphological processes are simultaneous, i.e. stem-internal,
in nature (e.g. verb agreement and classifiers) while there is little concatenative
(sequential) morphology (which usually involves the grammaticalisation of free signs,
as e.g. in the case of the ASL zero-suffix). In an attempt to explain this paradox,
Aronoff, Meir, and Sandler (2005) argue that the complex morphology found in sign
languages is iconically motivated. On the one hand, reduplication is clearly iconic (see
section 4.3.). On the other hand, sign languages, being visual languages, are uniquely
suited for reflecting spatial cognitive categories and relations in an iconic way.
Given this, sign languages lend themselves to iconicity to a much higher degree
than spoken languages do. As a result, it is not surprising that even young sign
languages may develop complex morphology.
As an example, consider spatial inflection on verbs, that is, the classical distinction
between plain, spatial, and agreement verbs, which is pervasive in sign languages
around the world (see chapter 7 for details). Padden et al. (2010) examined the agree-
ment and spatial types of verbs in two ‘young’ sign languages (ABSL and Israeli SL)
and found compelling evidence for the development from no agreement to a full agree-
ment system. MSL, also a young sign language, confirms this pattern: plain and spatial
verbs are common while agreement verbs are less common (Gébert/Adone 2006). The
reason given for the scarcity of agreement verbs is that these verbs often entail gram-
matical marking of person, number, and syntactic roles. If we assume that spatial verbs
involve spatial mapping but no morphosyntactic categories, then we expect them to
develop earlier than agreement verbs, that is, we expect the grammatical use of space to
develop gradually. YSL seems to confirm this hypothesis. Although this sign language is
a mature sign language, it still has not developed much morphology. A careful examina-
tion of verbs in this sign language shows that both plain and spatial verbs are abundant;
the verbs see, look, come, and go, for instance, may be spatially modified to match
the location of locative arguments. In contrast, only two instances of agreement verbs,
namely give and tell, have been observed so far (Adone 2008b).

4.6. Summary

To sum up, we have seen that there are some striking structural similarities between
creole and sign languages, and that these are far from superficial. Due to space limita-
tions, only a few aspects have been singled out for comparison. In addition, both creole
and sign languages seem to share similar patterns of possessive constructions (simple
juxtaposition of possessor and possessee), rare use or lack of passive constructions, and
paucity of prepositions, among others. Obviously, none of these structures is specific
to these two types of languages, as they are attested in other languages, too. What
makes these structures highly interesting is that they are available in these two ‘young’
language groups. Having established the structural similarities between these language
groups, we may turn now to their genesis. A closer look at their acquisition conditions
makes the comparison even more compelling.

5. Similarities in acquisition conditions

5.1. The pidgin-creole language context

Bickerton drew some very interesting parallels between creole languages and first lan-
guage acquisition data to support his view of a Language Bioprogram which children
can fall back on when they do not have access to sufficient input. This view has been
much debated and is still a source of dispute within the field of creole studies. Investi-
gating data from Hawaiian Pidgin, Bickerton (1981, 1984, 1995) assumed that adult
pidgin speakers learned the target language of the community, which was English, and
passed on fragments of that language to their children. Several studies on various
pidgins have reported a restricted vocabulary, absence of morphology, and high vari-
ability, among other features. Based on the structural similarities between creoles and
the initial stages in first language acquisition, Bickerton proposed that a similar system,
protolanguage, must have been the evolutionary precursor to language. In broad terms,
protolanguage is regarded as “the beginnings of an open system of symbolic communication
that provided the bridge to the use of fully expressive languages, rich in both
lexicon and grammar” (Arbib/Bickerton 2010, vii). As such, a pidgin can be understood
as the precursor of a creole. Due to space limitations, I cannot discuss this matter any
further; however, I would like to add that the evidence for such a comparison is com-
pelling (Bickerton 1995; Arbib/Bickerton 2010).
I have discussed elsewhere that the acquisition of Mauritian Creole syntax confirms
Bickerton’s nativist view to some extent. Additional support comes from Ngukurr
Kriol (Adone 1997). It is important to note that there are two situations to be distin-
guished: the first generation of creole speakers, and the subsequent generations of
creole children. The major difference between the first and subsequent generations
lies in their access to input. The situation is less extreme for subsequent generations
of creole-acquiring children than for the first generation because the former do indeed
have access to input, which, however, can be highly variable and unsystematic in nature
(Adone 1994, 2001b).
An example is the use of lexical and null subjects. The overall development of
null subjects in Mauritian Creole showed a U-shaped pattern, which can be partly
explained by the highly unsystematic nature of the input (Adone 1994), in particular,
the unsystematic use of null/lexical subjects by adults. This in turn can be partly ex-
plained by the fact that Mauritian Creole is mostly an oral language. It is safe to say that
the language is in a state of flux. There is no conventional language model to reinforce
the knowledge of the adult native speaker. Children spend the first years of their lives
acquiring a creole spoken in their environment. When they go to school, they become
literate in English or French, languages which are not always spoken in their environ-
ment (Florigny 2010). Given that they do not get a conventional language model as
input, they develop an ‘approximate’ open system.

5.2. The homesign − sign language context

There are various circumstances in which deaf children acquire language. First, there
are those deaf children who grow up with deaf parents and therefore have access to
their parents’ sign language from birth. This type of first language acquisition is known
to proceed just like normal language acquisition (see chapter 28 for details). In a sec-
ond group, we find deaf children who are surrounded by hearing people with no or
very little knowledge of a sign language. The statistics indicate that in roughly 90 % of
cases, deaf children are born to hearing parents. For the United States, for instance, it
has been reported that only about 8.5 % of deaf children grow up in an environment
in which at least one of the parents or siblings uses a sign language. As such, in most
homes, the adult signers are non-native users of a sign language. In these cases, deaf
children are in a position similar to that of the first generation of creole-speaking
children whose parents are pidgin speakers. While we still do not know the exact nature
of the input to the first generation of creole-speaking children, we have a better picture
in the case of the deaf children.
Hearing parents often use gestures to communicate with their deaf children. The
gestural system generated by adults is structurally similar to pidgins in that it is irregu-
lar, arbitrary, and unsystematic in nature (Goldin-Meadow 2003; Senghas et al. 1997).
This spontaneous and unsystematic repertoire of gestures does not provide the deaf
children with sufficient input to acquire their L1, but may allow them to develop a
homesign system. Homesign is generally regarded as an amorphous conglomeration of
gestures or signs invented by deaf children in a predominantly hearing environment
without access to sign language input. Homesign can be regarded as a possible precur-
sor of a sign language in the same way as a pidgin is a precursor of a creole. In both
the pidgin and the homesign contexts, children have no access to a conventional lan-
guage model (see chapter 26, Homesign, for further discussion).
It is crucial to note that creole and sign languages are predominantly languages without
standardised written forms. More importantly, deaf children and creole-speaking chil-
dren do not become literate in their first language for various reasons. Many deaf
children are sent to mainstream schools and are thus forced to integrate into the hearing com-
munity and learn a spoken language at the expense of their sign language. Studies on
child homesigners around the world show that they develop a successful communi-
cative system (Goldin-Meadow 2003; Adone 2005). Taken together, these studies
strongly suggest that children play an important role in acquisition.

5.3. The role of children in creolisation

Now, if we take creolisation to be nativisation, we need to take a closer look at the
role played by children during this process. Before we move on to the discussion, let
us establish some conceptually necessary assumptions. There is little doubt that the
ability to acquire and use language is a species-specific property of humans. Although
animals can also demonstrate coordination of their behaviour, their communication
skills are limited (cf. Hultsch/Mundry/Todt 1999). It is also uncontroversial that
language, as a human activity, is highly rule-governed. Within the field of gener-
ative linguistics, it is generally assumed that humans have a ‘language faculty’ that is
partially genetically determined. Evidence for this genetic predisposition for language
comes from studies supporting the view of ‘a critical period’, or, most recently, ‘sensi-
tive periods’ for language acquisition. Evidence for this sort of constraint on human
language development comes from children who have undergone hemispherectomy
and from feral or seriously deprived children like Genie, who did not have exposure
to language until the age of 13 (Curtiss 1977). Taken together, these findings indicate
clearly that complete and successful acquisition depends very much on early exposure
to a language model.
A closer look at cross-linguistic studies highlights the ability of children to over-
generalise and regularise. Several cases of overgeneralisations in various languages
have been documented by Pinker (1999). He argues that although children are capable
of generalising and overgeneralising, such overgeneralisations are actually rare in the
acquisition data from English and German (among other languages). We might thus
ask ourselves why these children apparently rarely overgeneralise. Some scholars might
argue that the parents’ corrections could be a reason. However, if this were the case,
then we would not expect any overgeneralisations at all. Alternatively, a more interest-
ing explanation might be that these languages are all established languages, that is,
children have access to a conventional language model that they have to follow. As a
result, there is no ‘space’ for overgeneralisations. Cases of overgeneralisations in the
child’s grammar illustrate the creativity of children and thus confirm the children’s
ability to generate rules and to apply them elsewhere. This explains why English-speak-
ing children regularise the past tense -ed and produce forms such as goed for a period
of time and only later acquire the irregular target form went. By the time children
acquire went, they have encountered went in the input many times. Indeed, Pinker
(1999) argues that the frequency of the form contributes to reinforcing the memory
for the correct form went (also cf. breaked, eated, and the ‘double inflected’ forms
broked, ated).
Looking at the case of children acquiring creole languages offers us a unique per-
spective on the system these children generate when confronted with a variable, non-
conventional input. Adone (2001b, forthcoming) claims that creole-acquiring children
seem to have more ‘freedom’ to generalise and that they do so extensively. She
presents evidence for children’s creative abilities deployed while forming innovative
complex constructions such as passive, double-object, and serial verb constructions.
Obviously, these children regularise and reorganise the inconsistent input, thus
turning it into a consistent system (see Adone (2001a) for further empirical evidence
involving the generalisation of verbal endings to novel verbs by Seychelles Creole-
speaking children). In an artificial language study, Hudson Kam and Newport (2005)
investigated what adult and child learners acquire when their input is inconsistent
and variable. Their results clearly indicate that adults and children learn in different
ways: adult learners reproduced the variability detected in the input, whereas children
did not learn the variability but instead regularised and systematised it.
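
To make the contrast concrete, here is a minimal sketch of the two learning strategies; it is a hypothetical illustration, not Hudson Kam and Newport’s actual materials or analysis. The learner is exposed to input in which an invented optional marker occurs 60 % of the time: the probability-matching (adult-like) learner reproduces that rate, while the regularising (child-like) learner adopts the majority pattern categorically.

    import random

    random.seed(1)  # for a reproducible run

    def exposure(n=1000, rate=0.6):
        """Inconsistent input: an optional marker occurs with probability `rate`."""
        return [random.random() < rate for _ in range(n)]

    def probability_matcher(data, n=1000):
        """Adult-like strategy: reproduce the marker's input rate in production."""
        p = sum(data) / len(data)
        return [random.random() < p for _ in range(n)]

    def regulariser(data, n=1000):
        """Child-like strategy: adopt the majority pattern categorically."""
        return [sum(data) / len(data) >= 0.5] * n

    data = exposure()
    adult, child = probability_matcher(data), regulariser(data)
    print(f"marker rate in input: {sum(data) / len(data):.2f}")    # ~0.60
    print(f"adult output rate:    {sum(adult) / len(adult):.2f}")  # ~0.60, variable
    print(f"child output rate:    {sum(child) / len(child):.2f}")  # 1.00, regularised

On this toy input, the regulariser’s output is categorical (the marker always appears), whereas the probability matcher’s output remains as variable as the input ⫺ the asymmetry Hudson Kam and Newport report between child and adult learners.
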
In this context, studies examining the sign language acquisition of Simon, a deaf
child of deaf parents, are also interesting (Singleton 1989; Singleton/Newport 2004;
Ross/Newport 1996; Newport 1999). Crucially, Simon’s deaf parents were late L2
learners of ASL. Their use of morphology and complex syntax was inconsistent when
compared to the structures produced by native speakers. However, the difference be-
tween Simon’s morphology and syntax and that of his parents was striking. Overall,
Simon clearly surpassed the language model of his parents by regularising each mor-
pheme he was exposed to and using it in over 90 % of the contexts required. With
this study, we have strong evidence for children’s capacity to go beyond the input
they have access to.
There is a substantial body of psycholinguistic studies that stress the ability of chil-
dren to ‘create’ language in the absence of a conventional model. But it is also a well-
established observation that humans in general are capable of dealing with a so-called
‘unstructured environment’ in a highly systematic way. Interestingly, a wide array of
empirical studies reveals the human ability to learn and to modify knowledge even in
the absence of ‘environmental systematicity’, that is, to create a system out of inconsist-
ency (Frensch/Lindenberger/Kray 1999). Taken together, empirical studies on the
acquisition of creole and sign languages shed light on the human ability to acquire
language in the absence of systematic, structured input for two reasons. First, children
acquiring a creole today illustrate the case of children who are confronted with a highly
variable input. Second, the study of homesigners illustrates the case of children without
language input. In both cases, the children behave similarly to their peers who receive
a conventional language model, that is, they generalise, regularise, and reorganise what
they get as input.
More recently, several studies have focussed on the statistical learning abilities that
children display (Saffran/Aslin/Newport 1996; Tenenbaum/Xu 2005). Several observa-
tions indicate that we cannot exclude the possibility of children analysing the input for
regularities and making use of strong inference capacities during the acquisition proc-
ess. Various experiments conducted with a child population seem to support the hy-
pothesis that children are equipped with the ability to compute statistics. If children
can overgeneralise, regularise, and create new structures in language, it is also plausible
that they can learn statistically. However, future work is crucial to determine the extent
to which this statistical ability is compatible with a language acquisition scenario firmly
grounded in a UG framework.
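
As a minimal illustration of what ‘computing statistics’ over the input can mean, the sketch below calculates transitional probabilities between adjacent syllables in a continuous stream built from three invented words of the kind used in the segmentation stimuli of Saffran/Aslin/Newport (1996). It is a toy demonstration, not a model of the cited experiments: dips in transitional probability are a statistical cue to word boundaries.

    import random
    from collections import Counter

    random.seed(0)

    # A continuous syllable stream concatenated from three invented 'words'.
    words = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "bu"]]
    stream = [syllable for _ in range(200) for syllable in random.choice(words)]

    # Transitional probability P(next | current) for every adjacent pair.
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    tps = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

    # Within-word pairs (e.g. bi -> da) come out at 1.0; across-word pairs
    # (e.g. ku -> pa) hover around 0.33 and mark the word boundaries.
    for pair, tp in sorted(tps.items()):
        print(pair, round(tp, 2))
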
To sum up, we have seen that children are equipped with language-creating skills
such as regularising and systematising a (possibly impoverished) linguistic system. Now
that we have been able to establish the role of children in creolisation, we will turn to
the role adults play in the creolisation process.

5.4. The role of adults in creolisation

Several studies concerned with the creolisation process assume that adults
are the major agents (Lumsden 1999; Mufwene 1999). More recently, interdisciplinary
studies have helped to clarify the role of adults in language acquisition. Findings on
L2 acquisition (Birdsong 2005; Gullberg/Indefrey 2006) reinforce the view that the
ability for language learning decreases with age.
Based on results from experimental second language acquisition data, it can be
argued that in the initial stages, pidgin speakers must have relied on both UG and
transfer. This constitutes evidence for the main role played by adults in pidginisation.
However, they played a less important role in the creolisation process. Their role here
consists mainly of providing input to children.
Findings in various sub-disciplines of cognitive science show clearly that adults ap-
proach the language acquisition task differently from children. First of all, adult learners, in contrast
to children, have more elaborate cognitive capacities, but no longer have a child-like
memory and perceptual filter (Newport 1988, 1990; Goldowsky/Newport 1993). In par-
ticular, differences between adults and children in terms of memory have consequences
for language abilities (Caplan/Waters 1999; Salthouse 1991; Gullberg/Indefrey 2006). In
addition, Hudson Kam and Newport (2005) demonstrated very clearly that the learning
mechanisms in adults and children are different: adults, in contrast to children, do not
regularise unpredictable variation in the input.
Interestingly, differences between adults and children are not only observed in lan-
guage acquisition but also in gesture generation. In a series of experiments, Goldin-
Meadow, McNeill, and Singleton (1996) demonstrated that when hearing adults gener-
ated gestures, their goal was to produce a handshape that represented the object appro-
priately. These adults instantly invented a gesture system with segmentation and combi-
nation but, compared to children, their gestures were not systematically organised into
a system of internal contrasts, that is, they did not have morphological structure. They
also did not have combinatorial form. The unreliable and unsystematic nature of these
gestures can be explained by the fact that these gestures accompany speech and do
not carry the full burden of communication as is the case with people using gestures
primarily to communicate.
From the various studies gathered so far, we thus have good reasons to maintain the
assumption that adults can learn from other adults and transmit variable input to chil-
dren. However, the role of regularising the input can be ascribed to children only.

6. Creolisation and recreolisation revisited

Following Chomsky’s (1986) discussion of I(nternalised)-language versus E(xternalised)-language,
I propose ‘I-creolisation’ to refer to the process of nativisation that
takes place in the absence of a conventional language model in contrast to ‘E-creolisa-
tion’, which refers to the process at the societal level.
The converging evidence on the genesis of languages from the two groups is consist-
ent with the view that creolisation is also at work in the development of sign languages.
In this case, we refer to I-creolisation, given that this process takes place in the brain
of the language users concerned. As already mentioned, there are no reasons to focus on E-
creolisation in the case of sign languages. The latter type of creolisation refers to the
sociolinguistic process that is intrinsic to the formation of creole languages. I have
argued that the homesign context is comparable to the creole context in two ways,
namely in terms of acquisition conditions and in terms of the role played by children
in acquisition. Creolisation here is understood in a broader sense as a process of nativi-
sation that takes place when input in language acquisition is incomplete. Both data on
creole languages and sign languages illustrate the process of first language acquisition
with either incomplete input (i.e. creole languages) or without input (i.e. homesigners).
Under these circumstances, children either reorganise the variable input or invent a
new system. This scenario differs from the acquisition of established languages in that
variable input leads children to be creative. Several factors such as memory, age, and
the lack or small amount of experience (e.g. in the form of exposure to literacy in their
L1) play an important role in the linguistic behaviour of adults (cf. Kuhl (2000) and
Petersson et al. (2000) on the effects of literacy, especially the development of a formatted
system for phonological processing, in literate and illiterate subjects).
Parallel to creolisation, the process of recreolisation is worth mentioning. In the
field of creole studies, the term was used in the seventies to refer to a phenomenon
observed in the second generation of speakers of Jamaican Creole in Great Britain. According
to Sebba (1997), these speakers ⫺ in contrast to the first generation of creole-speaking
immigrants ⫺ apparently altered their creole to make it sound more like the one spo-
ken in Jamaica. In this sense, the second generation of creole speakers recreolise their
linguistic system through phonological and syntactic alterations. This view of recreoli-
sation fits well with the concept of E-creolisation because of its societal implications.
Using a computer model that analysed language change and its trajectory, Niyogi and
Berwick (1995) convincingly showed that in a population of child learners, a small
portion of this population fails to converge on pre-existing grammars. After exposure
to a finite amount of data, some children converge on the pre-existing grammar, while
others do not and consequently reach a different grammar. As a result, the second
generation then becomes linguistically heterogeneous. The third generation of children
hears sentences produced by the second, and they, in turn, will attain a different set of
grammars. Consequently, over successive generations, the linguistic composition be-
comes a dynamic system. A systematic look at Mauritian Creole in the seventies, nine-
ties, and today shows a pattern of change that looks very much like the one predicted
by Niyogi and Berwick (1995), namely a very dynamic but highly variable creole system
(cf. also Syea 1993).
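
A toy simulation in the spirit of this work (a hypothetical sketch, not Niyogi and Berwick’s actual formalisation) makes the dynamics visible: each child samples a finite number of utterances from the parent generation, most utterances are ambiguous between two competing grammars, and children without decisive evidence guess. All names and parameter values below are invented for illustration.

    import random

    random.seed(0)

    def learn(parents, n_examples=10, p_ambiguous=0.7):
        """One child samples utterances from the parent generation. An utterance
        is ambiguous between grammars 1 and 2 with probability `p_ambiguous`;
        otherwise it unambiguously signals the speaker's grammar. The child
        adopts the grammar favoured by the unambiguous evidence and guesses on
        a tie, so with finite data some children settle on the 'wrong' grammar."""
        votes = {1: 0, 2: 0}
        for _ in range(n_examples):
            speaker = random.choice(parents)
            if random.random() > p_ambiguous:  # an unambiguous utterance
                votes[speaker] += 1
        if votes[1] == votes[2]:
            return random.choice([1, 2])
        return 1 if votes[1] > votes[2] else 2

    population = [1] * 1000  # generation 0: everyone has grammar 1
    for generation in range(1, 6):
        population = [learn(population) for _ in range(1000)]
        share = population.count(1) / len(population)
        print(f"generation {generation}: share of grammar 1 = {share:.2f}")

Even from a uniform starting state, the tie-breaking guesses of a handful of learners introduce variation, which subsequent generations then inherit ⫺ the kind of dynamic but variable trajectory described above for Mauritian Creole.
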
Based on the evidence provided by Niyogi (2006), I propose that the changes seen
in both creoles and sign languages can be adequately explained by I-recreolisation.
However, this takes place in every generation of speakers, if and only if each genera-
tion of children does not have a conventional language model. The fact that genera-
tions of speakers do not become literate in their L1 (be it a creole or a sign language)
contributes to the non-availability of conventional patterns in these two language
groups.
The acquisition of creole languages today is comparable to the acquisition of ASL
by Simon (see section 5.3.) because both the child population and Simon are exposed
to a highly variable, non-conventional language model. Taken together, these studies
highlight the role of children in acquisition in the absence of a conventional language
model. It is exactly this condition that leads children to surpass their language models.
In comparison, the first generation of creole speakers and child homesigners do
invent language because of the non-availability of a language model.

7. Conclusion

At the beginning of this chapter, I attempted to provide a definition of creolisation
and related issues. Parallels between creole and sign languages have been established
in terms of acquisition and input. My main goal has been to show that sign languages
do share certain structural similarities with creole languages, some of which had al-
ready been observed in early studies on sign languages. Recent studies have shown
that some of the similarities cannot be explained by the ‘age’ of these languages, but
should rather be related to the acquisition conditions in the genesis of these languages.
Based on a close examination and comparison of sign languages and creole languages,
I have argued that there is solid evidence for the view that creolisation is also at
work in the formation of sign languages, in spite of modality-specific characteristics of
languages that may influence the process. The acquisition studies on creole and sign
languages do shed light on the role of children in shaping language. Both cases of
acquisition can be taken as cogent empirical evidence for the human ability, especially
children’s ability, to invent language (homesigners and most probably the first genera-
tion of creole speakers), or surpass their language model (Simon and creole-acquiring
children today). Several studies have shown that under particular circumstances, i.e.
variable, unconventional input, children regularise and systematise their input. These
findings converge with the view that children play a crucial role in language formation
(Bickerton 1984, and subsequent work; Traugott 1977). Creolisation is then taken to
refer to nativisation across modality.
While it is plausible to argue that creolisation can also take place in the formation
of sign languages, the recreolisation issue remains unclear at this stage. Interestingly,
continuing studies on sign languages might give us deeper insights into the process of
recreolisation itself. At this stage, there are still many open questions. An agenda for
future research should definitely address the following issues. In the light of what has
been discussed in creole studies, the question arises whether recreolisation can also
take place in sign languages. If yes, then we need to clarify whether it takes place in
every single generation of speakers/signers. Other related issues are the links between
creolisation, grammaticalisation, and language change. Furthermore, if there is creoli-
sation and possibly recreolisation, can we expect decreolisation in the life cycle of a
language? Both the theoretical analysis and the empirical findings substantiating the
analysis in this paper should be regarded as a first step towards disentangling the
complexities of creolisation across modality.

Acknowledgements: I would like to thank Adam Schembri, Trude Schermer, Marlyse
Baptista, and Susan Fischer for remarks on previous versions of this chapter. I am
very grateful to Roland Pfau for his insightful comments. Thank you also to Markus
Steinbach and Timo Klein. All disclaimers apply.

8. Literature
Adone, Dany
1994 The Acquisition of Mauritian Creole. Amsterdam: Benjamins.
Adone, Dany
1997 The Acquisition of Ngukurr Kriol as a First Language. A.I.A.T.S.I.S. Project Report.
Darwin/Canberra, Australia.
Adone, Dany
2001a Morphology in Two Indian Ocean Creoles. Paper Presented at the Meeting of the Soci-
ety for Pidgin and Creole Linguistics. University of Coimbra, Portugal.
Adone, Dany
2001b A Cognitive Theory of Creole Genesis. Habilitation Thesis. Heinrich-Heine Univer-
sität, Düsseldorf.
Adone, Dany
2003 Reduplication in Creole and Sign Languages. Paper Presented at the Meeting of the
Society for Pidgin and Creole Linguistics. University of Hawai‘i at Mānoa.
Adone, Dany
2005 The Case of Mauritian Home Sign. In: Brugos, Alejna/Clark-Cotton, Manuella/Ha,
Seungwan (eds.), Proceedings of the 29th Annual Boston University Conference on Lan-
guage Development. Somerville, MA: Cascadilla Press, 12⫺23.
Adone, Dany
2007 From Gestures to Mauritian Sign Language. Paper Presented at the Current Issues in
Sign Language Research Conference. University of Cologne, Germany.
Adone, Dany
2008a From Gesture to Sign. The Leap to Language. Manuscript, University of Cologne.
Adone, Dany
2008b Looking at Yolngu Sign Language a Decade Later. A Ministudy on Language Change.
Manuscript, University of Cologne.
Adone, Dany
2009 Grammaticalisation and Creolisation: The Case of Ngukurr Kriol. Paper Presented at
the Meeting of the Society for Pidgin and Creole Linguistics. San Francisco.
Adone, Dany
forthcoming The Acquisition of Creole Languages. Cambridge: Cambridge University Press.
Adone, Dany/Maypilama, E. Lawurrpa
2012 Yolngu Sign Language from a Sociolinguistics Perspective. Manuscript, Charles Dar-
win University.
Adone, Dany/Vainikka, Anne
1999 Acquisition of Wh-questions in Mauritian Creole. In: DeGraff, Michel (ed.), Language
Creation and Language Change: Creolization, Diachrony, and Development. Cam-
bridge, MA: MIT Press, 75⫺94.
Aikhenvald, Alexandra Y.
2006 Serial Verb Constructions in Typological Perspective. In: Aikhenvald, Alexandra/
Dixon, Robert M.W. (eds.), Serial Verb Constructions. A Cross-linguistic Typology. Ox-
ford: Oxford University Press, 1⫺68.
Andersen, Roger (ed.)
1983 Pidginization and Creolization as Language Acquisition. Rowley, MA: Newbury House.
Ansaldo, Umberto/Matthews, Stephen/Lim, Lisa (eds.)
2007 Deconstructing Creole. Amsterdam: Benjamins.
Arbib, Michael A./Bickerton, Derek
2010 Preface. In: Arbib, Michael A./Bickerton, Derek (eds.), The Emergence of Protolan-
guage. Holophrasis vs. Compositionality. Amsterdam: Benjamins, vii⫺xi.
Arends, Jacques
1993 Towards a Gradualist Model of Creolization. In: Byrne, Francis/Holm, John (eds.), At-
lantic Meets Pacific. Amsterdam: Benjamins, 371⫺380.
Arends, Jacques
1994 The African-born Slave Child and Creolization (a Post-script to the Bickerton/Singler
Debate on Nativization). In: Journal of Pidgin and Creole Languages 9, 115⫺119.
Arends, Jacques/Muysken, Pieter/Smith, Norval (eds.)
1995 Pidgins and Creoles: An Introduction. Amsterdam: Benjamins.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81(2), 301⫺344.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy
2010 The Roots of Linguistic Organization in a New Language. In: Arbib, Michael A./Bicker-
ton, Derek (eds.), The Emergence of Protolanguage. Holophrasis vs. Compositionality.
Amsterdam: Benjamins, 133⫺152.
Baker, Philip/Fon Sing, Guillaume (eds.)
2007 The Making of Mauritian Creole. London: Battlebridge.
Bakker, Peter/Parkvall, Mikael
2005 Reduplication in Pidgins and Creoles. In: Hurch, Bernhard (ed.), Studies on Reduplica-
tion. Berlin: Mouton de Gruyter, 511⫺553.
Baptista, Marlyse
2002 The Syntax of Cape Verdean Creole: the Sotavento Varieties. Amsterdam: Benjamins.
Bickerton, Derek
1974 Creolization, Linguistic Universals, Natural Semantax and the Brain. In: University of
Hawaii Working Papers in Linguistics 6, 125⫺141.
Bickerton, Derek
1977 Pidginization and Creolization: Language Acquisition and Language Universals. In:
Valdmann, Albert (ed.), Pidgin and Creole Linguistics. Bloomington: Indiana Univer-
sity Press, 46⫺69.
Bickerton, Derek
1981 Roots of Language. Ann Arbor, MI: Karoma Publishers.
Bickerton, Derek
1984 The Language Bioprogram Hypothesis. In: The Behavioral and Brain Sciences 7,
173⫺221.
Bickerton, Derek
1990 Language and Species. Chicago: University of Chicago Press.
Bickerton, Derek
1995 Language and Human Behavior. London: UCL Press.
Birdsong, David
2005 Interpreting Age Effects in Second Language Acquisition. In: Kroll, Judith F./De
Groot, Annette M.B. (eds.), Handbook of Bilingualism. Psycholinguistic Approaches.
Oxford: Oxford University Press, 109⫺127.
Bollée, Annegret
1977 Le Creole Français des Seychelles. Tübingen: Niemeyer.
Bollée, Annegret
1982 Die Rolle der Konvergenz bei der Kreolisierung. In: Ureland, Sture Per (ed.), Die
Leistung der Strataforschung und der Kreolistik: Typologische Aspekte der Sprachfor-
schung. Tübingen: Niemeyer, 391⫺405.
Bollée, Annegret
2007 Im Gespräch mit Annegret Bollée. In: Reutner, Ursula (ed.), Annegret Bollée: Beiträge
zur Kreolistik. Hamburg: Helmut Buske, 189⫺215.
Bos, Heleen F.
1996 Serial Verb Constructions in Sign Language of the Netherlands. Manuscript, University
of Amsterdam.
Bruyn, Adrienne/Muysken, Pieter/Verrips, Maaike
1999 Double-object Constructions in the Creole Languages: Development and Acquisition.
In: DeGraff, Michel (ed.), Language Creation and Language Change: Creolization, Di-
achrony, and Development. Cambridge, MA: MIT Press, 329⫺373.
Caplan, David/Waters, Gloria S.
1999 Verbal Working Memory and Sentence Comprehension. In: The Behavioral and Brain
Sciences 22(1), 77⫺94.
Chaudenson, Robert
1992 Des Îles, des Hommes, des Langues: Essai sur Créolisation Linguistique et Culturelle.
Paris: L’Harmattan.
Chomsky, Noam
1986 Knowledge of Language: Its Nature, Origin and Use. New York: Praeger.
Cooke, Michael/Adone, Dany
1994 Yolngu Signing ⫺ Gestures or Language? In: CALL Working Papers, Centre for Abo-
riginal Languages and Linguistics, Batchelor College, Northern Territory, Australia,
1⫺15.
Coppola, Marie/So, Wing Chee
2005 Abstract and Object-anchored Deixis: Pointing and Spatial Layout in Adult Homesign
Systems in Nicaragua. In: Brugos, Alejna/Clark-Cotton, Manuella/Ha, Seungwan (eds.),
Proceedings of the 29th Annual Boston University Conference on Language Develop-
ment. Somerville, MA: Cascadilla Press, 144⫺155.
Corne, Chris
1999 From French to Creole. The Development of New Vernaculars in the French Colonial
World. London: Battlebridge.
Curtiss, Susan
1977 Genie: a Psycholinguistic Study of a Modern-day “Wild Child”. New York: Academic
Press.
DeGraff, Michel
1999 Creolization, Language Change, and Language Acquisition: A Prolegomenon. In: De-
Graff, Michel (ed.), Language Creation and Language Change: Creolization, Diachrony,
and Development. Cambridge, MA: MIT Press, 1⫺46.
DeGraff, Michel
2003 Against Creole Exceptionalism. In: Language 79(2), 391⫺410.
Deuchar, Margaret
1987 Sign Languages as Creoles and Chomsky’s Notion of Universal Grammar. In: Modgil,
Sohan/Modgil, Celia (eds.), Noam Chomsky: Consensus and Controversy. New York:
Falmer, 81⫺91.
Fischer, Susan D.
1973 Two Processes of Reduplication in the American Sign Language. In: Foundations of
Language 9, 469⫺480.
Fischer, Susan D.
1978 Sign Language and Creoles. In: Siple, Patricia (ed.), Understanding Language through
Sign Language Research. New York: Academic Press, 309⫺331.
Fischer, Susan D.
1996 By the Numbers: Language-internal Evidence for Creolization. In: Edmondson, Will-
iam/Wilbur, Ronnie B. (eds.), International Review of Sign Linguistics. Hillsdale, NJ:
Lawrence Erlbaum, 1⫺22.
Florigny, Guilhem
2010 Acquisition du Kreol Mauricien et du Français. PhD Dissertation, University of Paris
Ouest Nanterre la Defense.
Frensch, Peter A./Lindenberger, Ulman/Kray, Jutta
1999 Imposing Structure on an Unstructured Environment: Ontogenetic Changes in the
Ability to Form Rules of Behavior under Condition of Low Environmental Predictabil-
ity. In: Friederici, Angela D./Menzel, Randolf (eds.), Learning: Rule Extraction and
Representation. Berlin: Mouton de Gruyter, 139⫺158.
Gébert, Alain/Adone, Dany
2006 A Dictionary and Grammar of Mauritian Sign Language, Vol. 1. Vacoas, République
de Maurice: Editions Le Printemps.
Gee, James Paul/Goodhart, Wendy
1988 American Sign Language and the Human Biological Capacity for Language. In: Strong,
Michael (ed.), Language Learning and Deafness. Cambridge: Cambridge University
Press, 49⫺79.
Goldin-Meadow, Susan
2003 The Resilience of Language. What Gesture Creation in Deaf Children Can Tell Us About
How All Children Learn Language. New York: Psychology Press.
Goldin-Meadow, Susan/McNeill, David/Singleton, Jenny
1996 Silence is Liberating: Removing the Handcuffs on Grammatical Expression in the Man-
ual Modality. In: Psychological Review 103, 34⫺55.
Goldowsky, Boris N./Newport, Elissa L.
1993 Modelling the Effects of Processing Limitations on the Acquisition of Morphology: The
Less is More Hypothesis. In: Clark, Eve (ed.), Proceedings of the 24th Annual Child
Language Research Forum. Stanford, CA, 124⫺138.
Gullberg, Marianne/Indefrey, Peter (eds.)
2006 The Cognitive Neuroscience of Second Language Acquisition. Oxford: Blackwell.
Hall, Robert
1966 Pidgin and Creole Languages. Ithaca, NY: Cornell University Press.
Hauser, Marc D./Fitch, W. Tecumseh
2003 What Are the Uniquely Human Components of the Language Faculty? In: Christian-
sen, Morten H./Kirby, Simon (eds.), Language Evolution. Oxford: Oxford University
Press, 158⫺181.
Hudson Kam, Carla/Newport, Elissa L.
2005 Regularizing Unpredictable Variation: The Roles of Adult and Child Learners in Lan-
guage Formation and Change. In: Language Learning and Development 1(2), 151⫺195.
Hultsch, Henrike/Mundry, Roger/Todt, Dietmar
1999 Learning, Representation and Retrieval of Rule-related Knowledge in the Song System
of Birds. In: Friederici, Angela D./Menzel, Randolf (eds.), Learning: Rule Extraction
and Representation. Berlin: Mouton de Gruyter, 89⫺115.
Hymes, Dell H.
1971 Pidginization and Creolization of Languages. Cambridge: Cambridge University Press.
Ingram, Robert M.
1978 Theme, Rheme, Topic, and Comment in the Syntax of American Sign Language. In:
Sign Language Studies 20, 193⫺218.
Kegl, Judy
2002 Language Emergence in a Language-ready Brain: Acquisition. In: Morgan, Gary/Woll,
Bencie (eds.), Directions in Sign Language Acquisition. Amsterdam: Benjamins, 207⫺
254.
Kegl, Judy/Senghas, Ann/Coppola, Marie
1999 Creation through Contact: Sign Language Emergence and Sign Language Change in
Nicaragua. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creoli-
zation, Diachrony, and Development. Cambridge, MA: MIT Press, 179⫺237.
Kendon, Adam
1988 Sign Languages of Aboriginal Australia. Cultural, Semiotic and Communicative Perspec-
tives. Cambridge: Cambridge University Press.
Klima, Edward/Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kuhl, Patricia K.
2000 Language, Mind, and Brain: Experience Alters Perception. In: Gazzaniga, Michael S.
(ed.), The New Cognitive Neurosciences. Cambridge, MA: MIT Press, 99⫺115.
Lefebvre, Claire
1986 Relexification in Creole Genesis Revisited: The Case of Haitian Creole. In: Muysken,
Pieter/Smith, Norval (eds.), Substrata Versus Universals in Creole Genesis. Amsterdam:
Benjamins, 279⫺300.
Lefebvre, Claire
1991 Take Serial Verb Constructions in Fon. In: Lefebvre, Claire (ed.), Serial Verbs: Gram-
matical, Comparative and Cognitive Approaches. Amsterdam: Benjamins, 37⫺78.
Lefebvre, Claire
1998 Creole Genesis and the Acquisition of Grammar: The Case of Haitian Creole. Cam-
bridge: Cambridge University Press.
Lumsden, John
1999 Language Acquisition and Creolization. In: DeGraff, Michel (ed.), Language Creation
and Language Change: Creolization, Diachrony, and Development. Cambridge, MA:
MIT Press, 129⫺157.
Maypilama, E. Lawurrpa/Adone, Dany
2012 Bimodal Bilingualism in the Top End. Manuscript, Charles Darwin University.
McWhorter, John
1997 Towards a New Model of Creole Genesis. New York: Peter Lang.
McWhorter, John
2001 The World’s Simplest Grammars Are Creole Grammars. In: Linguistic Typology 5(2/
3), 125⫺166.
Meier, Richard P.
1984 Sign as Creole. In: The Behavioral and Brain Sciences 7, 201⫺202.
Meir, Irit/Sandler, Wendy/Padden, Carol/Aronoff, Mark
2010a Emerging Sign Languages. In: Marschark, Mark/Spencer, Patricia E. (eds.), Oxford
Handbook of Deaf Studies, Language, and Education, Volume 2. Oxford: Oxford Uni-
versity Press, 267⫺280.
Meir, Irit/Aronoff, Mark/Sandler, Wendy/Padden, Carol
2010b Sign Languages and Compounding. In: Scalise, Sergio/Vogel, Irene (eds.), Cross-disci-
plinary Issues in Compounding. Amsterdam: Benjamins, 301⫺322.
Mühlhäusler, Peter
1986 Pidgin and Creole Linguistics. Oxford: Blackwell.
Mufwene, Salikoko
1996 Creolisation and Grammaticalization: What Creolistics Could Contribute to Grammati-
calization. In: Baker, Philip/Syea, Anand (eds.), Changing Meanings, Changing Func-
tions. Papers Relating to Grammaticalisation in Contact Languages. London: University
of Westminster Press.
Mufwene, Salikoko
1999 On the Language Bioprogram Hypothesis: Hints from Tazie. In: DeGraff, Michel (ed.),
Language Creation and Language Change: Creolization, Diachrony, and Development.
Cambridge, MA: MIT Press, 95⫺127.
Mufwene, Salikoko
2000 Creolization Is a Social, Not a Structural, Process. In: Neumann-Holzschuh, Ingrid/
Schneider, Edgar (eds.), Degrees of Restructuring in Creole Languages. Amsterdam:
Benjamins, 65⫺84.
Muysken, Pieter
1988 Are Creoles a Special Type of Language? In: Newmeyer, Frederick J. (ed.), Linguistics:
The Cambridge Survey. Vol. II: Linguistic Theory: Extensions and Implications. Cam-
bridge: Cambridge University Press, 285⫺301.
Muysken, Pieter/Veenstra, Tonjes
1995 Serial Verbs. In: Arends, Jacques/Muysken, Pieter/Smith, Norval (eds.), Pidgins and
Creoles: An Introduction. Amsterdam: Benjamins, 289⫺301.
Newport, Elissa L.
1988 Constraints on Learning and Their Role in Language Acquisition. In: Language Scien-
ces 10, 147⫺172.
Newport, Elissa L.
1990 Maturational Constraints on Language Learning. In: Cognitive Science 14, 11⫺28.
Newport, Elissa L.
1999 Reduced Input in the Acquisition of Sign Languages: Contributions to the Study of
Creolisation. In: DeGraff, Michel (ed.), Language Creation and Language Change: Cre-
olization, Diachrony, and Development. Cambridge, MA: MIT Press, 161⫺178.
Niyogi, Partha
2006 The Computational Nature of Language Learning and Evolution. Cambridge, MA: MIT
Press.
Niyogi, Partha/Berwick, Robert C.
1995 The Logical Problem of Language Change. Cambridge, MA: MIT Memo No. 1516.
Nyst, Victoria
2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, Uni-
versity of Amsterdam. Utrecht: LOT.
Padden, Carol/Meir, Irit/Aronoff, Mark/Sandler, Wendy
2010 The Grammar of Space in Two New Sign Languages. In: Brentari, Diane (ed.), Sign
Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press,
570⫺592.
Petersson, Karl M./Reis, Alexandra/Askelöf, Simon/Castro-Caldas, Alexandre/Ingvar, Martin
2000 Language Processing Modulated by Literacy: A Network Analysis of Verbal Repetition
in Literate and Illiterate Subjects. In: Journal of Cognitive Neuroscience 12(3), 364⫺382.
Pfau, Roland/Steinbach, Markus
2006 Pluralization in Sign and in Speech: A Cross-modal Typological Study. In: Linguistic
Typology 10, 135⫺182.
Pinker, Steven
1999 Words and Rules: Ingredients of Language. New York: Harper Collins.
Plag, Ingo
1993 Sentential Complementation in Sranan: On the Formation of an English-based Creole
Language. Tübingen: Niemeyer.
Plag, Ingo
1998 On the Role of Grammaticalization in Creolization. In: Gilbert, Glenn (ed.), Pidgin
and Creole Languages in the 21st Century. New York: Peter Lang.
Rathmann, Christian
2005 Event Structure in American Sign Language. PhD Dissertation, University of Texas
at Austin.
Roberts, Julian
1995 Pidgin Hawaiian: a Sociohistorical Study. In: Journal of Pidgin and Creole Languages
10, 1⫺56.
Roeper, Tom
2007 The Prism of Grammar. How Child Language Illuminates Humanism. Cambridge, MA:
MIT Press.
Ross, Danielle S./Newport, Elissa L.
1996 The Development of Language from Non-native Linguistic Input. In: Stringfellow,
Andy/Cahana-Amitay, Dalia/Hughes, Elizabeth/Zukowski, Andrea (eds.), Proceedings
of the 20th Annual Boston University Conference on Language Development, 634⫺645.
Saffran, Jenny R./Aslin, Richard N./Newport, Elissa L.
1996 Statistical Learning by 8-month-old Infants. In: Science 274, 1926⫺1928.
Salthouse, Timothy A.
1991 Theoretical Perspectives on Cognitive Aging. Hillsdale, NJ: Lawrence Erlbaum.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Sandler, Wendy/Meir, Irit/Padden, Carol/Aronoff, Mark
2005 The Emergence of Grammar: Systematic Structure in a New Language. In: Proceedings
of the National Academy of Sciences 102(7), 2661⫺2665.
Sankoff, Gillian
1979 The Genesis of a Language. In: Hill, Kenneth (ed.), The Genesis of Language. Ann
Arbor: Karoma Press, 23⫺47.
Sebba, Mark
1997 Contact Languages. Pidgins and Creoles. New York: St Martin’s Press.
Senghas, Ann
1995 Children’s Contribution to the Birth of Nicaraguan Sign Language. PhD Dissertation,
Cambridge, MA, MIT.
Senghas, Ann
2000 The Development of Early Spatial Morphology in Nicaraguan Sign Language. In: How-
ell, Catherine/Fish, Sarah/Keith-Lucas, Thea (eds.), Proceedings of the 24th Annual Bos-
ton University Conference on Language Development. Somerville, MA: Cascadilla Press,
696⫺707.
Senghas, Ann/Coppola, Marie/Newport, Elissa L./Supalla, Ted
1997 Argument Structure in Nicaraguan Sign Language: The Emergence of Grammatical
Devices. In: Hughes, Elizabeth/Hughes, Mary/Greenhill, Annabel (eds.), Proceedings
of the 21st Annual Boston University Conference on Language Development. Somerville,
MA: Cascadilla Press, 550⫺561.
Senghas, Richard J./Kegl, Judy/Senghas, Ann
1997 Creation through Contact: the Development of a Nicaraguan Deaf Community. Paper
Presented at the Second International Conference on Deaf History. University of Ham-
burg.
Siegel, Jeff
1999 Transfer Constraints and Substrate Influence in Melanesian Pidgin. In: Journal of
Pidgin and Creole Languages 14, 1⫺44.
Singler, John
1992 Nativization and Pidgin/Creole Genesis: A Reply to Bickerton. In: Journal of Pidgin
and Creole Languages 7, 319⫺333.
Singler, John
1993 African Influence Upon Afro-American Language Varieties: A Consideration of Socio-
historical Factors. In: Mufwene, Salikoko (ed.), Africanisms in Afro-American Lan-
guage Varieties. Athens, GA: University of Georgia Press, 235⫺253.
Singler, John
1996 Theories of Creole Genesis, Sociohistorical Considerations, and the Evaluation of Evi-
dence: The Case of Haitian Creole and the Relexification Hypothesis. In: Journal of
Pidgin and Creole Languages 11, 185⫺230.
Singleton, Jenny L.
1989 Restructuring of Language from Impoverished Input: Evidence for Linguistic Compen-
sation. PhD Dissertation, University of Illinois at Urbana-Champaign.
Singleton, Jenny L./Newport, Elissa L.
2004 When Learners Surpass Their Models: The Acquisition of American Sign Language
from Inconsistent Input. In: Cognitive Psychology 49, 370⫺407.
Slobin, Dan I.
1985 Cross-linguistic Evidence for the Language-making Capacity. In: Slobin, Dan I. (ed.),
The Cross-Linguistic Study of Language Acquisition. Vol. 2: Theoretical Issues. Hills-
dale, NJ: Lawrence Erlbaum, 1157⫺1256.
Supalla, Ted
1990 Serial Verbs of Motion in ASL. In: Fischer, Susan/Siple, Patricia (eds.), Theoretical
Issues in Sign Language Research. Vol. 1: Linguistics. Chicago: University of Chicago
Press, 127⫺152.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language. An Introduction. Cambridge: Cambridge Uni-
versity Press.
Syea, Anand
1993 Null Subjects in Mauritian Creole and the Pro-drop Parameter. In: Byrne, Francis/
Holm, John (eds.), Atlantic Meets Pacific. Amsterdam: Benjamins, 91⫺102.
Tenenbaum, Joshua/Xu, Fei
2005 Word Learning as Bayesian Inference: Evidence from Preschoolers. In: Proceedings of
the 27th Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence
Erlbaum, 2381⫺2386.
Thomason, Sarah/Kaufman, Terrence
1988 Language Contact, Creolization, and Genetic Linguistics. Berkeley/Los Angeles: Uni-
versity of California Press.
Todd, Loreto
1990 Pidgins and Creoles. London: Routledge.
Traugott, Elisabeth
1977 Pidginization, Creolization, and Language Change. In: Valdman, Albert (ed.), Pidgin
and Creole Linguistics. Bloomington, IN: Indiana University Press, 70⫺98.
Veenstra, Tonjes
1996 Serial Verbs in Saramaccan: Predication and Creole Genesis. PhD Dissertation, Univer-
sity of Amsterdam. The Hague: HAG.
Veenstra, Tonjes
2003 What Verbal Morphology Can Tell Us About Creole Genesis: The Case of French-
related Creoles. In: Plag, Ingo/Lappe, Sabine (eds.), The Phonology and Morphology
of Creole Languages. Tübingen: Niemeyer, 293⫺314.
Woodward, James
1978 Historical Bases of ASL. In: Siple, Patricia (ed.), Understanding Language through Sign
Language Research. New York: Academic Press, 333⫺348.

Dany Adone, Cologne (Germany)

37. Language planning


1. Introduction
2. Language planning
3. Status planning: Recognition of sign languages
4. Corpus planning
5. A case study: Standardisation of Sign Language of the Netherlands (NGT)
6. Lexical modernisation
7. Acquisition planning
8. Conclusion
9. Literature

Abstract

In this chapter, three aspects of language planning will be described for sign languages: status planning, corpus planning, and acquisition planning. As for status planning, in most countries the focus of attention is usually on the legal recognition of the national sign language. Corpus planning will be discussed in relation to standardisation and lexical modernisation, followed by a short discussion of acquisition planning. Standardisation of languages in general is a controversial issue, and there are only a few examples of efforts to standardise a sign language. The process of standardisation of the lexicon of Sign Language of the Netherlands will be discussed as an example of a specific form of standardisation, informed by thorough knowledge of the lexical variation existing in the language.

1. Introduction

In this chapter, selected aspects of sign language politics will be discussed. In describing
issues related to the use and status of a language, various terms have been used in the
literature: language politics, language policy, and language planning. These terms re-
quire some clarification. The term “language planning”, introduced by the American-
Norwegian linguist Einar Haugen in his 1968 article about modern Norwegian, de-
scribes “an activity of preparing a normative orthography, grammar, and dictionary for
the guidance of writers and speakers in a non-homogeneous speech community” (Haugen 1968, 673).
In the late 1960s and early 1970s, scientific interest in language planning mainly applied to third-world contexts where ⫺ from a Western European perspective ⫺ the establishment of one standardised national language was regarded as necessary. Language planning tended to be considered an activity whose main goals are to solve problems and to effect change. Two decades after Haugen introduced his definition
of “language planning”, the sociolinguist Robert L. Cooper proposed an alternative
definition which was somewhat less oriented towards problem solving: “language plan-
ning refers to deliberate efforts to influence the behaviour of others with respect to
the acquisition, structure, or functional allocation of their codes” (Cooper 1989, 45).
In the meantime, others had also contributed to the definition of language planning by questioning its very advisability: “It can be done, but should it be done?” (Fishman 1983).
The relationship between language politics, language policy, and language planning
may be described in the following way: from certain language politics, a certain lan-
guage policy will follow, which will be implemented through some type of language
planning. In other words: language politics refers to the why, language policy to the
what, and language planning to the how.
A policy is a deliberate plan of action to guide decisions and achieve rational out-
come(s). The term may apply to government, private sector organizations and groups,
and individuals. Policy differs from rules or law. While law can compel or prohibit
behaviours, policy merely guides actions toward those that are most likely to yield a
desired outcome. However, policy may also refer to the process of making important
organizational decisions, including the identification of different alternatives and
choosing among them on the basis of the impact they will have. Policies can also be
understood as political, management, financial, and administrative mechanisms ar-
ranged to reach explicit goals. Since policy refers to both a plan of action and the
process of making a decision, the term may be a little confusing. Therefore, in this
chapter, the term ‘language planning’ will be used, referring to those political and other
opinions and measures that focus on the regulation or improvement of the use and/or
status of a language.
In this chapter, the relevant aspects of language planning, as mentioned above, will
be discussed with respect to sign languages. It is important to stress the fact that for
most languages, but certainly for most sign languages, language planning is not formally
and rationally conducted by some central authority. As Cooper (1989, 41) states: “In
reality, language planning rarely conforms to this ideal and more often than not lan-
guage planning is a messy affair, ad hoc, haphazard, and emotionally driven”. More-
over, although language planning activities may be conducted by a wide range of insti-
tutions ⫺ apart from language academies, governments, and ministries of education ⫺
pressure groups and individuals play a crucial role in the process of sign language
planning activities in various countries.
In section 2, we will address some general aspects of language planning. The discus-
sion of status planning in section 3 comprises perspectives on deafness (section 3.1)
and legal recognition of sign languages (section 3.2). Sections 4 to 6 focus on different
aspects of corpus planning, namely standardisation (section 4.1), codification of the
language (section 4.2), a case study of standardisation (section 5), and lexical moderni-
sation (section 6). Acquisition planning will be discussed in section 7.

2. Language planning

Language planning can be divided into three subtypes: status planning, corpus plan-
ning, and acquisition or educational planning. Status planning refers to all efforts un-
dertaken to change the use and function of a language (or language variety). Deumert
(2001) states that examples of status planning are matters such as:

⫺ recognition (or not) of a language as an official language;
⫺ multilingualism in situations where more than one language is the national language
(for example, Flemish and French in Belgium).

Corpus planning is concerned with the internal structure of a language, for instance through prescriptive intervention in its forms. According to Deumert (2001),
corpus planning is often related to matters such as:

⫺ reform or introduction of a written system (spelling system; e.g., the switch from
the Arabic to the Latin writing system in Turkey during the reign of Atatürk);
⫺ standardisation (a codified form) of a certain language or language variety involving
the preparation of a normative orthography, grammar, and dictionary;
⫺ lexical modernisation of a language (for example, Hebrew and Hausa).

Acquisition planning concerns the teaching and learning of languages. Acquisition
planning in spoken languages is often supported and promoted by national institutions
such as the Dante Institute (Italian), the Goethe Institute (German), Maison Descartes
(French), etc. Comparable organisations that are concerned with teaching and learning
of sign languages are often run by international organisations of the Deaf (e.g. the
World Federation of the Deaf), by national organisations of the Deaf, by universities
(e.g. Stockholm University; the Deafness, Cognition and Language (DCAL) Research
Centre at University College in London; Gallaudet University in Washington, DC), or
by national sign language centres, such as the Centre for Sign Language and Sign
Supported Communication (KC) in Denmark, the Institute for German Sign Language
in Hamburg, the CNR in Rome, and the Dutch Sign Centre in the Netherlands. More-
over, many individual researchers all over the world have contributed in significant
ways to the development and spread of their national sign languages. Status planning,
corpus planning, and acquisition planning have all played an important role with re-
spect to sign languages around the globe and will be discussed in the following sections.

3. Status planning: Recognition of sign languages

Since the early days of sign language research in the middle of the 20th century, status
planning and, more specifically, the recognition of a language as a fully-fledged lan-
guage has been a major issue. The status of a sign language depends on the status of
deaf people, the historical background, and the role a language plays within deaf edu-
cation. The history of sign language research is thus closely related to the history of
deaf education and the perspectives on deaf people. Therefore, before turning to the
recognition of sign languages in section 3.2, two different views on deafness and deaf
people will first be introduced in the next section.

3.1. Perspectives on deafness: Deficit or linguistic minority

For centuries, deafness has been viewed as a deficit. This, often medical, perspective
focuses on the fact that deaf people cannot hear (well). From this perspective, deaf
people have a problem that needs to be fixed as quickly as possible in order for them
to integrate properly and fully in the hearing society. From the perspective of the
hearing majority, deaf people are different and need to assimilate and adapt. Great
emphasis is therefore put on technological aids, ranging from hearing aids to cochlear
implants (CI). With each new technology that becomes available, the hope to finally
cure deafness increases. Within this mostly hearing perspective there is no room for
Deaf identity or Deaf culture: deaf people are just hearing people who cannot hear
(Lane 2002).
This perspective on deafness has had and still has a tremendous impact on the lives
of deaf people throughout the world (for an overview, see Monaghan et al. (2003) and
Ladd (2003)). The status of sign languages in Western societies has varied throughout history. In some periods, sign languages were used in one way or another in deaf
education (for instance, in 18th century Paris); at other times, sign languages were
banned from deaf education altogether (from 1880⫺1980 in most Western societies;
see chapter 38, History of Sign Languages and Sign Language Linguistics, for details
on the history of deaf education).
The first study that applied the principles of spoken language linguistics to a sign
language (American Sign Language, ASL) was William Stokoe’s monograph Sign Lan-
guage Structure (Stokoe 1960). This study as well as subsequent, by now ‘classic’, stud-
ies on ASL by American linguists such as Edward Klima and Ursula Bellugi (Klima/
Bellugi 1979) and Charlotte Baker and Dennis Cokely (Baker/Cokely 1980) gave the
impetus to another perspective on deafness and deaf people: if sign languages are
natural languages, then their users belong to a linguistic minority. Consequently, deaf
people are not hearing people with a deficit; they are people who are different from
hearing people. They may not have access to a spoken language, but they do have
access to a visual language which can be acquired in a natural way, comparable to the
acquisition of a spoken language (see chapter 28, Acquisition). Under this view, then,
deaf people form a Deaf community with its own language, identity, and culture. Still,
the Deaf minorities that make up Deaf communities are not a homogenous group.
Paddy Ladd writes:

It is also important to note that within Western societies where there is significant migra-
tion, or within linguistic minorities inside a single nation-state, there are Deaf people who
are in effect, minorities within minorities. Given the oralist hegemony, most of these Deaf
people have been cut off not only from mainstream culture, but also from their own ‘native’
cultures, a form of double oppression immensely damaging to them even without factoring
oppression from Deaf communities themselves. (Ladd 2003, 59)
Furthermore, there is an important difference between Deaf communities and other
language minorities. Sign languages are passed on from one generation to the next
only to a very limited extent. The main reason for this is that more than 95 % of deaf
people have hearing parents for whom a sign language is not a native language. There-
fore, most deaf people have learned their sign language from deaf peers, from deaf
adults outside of the family, or from parents who have acquired a sign language as a
second language.
It has been pointed out that ⫺ contrary to what many believe ⫺ linguistic analyses
and scientific descriptions of sign language did exist in the United States as early as
the 19th century, and that deaf educators did have access to literature related to the
role, use, and structure of sign language (Nover 2000). However, these studies never
had an impact comparable to that of the early linguistic studies on the structure of
ASL mentioned above, which gave a major impulse to linguistic research. In many
countries, this legitimization of signing also led to major changes in deaf education
policies and to the emancipation of deaf people. It seems that the timing of Stokoe’s
analysis was perfect: oral methods in Western deaf education had failed dramatically,
deaf people did not integrate into the hearing society, and the reading skills of deaf
school leavers did not reach beyond those of nine-year-old hearing children (Conrad
1979). Furthermore, around the same time, language acquisition studies stressed the
importance of early mother-child interaction for adequate language development. In
several parts of the world, the awareness of the importance of their natural language
for deaf people increased.
The first European conference on sign language research, held in Sweden in 1979
(Ahlgren/Bergman 1980), inspired other researchers to initiate research establishing
the existence of distinct sign languages in many different European countries. In 1981,
Sweden was the first country in the world to recognise its national sign language, Swed-
ish Sign Language (SSL), as a language by making it mandatory in deaf education.
The legislation followed a 1977 home language reform measure allowing minority and
immigrant children to receive instruction in their native language (Monaghan 2003, 15).

3.2. Legal recognition of sign languages

For a very long time, sign languages have been ignored and as a consequence, their
potential has been underestimated. In areas where deaf people are not allowed to use
their own natural language in all functions of society, their sign language clearly has a
minority status which is closely related to the status of its users. However, being a
minority does not always automatically generate a minority status for the respective
sign language. There are examples of communities in which the hearing majority used
or still uses the sign language of the deaf minority as a lingua franca: for instance,
Martha’s Vineyard (Groce 1985), the village of Desa Kolok in Bali (Branson/Miller/
Marsaja 1996), and the village of Adamorobe in Ghana (Nyst 2007) (see chapter 24,
Shared Sign Languages, for discussion).
The status of sign languages depends very much on the legal recognition of these
languages ⫺ especially from the point of view of Deaf communities and Deaf organisa-
tions ⫺ and has been one of the most important issues in various countries since 1981.
Most of the activities have centred on sign language recognition and bilingual education, which is quite understandable given the history of deaf education and
the fact that deaf people have been in a dependent and mostly powerless position for
centuries. Legal recognition may give the power of control, that is, the right of language
choice, back to those who should choose, who should be in control: the deaf people
themselves. A word of caution, though, is necessary here; it is aptly formulated by Verena Krausneker:

Recognition of a Sign Language will not solve all problems of its users at once and maybe
not even in the near future. But legal recognition of Sign Languages will secure the social
and legal space for its users to stop the tiresome work of constant self-defence and start
creative self-defined processes and developments. Legal recognition of a language will give
a minority space to think and desire a plan and achieve the many other things its members
think they need or want. Basic security in the form of language rights will influence educa-
tional and other most relevant practices deeply. (Krausneker 2003, 11)

The legal status of sign languages differs from country to country. There is no standard
way in which such recognition can be formally or legally extended: every country has
its own interpretation. In some countries, the national sign language is an official state
language, whereas in others, it has a protected status in certain areas, such as education.
Australian Sign Language (Auslan), for example, was recognised by the Australian
Government as a “community language other than English” and as the preferred lan-
guage of the Deaf community in policy statements in 1987 and 1991. This recognition,
however, does not ensure any structural provision of services in Auslan.
Another example of legal recognition is Spain. Full legal recognition of sign lan-
guages in Spain was only granted in 2007, when a Spanish State law concerning
sign languages was passed. However, several autonomous regional governments had
already passed bills during the 1990s that indirectly recognized the status of sign lan-
guage and aimed at promoting accessibility in Spanish Sign Language (LSE) in differ-
ent areas, featuring education as one of the central ones. It should be pointed out that
legal recognition is not equivalent to official status because the Spanish Constitution
from 1978 only grants official status to four spoken languages (Spanish, Catalan, Gali-
cian, and Basque). The new Catalan Autonomy Law from 2006 includes the right to use
Catalan Sign Language (LSC) and promotes its teaching and protection. The Catalan
Parliament had already passed a non-binding bill in 1994 promoting the use of LSC in
the Catalan education system and research into the language (Josep Quer, personal
communication).
The situation with respect to legal recognition can be summarised as follows
(Wheatley/Pabsch 2010; Krausneker 2008):

⫺ Ten countries have recognised their national sign languages in constitutional laws:
Austria (2005), the Czech Republic (1998), Ecuador (1998), Finland (1995), Iceland
(2011), New Zealand (2006), Portugal (1998), South Africa (1996), Uganda (1995),
and Venezuela (1999).
⫺ In the following 32 countries, the national sign languages have legal status through
other laws: Australia, Belgium (Flanders), Brazil, Byelorussia, Canada, China, Colombia,
Cyprus, Denmark, France, Germany, Greece, Hungary, Iran, Latvia, Lithuania,
Mozambique, Norway, Peru, Poland, Romania, Russia, the Slovak Republic, Spain,
Sri Lanka, Sweden, Switzerland, Thailand, Ukraine, the United States, Uruguay,
and Zimbabwe.
⫺ In Cuba, Mauritius, the Netherlands, and the United Kingdom, the national sign
languages have been recognised politically, which has resulted in the funding of
large national projects (e.g. DCAL in London) and institutions. In the Netherlands,
for instance, the Dutch Sign Centre is partially funded for lexicographic activities
and the Sign Language of the Netherlands (NGT) Interpreter/Teacher training pro-
gramme was established at the University of Utrecht. Note, however, that this type of recognition is not sufficient under Dutch law to confer legal status on NGT itself as a language.

The European Parliament unanimously approved a resolution about sign languages on June 17, 1988. The resolution asks all member states to recognise their national sign languages as official languages of the Deaf. So far, this resolution has had
limited effect. In 2003, sign languages were recognised as minority languages in the
European Charter for Regional or Minority Languages. Another way to pursue legal
recognition might be via a new Human Rights charter for which linguistic human rights
are a prerequisite and that will be ratified by all member states. In 1996, a number of
institutions and non-governmental organizations, present at the UNESCO meeting in
Barcelona, presented the Universal Declaration of Linguistic Rights, which takes lan-
guage communities and not states as its point of departure (UNESCO 1996). One of
the relevant articles in the light of recognition of sign languages is article 3.
Sign language users and their languages have been in danger at various times
throughout history. However, a growing number of people have been referring to sign
languages as endangered languages ⫺ in fact, likely to become extinct in the near
future ⫺ since Graham Turner expressed his concerns in 2004:

We have seen a dramatic growth in several major types of threat to heritage sign languages:
demographic shifts which alone will reduce signing populations sharply, the rapid uptake
of cochlear implants […], the development and imminent roll-out of biotechnologies such
as genetic intervention and hair-cell regeneration; and the on-going rise of under-skilled
L2 users of sign language in professional positions, coinciding with a decline in concern
over the politics of language among younger Deaf people. (Turner 2004, 180)

Legal recognition will not be sufficient to ensure the status of sign languages. A com-
munity that wants to preserve its language has a number of options. A spoken language
example is that of Modern Hebrew, which was revived as a mother tongue after centu-
ries of being learned and studied only in its ancient written form. Similarly, Irish has
had considerable institutional and political support as the national language of Ireland,
despite major inroads by English. In New Zealand, Maori communities established
nursery schools staffed by elders and conducted entirely in Maori, called kohanga reo,
“language nests” (Woodbury 2009).
It is the duty of linguists to learn as much as possible about languages, so that even
if a language disappears, knowledge of that language will not disappear at the same time.
To that end, researchers document sign language use in both formal and informal
settings on video, along with translations and notations. In recent years, a growing number of projects have been established to compile digital sign language corpora; for
instance, for NGT (Crasborn/Zwitserlood/Ros 2008), British Sign Language (Schembri
et al. 2009), and SSL (Bergman et al. 2011). These corpora will not only support the
linguistic research that is needed to describe the individual languages, they will also
provide access for learners of sign languages, and they will ensure preservation of the
language as it is used at a given point in time (for digital sign language corpora, see
also chapter 44, Computer Modelling). However, a language will only be truly alive
and out of danger as long as there is a community of language users and the language
is transmitted from generation to generation. Sign languages are extremely vulnerable
in this respect.
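To make the structure of such corpus records concrete, the following minimal sketch shows how a time-aligned annotation might be represented. It is a hypothetical illustration only: the class and field names are invented here and do not describe the actual file formats of the corpora cited above, which are typically produced with dedicated annotation tools.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    """One annotated stretch of signing, time-aligned to the video."""
    start_ms: int       # onset in the video, in milliseconds
    end_ms: int         # offset in milliseconds
    glosses: List[str]  # sign-by-sign glosses for the stretch
    translation: str    # free translation into the surrounding spoken language

@dataclass
class CorpusRecord:
    """A video clip together with signer metadata and its annotations."""
    video_file: str     # hypothetical file name of the recording
    signer_region: str  # signer's region, relevant for variation research
    annotations: List[Annotation] = field(default_factory=list)

# A toy record with a single annotated utterance (all values invented).
record = CorpusRecord(video_file="session042_cam1.mp4", signer_region="Groningen")
record.annotations.append(
    Annotation(start_ms=1200, end_ms=2950,
               glosses=["INDEX-1", "GO-TO", "SCHOOL"],
               translation="I am going to school."))

It is this time alignment of glosses and translations to the video signal that makes such corpora usable for descriptive research, for learners, and for long-term preservation.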

4. Corpus planning
One of the goals of corpus planning is the prescriptive intervention in the forms of a
language. Corpus planning is concerned with the internal structure of a language, that
is, with matters such as writing systems, standardisation, and lexical modernisation.
There is no standardised writing system for sign languages comparable to the writing
systems that exist for spoken languages. Rather, there are many different ways to no-
tate signs and sign sentences based on Stokoe’s notation system (e.g. the Hamburg
Notation System) or on dance writing systems (e.g. Sutton’s Sign Writing System; see
chapter 43, Transcription, for details). The lack of a written system has contributed
greatly to language variation within sign languages. In relation to most sign languages,
standardisation has not been a goal in itself. Linguistic interest in sign languages has
led to documentation of the lexicon and grammar of a growing number of sign lan-
guages. In this section, we will discuss standardisation and codification. In section 5,
we will present a case study of an explicit form of standardisation as a prerequisite for
legal recognition of a sign language, NGT in the Netherlands.

4.1. Standardisation

A standard language is most commonly defined as a codified form of a language, which
is understood as the uniform linguistic norm (Deumert 2001; Reagan 2001). The term
‘codified’ refers to explicit norms of a language specified in documents such as diction-
aries and grammars. The concept of a standard language is often wrongly associated
with the ‘pure’, ‘original’ form of a language ⫺ as if there were such a thing as a ‘pure’ form of a language. Often, the most prestigious form of a language becomes
standardised. The language variety of those who have power and status in society is
often seen as the most prestigious one. Acceptance of this variety as the norm is vital
for a successful standardisation. With respect to spoken languages, the standard form
is most often the dialect that is associated with specific subgroups and with specific
functions. In this context, it is interesting to see which essential features of modern
Standard English David Crystal (1995, 110) has listed:

⫺ It is historically based on one dialect among many, but now has special status, with-
out a local base. It is largely (but not completely) neutral with respect to regional
identity.
⫺ Standard English is not a matter of pronunciation, but rather of grammar, vocabulary,
and orthography.
⫺ It carries most ‘prestige’ within English speaking countries.
⫺ It is a desirable educational target.
⫺ Although widely understood, it is not widely spoken.

Status planning and corpus planning are very closely related. If the status of a language
needs to be raised, a form of corpus planning is required. For example, the lexicon
needs to be expanded in order to meet the needs of different functions of the language.
Different degrees of standardisation can be distinguished (based on Deumert 2001):

1. Un-standardised spoken or sign language for which no written system has been de-
veloped.
2. Partly standardised or un-standardised written language used mainly in primary
education. The language is characterised by high degrees of linguistic variation.
3. Young standard language: used in education and administration, but not felt to be
fit for use in science, technology, and at a tertiary or research level.
4. Archaic standard language: languages which were used widely in pre-industrial
times but are no longer spoken, such as Classical Latin and Greek.
5. Mature modern standard language: employed in all areas of communication; for example, English, French, German, Dutch, Italian, Swedish, etc.

Most sign languages can be placed in stages 1⫺3. We have to distinguish between active forms of standardising a language (see section 5 for further discussion) and more natural processes of language standardisation. Any form of codification of a language, however, will lead ⫺ even if unintentionally ⫺ to some form of standardisation. This is the case for many sign languages, as will be discussed in the next section.

4.2. Codification of the language

The history of most sign languages is one of oppression by hearing educationalists. Until the mid-1960s, most sign languages were not viewed as fully-fledged languages on a par with spoken languages. Once sign language research starts in a country, the first major task usually undertaken is the compilation of a dictionary.
Clearly, dictionaries are much more than just texts that describe the meanings of words or signs. The word ‘dictionary’ suggests authority, status, and scholarship: the size of the dictionary, the paper that is used, and its cover all contribute to the status of the language described. The first dictionaries of sign languages did not
only serve the purpose of describing the lexicon of the language; rather, for most Deaf
communities, a sign language dictionary is a historic publication of paramount social
importance which can be used as a powerful instrument in the advancement of high-
quality bilingual education as well as in the full exercise of the constitutional rights of
deaf people (e.g. the first BSL/English dictionary (Brien 1992) and the first print publi-
cation of the standard signs of NGT, the Van Dale Basiswoordenboek NGT (Schermer/
Koolhof 2009)). Introductions to sign dictionaries often explicitly mention the fact that
the purpose of the dictionary is to confirm and raise the status of the sign language.
Sign language dictionaries deal with variation in the language in different ways.
Even though the primary intention of the majority of sign lexicographers is to docu-
ment and describe the lexicon of a sign language, their choices in this process deter-
mine which sign varieties are included and which are not. Therefore, inevitably, many
sign language lexicographers produce a standardising dictionary of the sign language
or at least (mostly unintentionally) nominate one variant to be the preferred one. And
even if this is not the intention of the lexicographer, the general public ⫺ especially
hearing sign language learners ⫺ often interprets the information in the dictionary as
reflecting the prescribed rather than the described language. The fact that sign languages
lack a written form confronts lexicographers with a serious problem: which variant of a sign is the correct one and should thus be included as the citation form in the dictionary? Therefore, lexicographers have to determine, in one way or another, whether
item in the language is used by the majority of a given population, or whether it is
used by a particular subset of the population. To date only a few sign language diction-
aries have been based on extensive research on language variation (e.g. for Auslan
(Johnston 1989), NGT (Schermer/Harder/Bos 1988; Schermer et al. 2006; Schermer/
Koolhof 2009), and Danish Sign Language (Centre for Sign Language and Sign Sup-
ported Speech KC 2008)). Also, there are online dictionaries available which document
the regional varieties of the particular sign language (e.g. the Flemish Sign Language
dictionary (www.gebaren.ugent.be) and the work done on Swiss German Sign Lan-
guage by Boyes Braem (2001)).
In cases where sign language dictionaries have indeed been made with the explicit
purpose of standardising the sign language in mind, but have not been based on exten-
sive research on lexical variation, these attempts at lasting standardisation have usually
failed because the deaf community did not accept the dictionary as a reflection of their
sign language lexicon; this happened, for instance, in Flanders and Sweden in the 1970s.
Another example of controversy concerns Japanese Sign Language (NS). Nakamura
(2011) describes the debate about the way in which the dominant organization of deaf
people in Japan (JFD) has tried since 1980 to maintain active control of the lexicon in
a way that is no longer accepted by a growing part of the deaf community. The contro-
versy is mostly about the way in which new lexicon is coined, which ⫺ according to
members of D-Pro (a group of young Deaf people that has been active since 1993) ⫺
does not reflect pure NS, from which mouthings and vocalisations of words should be excluded.
Another form of codification is the description of the grammar of the language.
Since sign language linguistics is a fairly young research field, to date very few compre-
hensive sign language grammars are available (see, for example, Baker/Cokely (1980)
for ASL; Schermer et al. (1991) for NGT; Sutton-Spence/Woll (1999) for BSL; John-
ston/Schembri (2007) for Auslan; Gébert/Adone (2006) for Mauritian Sign Language;
Papaspyrou et al. (2008) for German Sign Language; and Meir/Sandler (2008) for Isra-
eli Sign Language). As with dictionaries, most grammars are intended to be descriptive,
but are viewed by language learners as prescriptive.
5. A case study: Standardisation of Sign Language of the Netherlands (NGT)

In this section, the process of standardisation will be illustrated by means of a case
study: the standardisation of NGT. Schermer (2003) has described this process in full
detail. The information from this article will be briefly summarised below.
As a result of a decade of lobbying for the recognition of NGT by the Dutch Deaf
Council, a covenant was signed in 1998 between all schools for the Deaf, the Organisa-
tion for Parents of Deaf Children (FODOK), the Ministry of Education, and the Minis-
try of Health and Welfare to carry out three projects, the goal of which was to imple-
ment bilingual (NGT/Dutch) education for Deaf children. One of these projects was
the Standardisation of the Basic Lexicon of NGT to be used in schools for the Deaf
(referred to as the “STABOL” project).
The projects were carried out between 1999⫺2002 by the Dutch Sign Centre, the
University of Amsterdam, and the schools for the Deaf. The STABOL project was
required by the Dutch government as a prerequisite for the legal recognition of NGT
despite objections by the Dutch Deaf community and NGT researchers. In the period
between 1980 (when research on NGT started) and 1999, a major project had been
carried out which documented extensively the lexicon of NGT. The results of this so-
called KOMVA project, which had yielded information about the extent of regional
variation in NGT (cf. Schermer 1990, 2003), formed the basis for the standardisation
project.
The standardisation of the NGT basic lexicon was a highly controversial issue. As
far as the Dutch government was concerned, it was not negotiable: without a standard
lexicon, there could be no legal recognition of NGT. There was also an economic
argument for standardising part of the lexicon: the development of NGT materials in
different regional variants was expensive. Moreover, hearing parents and teachers were
not inclined to learn different regional variants. The schools for the Deaf were also in
favour of national NGT materials that could be used in NGT tests to monitor the
development of linguistic skills and to set a national standard. The idea of standardisa-
tion, however, met with strong opposition from the Deaf community and from linguists
in the Netherlands at that time. Probably, the concept of standardisation was difficult
for the Deaf community to accept since it was not so long ago that their language had
been suppressed by hearing people. And now again, it was hearing people who were
enforcing some form of standardisation.
The STABOL project was carried out by a group of linguists, native deaf signers
(mostly deaf teachers), and native hearing signers in close cooperation with the Deaf
community and coordinated by the Dutch Sign Centre. A network of Deaf signers
from different regions was established. This network in turn maintained contacts with
larger groups of Deaf people whose comments and ideas were shared with the project
group, which made all of the final decisions. Within the project, a standard sign was
defined as a sign

that will be used nationally in schools and preschool programs for deaf children and their
parents. It does not mean that other variants are not ‘proper signs’ that the Deaf commu-
nity can no longer use. (Schermer 2003, 480)
900 VII. Variation and change

5.1. Method of standardisation

The STABOL project set out to standardise a total of 5000 signs: 2500 signs were
selected from the basic lexicon, which comprises all signs that are taught in the first
three levels of the national NGT courses; 2500 signs were selected in relation to educa-
tional subjects. For this second group of signs, standardisation was not a problem since
these were mostly new signs with very little or no variation. We will expand a little
more on the first set of 2500 signs.
The process of standardising NGT started in the early 1980s with the production of
national sign language dictionaries which included all regional variants and preference
signs. Preference signs are those signs that are identical in all five regions in the Nether-
lands (Schermer/Harder/Bos 1988). Discussions amongst members of the STABOL
project group revealed that the procedures we had used in previous years (selection
of preference signs) had actually worked quite well. The STABOL project group decided to use in its meetings the set of linguistic guidelines that had been developed on the basis of previous research (see Schermer (2003) for details). In principle, signs that
were the same nationally (i.e. those that were labelled “preference signs” in the first
dictionaries) were accepted as standard signs. The 2500 signs from the basic lexicon
that were standardised in the STABOL project can be characterised as follows:

⫺ 60 % of the signs are national signs that are recognised and/or used with the same meaning in all regions (no regional variation);
⫺ 25 % of the signs are regional signs that have been included in the standard lexicon;
⫺ for 15 % of the signs, one variant was selected as the standard sign.

Fig. 37.1: Regional NGT variants included as synonyms


Fig. 37.2: Regional NGT variants included as signs with refined meaning

Hence, for 25 % of the signs, regional variation was included in the standard lexicon in one of two ways. First, regional variation may be included in the form of synonyms. This is true, for example, for the signs careful and mummy
shown in Figure 37.1. The reason for including these regional signs as synonyms was
the fact that the members of the STABOL group could not agree on one standard sign
based on the set of criteria. In this manner, a great number of synonyms were added
to the lexicon.
Second, regional variation may be included through refining the meaning of a sign; for example, the signs horse and ride-on-horseback, or baker and bakery. In
the Amsterdam region, the sign horse (Figure 37.2b) was used for both the animal and the action of riding on horseback. In contrast, in the Groningen region, the sign
horse (Figure 37.2a) was only used for the animal and not for horseback riding. In the
standardisation process, the Groningen sign became the standard sign horse while
the Amsterdam sign became the standard sign for ride-on-horseback (Figure 37.2b).
Consequently, both regional variants were included.
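The decision logic just described ⫺ national signs accepted as standard, regional variants kept as synonyms or assigned refined meanings, and an explicit selection for a minority of signs ⫺ can be summarised schematically. The sketch below is only an illustrative rendering of that logic, not a procedure actually used in STABOL: the decisions were made by the project group on the basis of its linguistic guidelines, and the region names other than Amsterdam and Groningen are placeholders.

def classify_variants(variants_by_region: dict[str, str]) -> str:
    """Schematic summary of the STABOL outcomes for one concept.

    `variants_by_region` maps a region name to an identifier for the sign
    variant used there for the concept in question.
    """
    if len(set(variants_by_region.values())) == 1:
        # Identical in all regions: a 'preference sign', accepted as standard.
        return "national sign -> adopted as standard"
    # Otherwise the project group decided among three outcomes: keep several
    # variants as synonyms, assign variants to refined meanings (cf. HORSE
    # vs. RIDE-ON-HORSEBACK), or select one variant as the standard sign.
    return "regional variation -> synonyms, refined meanings, or selection"

# Toy example: the same variant in all five regions counts as a national sign.
print(classify_variants({"Amsterdam": "A", "Groningen": "A", "Region3": "A",
                         "Region4": "A", "Region5": "A"}))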
In the STABOL project, for only a few hundred signs out of the 2500 standardised
signs, an explicit choice was made between regional variants based on linguistic criteria
as mentioned earlier in this chapter. One of the reasons why the NGT standard lexicon has been accepted by teachers of the Deaf, who had to teach standard signs rather than their own regional variants, might be the fact that the actual number of signs affected by the standardisation process is quite low. Note, however, that the
standard lexicon was introduced in the schools for the Deaf and in the NGT course
materials; the Deaf adult population continued to use regional variants. It is interesting
to note that Deaf children of Deaf parents who are aware of the fact that there is a
difference in signing between their Deaf parents, their Deaf grandparents, and them-
selves and who have been educated with the standard signs, identify with these stand-
ard signs as their own signs (Elferink, personal communication).

5.2. Results and implementation

As a result of the STABOL project, 5000 signs were standardised and made available
in 2002. Since then, the Dutch Sign Centre has continued to make an inventory of
signs, to develop new lexicon, and to disseminate NGT lexicon. The database currently
contains 16,000 signs of which 14,000 have been made available in different ways.
In Figure 37.3, the distribution of these 14,000 signs is shown: 25 % of the standard
signs have been standardised within the STABOL project, 42 % of the signs are existing
national signs (no regional variation), and 33 % of the signs are new lexical items
(mostly signs that are used in health and justice and for school subjects).

Fig. 37.3: Distribution of signs in NGT standard sign language dictionaries
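To illustrate how such a database might distinguish standard signs, regional variants, and newly developed lexicon, the following hypothetical sketch encodes the three categories of Figure 37.3 as a record type. All field names and the example URL are invented for illustration and do not describe the Dutch Sign Centre's actual database schema.

from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    STABOL = "standardised in the STABOL project"        # ca. 25 %
    NATIONAL = "existing national sign, no variation"    # ca. 42 %
    NEW = "newly developed sign (e.g. school subjects)"  # ca. 33 %

@dataclass
class SignEntry:
    gloss: str          # citation gloss, e.g. "HORSE"
    origin: Origin      # how the sign entered the standard lexicon
    video_url: str      # link to the citation-form movie (placeholder)
    regions: list[str]  # regions where this variant is used
    is_standard: bool   # True for the standard sign, False for a variant

entry = SignEntry(gloss="HORSE", origin=Origin.STABOL,
                  video_url="https://example.org/signs/horse.mp4",
                  regions=["Groningen"], is_standard=True)

Keeping dictionaries and course materials linked to one maintained database, as described below, is what makes daily updates possible and allowed regional variants to be added later without re-editing each published dictionary.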

Naturally, the establishment of a standard sign alone is not sufficient for standardising a lexicon. The implementation of the NGT standard lexicon is coordinated by the Dutch Sign Centre and involves several activities, some of which are ongoing:

⫺ Workshops were organised to inform the Deaf community and NGT teachers about
the new lexicon.
⫺ The lexicon was dispersed via DVD-ROMs and all national NGT course materials
have been adapted to include standard signs.
⫺ All schools for the Deaf have adopted the standard lexicon, and since 2002 all teachers have been required to learn and teach standard NGT signs.
⫺ On television, only standard signs are used by the NGT interpreters.
⫺ The NGT curriculum that was developed for primary deaf schools also contains
standard NGT signs.
⫺ Since 2006, online dictionaries with almost 14,000 standard signs have been available. As
of 2011, regional variants are also shown in the main online dictionary. The diction-
aries are linked to the lexical database; both the dictionaries and the database are
maintained by the Dutch Sign Centre and updated daily.
⫺ In 2009, the first national standard NGT dictionary (3000 signs) was published
in book form (Schermer/Koolhof 2009), followed by the online version with 3000
sign movies and 3000 example sentences in NGT (2010).

Some people view the production of dictionaries with standard signs as avoiding the
issue of regional variation altogether (see chapter 33, Sociolinguistic Aspects of Varia-
tion and Change). This is not the case in the Netherlands: in the 1980s, an inventory
of regional variation was made based on a large corpus and, contrary to most other
countries at that time, our first sign language dictionaries contained all regional vari-
ants. Without thorough knowledge of lexical variation, standardisation of NGT lexicon
and the implementation of the standard signs in all schools for the deaf and teaching
materials would not have been possible. In 2011, a large project was initiated by the
Dutch Sign Centre to include films of the original data that were collected in 1982 in
the database and make the regional variation available in addition to the standard
lexicon.
Note finally that, despite the fact that the basic lexicon of NGT was standardised
in 2002, the Dutch Government still has not recognised NGT legally as a language.
There are a number of implicit legal recognitions in the Netherlands, such as the right
to have NGT interpreters and the establishment of the NGT teacher/interpreter train-
ing programme, but this is not considered to be a legal recognition of NGT as a lan-
guage used in the Netherlands. An important reason why NGT has not been legally
recognised within the Dutch constitution is that spoken Dutch is not officially recog-
nised as a language in the Dutch constitution either. The Dutch Deaf Council and the
Dutch Sign Centre are still working on some form of legal recognition of NGT as
a language.

6. Lexical modernisation
For almost a century, most sign languages throughout Western Europe were banned from educational systems and were not used in all parts of society. At least the latter is also true for sign languages on other continents. As a consequence,
there are deficiencies in the vocabulary compared to the spoken languages of the hear-
ing community. The recognition of sign languages, the introduction of bilingual pro-
grammes in deaf education, and the continuing growth of educational sign language
interpreting at secondary and tertiary levels of education have created an urgent need
for a coordinated effort to determine and develop new signs for various contexts, such as technical terms and school subjects.
A productive method for coining new signs is to work with a team of native deaf
signers, (deaf) linguists, and people who have the necessary content knowledge. Good examples of dictionaries aimed at specific professions for which new signs had to be developed are those produced by the Arbeitsgruppe Fachgebärden (‘working group for technical signs’) at the University of Hamburg. In the
past 17 years, the team has compiled, for instance, lexicons on psychology (1996), car-
pentry (1998), and health care (2007).
In the Netherlands, the NGT lexicon has been expanded systematically since 2000.
A major tool in the development and the dispersion of new lexical items is a national
database and an online dictionary, all coordinated by one national centre, the Dutch
Sign Centre. The Dutch Ministry of Education is funding the Dutch Sign Centre specif-
ically for maintaining and developing the NGT lexicon. This is crucial for the develop-
ment of (teaching) materials, dictionaries, and the implementation of bilingual educa-
tion for deaf children.

7. Acquisition planning
As described above, acquisition planning concerns the teaching and learning of languages. Some form of acquisition planning is required to change the status of a language and to ensure its survival. Ideally, a nationally funded institute
or academy (such as, for instance, the Académie française for French or the Fryske Akademy for Frisian) should coordinate the distribution of teaching materials, the
development of dictionaries and grammars, and the development of a national curricu-
lum comparable to the Common European Framework of Reference for second language learn-
ing. Even though the situation has improved greatly for most sign languages in the last
25 years, their position is still very vulnerable and in most countries depends on the
efforts of a few individuals. Acquisition planning, corpus planning, and status planning
are very closely related. With respect to sign languages, in most cases, there is no
systematic plan in relation to these three types of planning. While there is not one plan
that suits all situations, there are still some general guidelines that can be followed:

⫺ Describe the state of affairs with respect to status planning, corpus planning, and
acquisition planning in your country.
⫺ Identify the stakeholders and their specific interest in relation to sign language; for
example: sign language users (Deaf community, but also hard of hearing people
who use a form of sign-supported speech), educationalists, care workers, research-
ers, parents of deaf children, hearing sign language learners, interpreters, govern-
ment, etc.
⫺ Identify the needs and goals of each of the stakeholders that need to be achieved
for each of the types of planning and make a priority list.
⫺ Identify the steps that need to be taken, the people who need to be involved and
who need to take responsibility, estimate the funding that is necessary, and provide
a timetable.

Acquisition planning is crucial for the development and survival of sign languages and
should be taken more seriously by sign language users, researchers, and governments
than has been done to date. It is time for a National Sign Language Academy in each
country, whose tasks should include the preservation of the language, the protection
of the rights of the language users, and the promotion of the language by developing
adequate teaching materials.

8. Conclusion

In this chapter, three aspects of language planning have been described for sign lan-
guages: status planning, corpus planning, and acquisition planning. Within status plan-
ning, in most countries the focus of attention is usually on the legal recognition of the
national sign language. As of 2011, only 42 countries had legally recognised a
national sign language in one way or another. Even though legal recognition may imply
some form of protection for sign language users, it does not solve all problems. As
more and more linguists point out, sign languages are endangered languages. Ironically,
now that sign languages are finally taken seriously by linguists and hearing societies,
their time is almost up as a consequence of the medical perspective on deafness and
rapid technological development. Languages only exist within language communities,
but the existence of signing communities is presently at risk for several reasons, the
main one being the decreasing number of native deaf signers around the world. This
decrease is a consequence of reduced or no sign language use with deaf children who
received a cochlear implant at a very young age and, more generally, of the fact that
deaf communities are increasingly heterogeneous.
With respect to corpus planning, we have discussed standardisation and lexical mod-
ernisation. Standardisation of languages in general is a controversial issue. There are
only a few examples of efforts to standardise a sign language. By the same token, one has to be aware that any form of codification of a language implies some form of standardisation, even if unintentionally. The process of standardisation of the
NGT lexicon has been discussed as an example of a specific form of standardisation,
based on thorough knowledge of the lexical variation existing in the language.
Finally, in order to strengthen the position of sign languages around the world, it is
necessary to work closely together with the Deaf community, other users of sign lan-
guage, and researchers ⫺ within different countries and globally ⫺ in an attempt to
draft an acquisition plan, to provide language learners with adequate teaching materi-
als, and to describe and preserve the native languages of deaf people.

9. Literature
Ahlgren, Inger/Bergman, Brita (eds.)
1980 Papers from the First International Symposium on Sign Language Research. June 10⫺
16, 1979. Leksand, Sweden: Sveriges Dövas Riksförbund.
Baker, Charlotte/Cokely, Dennis
1980 American Sign Language. A Teacher’s Resource Text on Grammar and Culture. Silver
Spring, MD: T.J. Publishers.
Bergman, Brita/Nilsson, Anna-Lena/Wallin, Lars/Björkstrand, Thomas
2011 The Swedish Sign Language Corpus: www.ling.su.se.
Boyes Braem, Penny
2001 A Multimedia Bilingual Database for the Lexicon of Swiss German Sign Language.
In: Bergman, Brita/Boyes Braem, Penny/Hanke, Thomas/Pizzuto, Elena (eds.), Sign
Transcription and Database Storage of Sign Information (Special issue of Sign Lan-
guage & Linguistics 4(1/2)), 241⫺250.
Branson, Jan/Miller, Don/Marsaja, I Gede
1996 Everyone Here Speaks Sign Language, too: A Deaf Village in Bali, Indonesia. In:
Lucas, Ceil (ed.), Multicultural Aspects of Sociolinguistics in Deaf Communities. Wash-
ington, DC: Gallaudet University Press, 39⫺61.
Brien, David (ed.)
1992 Dictionary of British Sign Language/English. London: Faber and Faber.
Centre for Sign Language and Sign Supported Speech KC
2008 Ordbog over Dansk Tegnsprog. Online dictionary: www.tegnsprog.dk.
Conrad, Richard
1979 The Deaf Schoolchild: Language and Cognitive Function. London: Harper and Row.
Cooper, Robert
1989 Language Planning and Social Change. Bloomington, IN: Indiana University Press.
Crasborn, Onno/Zwitserlood, Inge/Ros, Johan
2008 Sign Language of the Netherlands (NGT) Corpus: www.ngtcorpus.nl.
Crystal, David
1995 The Cambridge Encyclopedia of the English Language. Cambridge: Cambridge Univer-
sity Press.
Deumert, Ana
2001 Language Planning and Policy. In: Mesthrie, Rajend/Swann, Joan/Deumert, Andrea/
Leap, William (eds.), Introducing Sociolinguistics. Edinburgh: Edinburgh University
Press, 384⫺419.
Fishman, Joshua A.
1983 Modeling Rationales in Corpus Planning: Modernity and Tradition in Images of the
Good Corpus. In: Cobarrubias, Juan/Fishman, Joshua A. (eds.), Progress in Language
Planning: International Perspectives. Berlin: Mouton, 107⫺118.
Gébert, Alain/Adone, Dany
2006 A Dictionary and Grammar of Mauritian Sign Language, Vol. 1. Vacoas, République
de Maurice: Editions Le Printemps.
Groce, Nora
1985 Everyone Here Spoke Sign Language: Hereditary Deafness on Martha’s Vineyard. Cam-
bridge, MA: Harvard University Press.
Haugen, Einar
1968 Language Planning in Modern Norway. In: Fishman, Joshua A. (ed.), Readings in the
Sociology of Language. The Hague: Mouton, 673⫺687.
Johnston, Trevor
1989 Auslan Dictionary: A Dictionary of Australian Sign Language (Auslan). Adelaide:
TAFE National Centre for Research and Development.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language. An Introduction to Sign Linguistics. Cambridge: Cambridge
University Press.
Klima, Edward/Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Krausneker, Verena
2003 Has Something Changed? Sign Languages in Europe: The Case of Minorised Minority
Languages. In: Deaf Worlds 19(2), 33⫺48.
Krausneker, Verena
2008 The Protection and Promotion of Sign Languages and the Rights of Their Users in the
Council of Europe Member States: Needs Analysis. Integration of People with Disabili-
ties Division, Social Policy Department, Directorate General of Social Cohesion, Coun-
cil of Europe [http://www.coe.int/t/DG3/Disability/Source/Report_Sign_languages_
final.pdf].
Ladd, Paddy
2003 Understanding Deaf Culture. In Search of Deafhood. Clevedon: Multilingual Matters
Ltd.
Lane, Harlan
2002 Do Deaf People Have a Disability? In: Sign Language Studies 2(4), 356⫺379.
Meir, Irit/Sandler, Wendy
2008 A Language in Space. The Story of Israeli Sign Language. New York, NY: Lawrence Erl-
baum.
Monaghan, Leila
2003 A World’s Eye View: Deaf Cultures in Global Perspective. In: Monaghan, Leila/Schma-
ling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf. Inter-
national Variation in Deaf Communities. Washington, DC: Gallaudet University Press,
1⫺24.
Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.)
2003 Many Ways to Be Deaf. International Variation in Deaf Communities. Washington, DC:
Gallaudet University Press.
Nakamura, Karen
2011 The Language Politics of Japanese Sign Language (Nihon Shuwa). In: Napoli, Donna
Jo/Mathur, Gaurav (eds.), Deaf Around the World. The Impact of Language. Oxford:
Oxford University Press, 316⫺332.
Nover, Stephen
2000 History of Language Planning in Deaf Education: The 19th Century. PhD Dissertation,
University of Arizona.
Nyst, Victoria
2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, Uni-
versity of Amsterdam. Utrecht: LOT.
Orwell, George
1946 Politics and the English Language. London: Horizon.
Papaspyrou, Chrissostomos/Meyenn, Alexander von/Matthaei, Michaela/Herrmann, Bettina
2008 Grammatik der Deutschen Gebärdensprache aus der Sicht gehörloser Fachleute. Ham-
burg: Signum.
Reagan, Timothy
2001 Language Planning and Policy. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Lan-
guages. Cambridge: Cambridge University Press, 145⫺180.
Schembri, Adam/Fenlon, Jordan/Stamp, Rose/Rentelis, Ramas
2009 British Sign Language Corpus Project: Documenting and Describing Variation and
Change in BSL. Paper Presented at the Workshop Sign Language Corpora: Linguistic
Issues, University College London.
Schermer, Trude
1990 In Search of a Language. PhD Dissertation, University of Amsterdam. Delft: Eburon
Publishers.
Schermer, Trude
2003 From Variant to Standard: An Overview of the Standardisation Process of the Lexicon of Sign Language of the Netherlands Over Two Decades. In: Sign Language Studies 3(4), 469⫺487.
Schermer, Trude/Fortgens, Connie/Harder, Rita/Nobel, Esther de (eds.)
1991 De Nederlandse Gebarentaal. Deventer: Van Tricht.
Schermer, Trude/Geuze, Jacobien/Koolhof, Corline/Meijer, Elly/Muller, Sarah
2006 Standaard Lexicon Nederlandse Gebarentaal, Deel 1 & 2 (DVD-Rom). Bunnik: Neder-
lands Gebarencentrum.
Schermer, Trude/Harder, Rita/Bos, Heleen
1988 Handen uit de Mouwen: Gebaren uit de Nederlandse Gebarentaal in Kaart Gebracht.
Amsterdam: NSDSK/Dovenraad.
Schermer, Trude/Koolhof, Corline (eds.)
2009 Van Dale Basiswoordenboek Nederlandse Gebarentaal. Utrecht: Van Dale. [www.
gebarencentrum.nl]
Stokoe, William C.
1960 Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. In: Studies in Linguistics Occasional Papers 8. Buffalo: University of
Buffalo Press [Re-issued 2005, Journal of Deaf Studies and Deaf Education 10(1), 3⫺
37].
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language. An Introduction. Cambridge: Cambridge Uni-
versity Press.
Turner, Graham
2004 To the People: Empowerment through Engagement in Sign Sociolinguistics and Lan-
guage Planning. In: Theoretical Issues in Sign Language Research (TISLR 8), Barcelona,
Sept. 30⫺Oct. 2, 2004. Abstract booklet, 180⫺181.
UNESCO
1996 Universal Declaration of Linguistic Rights. Barcelona, June 9th 1996. [Available at:
www.linguistic-declaration.org/decl-gb.htm].
Wheatley, Mark/Pabsch, Annika
2010 Sign Language Legislation in the European Union. Brussels: EUD.
Woodbury, Anthony
2009 What Is an Endangered Language? Linguistic Society of America (LSA) publication:
www.lsadc.org/info/pdf_files/Endangered_Languages.pdf.

Trude Schermer, Bunnik (The Netherlands)


VIII. Applied issues

38. History of sign languages and sign language linguistics
1. Introduction
2. Sign languages: some initial considerations
3. Early perceptions of sign languages
4. The development of deaf education and scholarly interest in sign languages
5. The emergence of deaf communities, sign languages, and deaf schools
6. The rise of oralism in the late 19th century
7. Sign language linguistics: a discipline is born
8. The establishment of the discipline
9. Historical relationships between sign languages
10. Trends in the field
11. Conclusion
12. Literature

Abstract
While deaf individuals have used signs to communicate for centuries, it is only relatively
recently (around the time of the Industrial Revolution) that communities of deaf people
have come together and natural sign languages have emerged. Public schools for the
deaf, established first in France and eventually across much of Europe and North
America, provided an environment in which sign languages could flourish. Following a
clear shift toward oralism in the late 19th century, however, sign languages were viewed
by many as crude systems of gestures and signing was banned in most schools. Sign
languages continued to thrive outside the classrooms and in deaf communities and clubs,
however, and by the mid 20th century oralism began to wane. While scholarly interest in
sign languages dates back to the Enlightenment, modern linguistic research began only
in 1960. In the years since, the discipline of sign language linguistics has grown consider-
ably, with research on over one hundred sign languages being conducted around the
globe. Although the genetic relationships between the world’s sign languages have not
been thoroughly researched, we know that historical links have in many cases resulted
from migration, world politics, and the export of educational systems.

1. Introduction
Natural sign languages, the complex visual-gestural communication systems used by
communities of deaf people around the world, have a unique and fascinating history.
While deaf people have almost certainly been part of human history since the begin-
ning, and have likely always used gesture and signs to communicate, until fairly recent
times (the past three centuries), most deaf people lived in isolation in villages and
towns. Where the incidence of deafness was great enough, village sign languages may
have developed; otherwise, most deaf people used homesigns to communicate. It
is only relatively recently, within the last 300 years or so, that deaf people have come
together in great enough numbers for deaf communities to emerge and full natural
sign languages to develop.
Deaf communities began to emerge in Europe during the Industrial Revolution of
the late 18th and early 19th centuries. With the transformation of traditional agricultural
economies into manufacturing economies, large numbers of people relocated from ru-
ral areas to the towns and cities where manufacturing centers were located, and for
the first time greater numbers of deaf people were brought together. The related devel-
opment that contributed most directly to the formation of deaf communities and the
emergence and development of natural sign languages was the establishment of schools
for deaf children around Europe, beginning in the mid 18th century. It was in the
context of deaf communities and deaf schools that modern sign languages emerged.
This chapter presents an overview of the history of sign languages as well as the
development of the discipline of sign language linguistics. Section 2 provides general
background information on sign languages, and in section 3, early perceptions of sign
languages are discussed. Section 4 traces the development of deaf education and early
scholarly interest in sign languages that took place in 16th century Spain and Britain.
The emergence of deaf communities, full natural sign languages, and public schools for
the deaf are examined in section 5. Section 6 covers the rise of oralism in the late 19th
century and the resulting educational shift that took place. Sections 7 and 8 chronicle
the birth, development, and establishment of the discipline of sign language linguistics.
The historical relationships between sign languages are examined in section 9, and
section 10 reviews some overall trends in the field of sign language linguistics.

2. Sign languages: some initial considerations

As is the case with all human languages, sign languages are the product of human
instinct, culture, and interaction; whenever deaf individuals come together in great enough numbers, a natural sign language will emerge. Sign languages are, however, unique among
human languages in several respects. Most obvious, and the difference that sets sign
languages apart as a clearly delineated subset of human languages, is the mode of
transmission ⫺ sign languages are visual-gestural as opposed to aural-oral. Indeed,
there are a number of distinctions between the two language modalities, distinctions
that may underlie some of the linguistic differences that have been noted between
signed and spoken languages (see Meier (2002); also see chapter 25, Language and
Modality, for details).
Because so few deaf children are born into deaf signing families (estimates range
between 2 and 10 percent; see Mitchell/Karchmer 2004), very few deaf individuals
acquire sign language in a manner similar to the way most hearing individuals acquire
spoken language ⫺ as a first language, in the home, from parents and siblings who are
fluent in the language. Historically, most deaf people were largely isolated from each
other, and used simple homesigns and gestures to communicate with family and friends
(see Frishberg (1987) for a framework for identifying and describing homesign systems;
see Goldin-Meadow (2003) for a thorough discussion of gesture creation in deaf chil-
dren and the resilience of the language learning process; see Stone/Woll (2008) for a
look at homesigning in 18th and 19th century Britain; also see chapter 26). There have
been some exceptions to this, however, in a few, mostly remote, communities where
the incidence of hereditary deafness among an isolated population is high enough such
that an indigenous sign language emerged and was used alongside the spoken language
(see chapter 24, Shared Sign Languages, for details).
While sign languages have developed as minority languages nested within spoken
language environments, individual sign languages have complex grammatical structures
that are quite distinct from those found in the majority spoken language surrounding
them. Although most deaf people do not have access to majority languages in their
primary (spoken) form, in literate societies they do have access to, and indeed are
surrounded by, a secondary form of the majority language ⫺ print. Any time two
languages coexist in this manner there are bound to be cross-linguistic influences at
work, and this is definitely the case with sign languages (see chapter 35, Language
Contact and Borrowing).
With so few deaf children born into deaf families, most users of sign languages are
non-native, having been exposed to sign language as older children or, not infrequently,
adults. As a result, deaf social clubs and educational institutions (in particular residen-
tial schools for deaf children) have played, and continue to play, a major role in the
transmission of culture and language within deaf communities around the world. Over
the years, and in most countries where programs of deaf education have been estab-
lished, hearing educators have developed manual codes to represent aspects of the
majority spoken language (usually grammatical aspects). Also referred to as manually
coded languages (MCLs), these artificial sign systems (Signed German and Signed
Japanese, for example) usually adopt the word order of the spoken language but incor-
porate the lexical signs of the native sign language. Because language contact is so
pervasive in most deaf schools and communities, the various forms of language that
are used have been analyzed as comprising a sign language continuum; in the case of
the United States, American Sign Language (ASL), with a grammar distinct from spo-
ken English, would be at one end, and Signed English at the other. The middle region
of this continuum, often referred to as contact signing, exhibits features of both lan-
guages (Lucas/Valli 1989, 1992).
Throughout history, the culture and language of deaf people have been strongly
influenced by, indeed some would argue at times defined by, members of another cul-
ture ⫺ hearing people. A complex relationship exists between members of deaf com-
munities and the individuals who have historically tried to “help” them, in particular
experts in the scientific, medical, and educational establishments (see Lane 1992; Lane/
Hoffmeister/Bahan 1996; Ladd 2003). At various points and to varying degrees
throughout history, sign languages have been rejected by the larger hearing society
and, as a result, communities of deaf people have been forced to take their language
underground. Indeed, some have argued that the goal of many educational policies
and practices has been to prevent deaf people from learning or using sign languages
to communicate (the ‘oralism’ movement, see section 6). This sociolinguistic context,
one laden with discrimination and linguistic oppression, has without question had an
impact on the emergence and use of sign languages around the world.

3. Early perceptions of sign languages


Although we have only very limited historical accounts upon which to rely, knowledge
of sign language use among deaf people dates back at least 2,000 years in Western
civilizations. One of the earliest mentions of sign language surfaces in a series of Egyp-
tian texts dating to approximately 1200 BC. In a section of warnings to the idle scribe,
a magistrate admonishes, “Thou art one who is deaf and does not hear, to whom men
make (signs) with the hand” (Gardiner 1911, 39, in Miles 2005). Also among the earli-
est written records of sign language and deaf people are the statements of Socrates in
Plato’s dialogue Cratylus, which dates back to the 4th century BC: “And here I will ask
you a question: Suppose that we had no voice or tongue, and wanted to communicate
with one another, should we not, like the deaf and dumb, make signs with the hands
and head and the rest of the body?” (Plato, in Jowett 1931, 368). Dating to the late
second century AD, a discussion of the legal status of signing can be found in the
Mishnah, a collection of Jewish oral law: “A deaf-mute may communicate by signs and
be communicated with by signs.” (Gittin 5:7, in Danby 1933, 313).
Much of what we know of deafness during pre-Renaissance times has been gleaned
from the theological literature. Among his writings from the 4th century, St. Augustine
discusses gestures and signs as an alternative to spoken language for the communica-
tion of ideas. He notes that deaf people “signify by gesture and without the use of
words, not only things which can be seen, but also many others and almost everything
that we say” (St. Augustine, in Oates 1948, 377).
Recent historical research has confirmed that a number of deaf people (as many as
200 at one time) worked in the Turkish Ottoman court during the 15th through 20th
centuries. Their sign language was valued, often used by hearing members of the court
(including many sultans), and was recognized as being capable of expressing a full
range of ideas (Miles 2000).
While these early references reveal that sign languages were considered by some to
be appropriate communication systems for deaf people, an alternate view was held by
many; namely, that signing was inferior and that knowledge, as well as spiritual salva-
tion, could only be gained through the spoken word. This perception dates back to the
4th century BC and the writings of the Greek philosopher Aristotle who, in his treatise
On Sensation and the Sensible and other works, suggested that the sense of hearing
was essential for the development of intelligence and reason (Aristotle, in Hammond
1902). It was assumed that sound was the basis of language and, by extension, thought.
While Aristotle never asserted that deaf people could not be educated, his writings
came to be interpreted as characterizing deaf individuals as “senseless and incapable
of reason,” and “no better than the animals of the forest and unteachable” (Hodgson
1954, 62). These sentiments formed the early perceptions of sign language and the
educability of the deaf, and lived on in the minds of many for hundreds of years.

4. The development of deaf education and scholarly interest


in sign languages
It was not until the 16th century that the Aristotelian dogma concerning the status of
deafness began to be challenged in Western Europe. The Italian physician and mathe-
matician Gerolamo Cardano, the father of a deaf son, recognized that deafness did not
preclude learning and education; on the contrary, he argued that deaf people could
learn to read and write, and that human thoughts could be manifest either through
spoken words or manual gestures (see Radutzky 1993; Bender 1960). At roughly the
same time, the earliest efforts to educate deaf people emerged in Spain, where a hand-
ful of wealthy Spanish families were able to hire private tutors for their deaf children.
Around the mid 16th century, the Benedictine monk Pedro Ponce de León undertook
the education of two deaf brothers, Francisco and Pedro de Velasco. Widely cited as
the first teacher of deaf children, Ponce de León initiated a school for deaf children
within the monastery at Oña. While the prevailing view in Spain held that deaf children
were uneducable and could not be taught to speak, de León was successful in teaching
his students to talk ⫺ an intellectual breakthrough that brought him considerable fame
(Plann 1997). Records indicate that de León taught nearly two dozen students over
the course of his time at the monastery, utilizing a method that included writing, a
manual alphabet, and also signs ⫺ both the Benedictine signs that had been used by
monks who had taken a vow of silence (see chapter 23, Manual Communication Sys-
tems: Evolution and Variation, for details), as well as the “homesigns” that the de
Velasco brothers had developed while living at home with their two deaf sisters (Plann
1993). The manual alphabet used by de León was likely the same set of standardized
handshapes used by the Franciscan monk Melchor de Yebra and eventually published
in 1593 (for a thorough discussion of the role that Benedictines played in the history
of deaf education, see Daniels 1997). Many of the manual alphabets currently used in
sign languages around the world are descendants of this one-handed alphabet.
In the 17th century, though still available only to the privileged class, deaf education
in Spain moved out of the monastery (Plann 1997). With this move came a change of
methodology; methods originally developed for teaching hearing children were em-
ployed with deaf children, with a focus on phonics as a tool to teach speech and read-
ing. During this time, Manuel Ramírez de Carrión served as a private tutor to Luis de
Velasco, the deaf son of a Spanish constable. De Carrión likely borrowed heavily from
the methods of Pedro Ponce de León, though he was quite secretive about his instruc-
tional techniques.
In 1620, the Spanish priest Juan Pablo Bonet published an influential book, Reduc-
ción de las letras y arte para enseñar a hablar a los mudos (“Summary of the letters
and the art of teaching speech to the mute”). The book lays out a method for educating
deaf children that focuses on the teaching of speech (reading and writing were taught
as a precursor to speech), and as such constitutes the first written presentation of the
tenets of oralism. While Bonet himself had little direct experience teaching deaf chil-
dren, he had served as secretary to the head of the Velasco household during the time
de Carrión was employed there, and thus the methods Bonet presents as his own were
likely those of de Carrión (Plann 1997). Nevertheless, Bonet’s book, which includes a
reproduction of de Yebra’s fingerspelling chart, was pivotal to the development of deaf
education and is often referred to as its literary foundation (Daniels 1997).
In Britain, the physician and philosopher John Bulwer was the first English writer
to publish on the language and education of the deaf (Woll 1987). Bulwer’s Chirologia,
or the Natural Language of the Hand (1644), is a study of natural language and gestures
and provides early insight into the sign language of deaf people in 17th century Britain,
including an early manual alphabet. Dedicated to two deaf brothers, Bulwer’s Philoco-
phus (1648) is based largely on Sir Kenelm Digby’s (1644) account of the deaf educa-
tion efforts in Spain, but also lays out Bulwer’s (apparently unfulfilled) plans to start
a school for deaf children in England (Dekessel 1992, 1993).
The education of deaf children had sparked the interest of some of Bulwer’s con-
temporaries as well, among them the British mathematician John Wallis, who served
as a tutor to at least two young deaf children in the 1650s and laid the groundwork for
the development of deaf education in Britain. Records of his teaching techniques indi-
cate that he used, among other things, the sign language and two-handed manual alpha-
bet that were used by deaf people of that time (Branson/Miller 2002). In 1680, Scottish
intellectual George Dalgarno published Didascalocophus; or, the Deaf and Dumbe
Man’s Tutor, a book on the education of deaf children in which he explains in greater
detail the two-handed manual alphabet and advocates for its use in instruction and
communication. While Dalgarno’s alphabet was not widely adopted, the direct ancestor
of the modern British two-handed alphabet first appeared in an anonymous 1698 publi-
cation, Digiti-lingua (Kyle/Woll 1985).
Outside the realm of education, deaf people and sign languages were increasingly
the subject of cultural fascination, and by the early 18th century had emerged as a
compelling focus for philosophical study during the Age of Enlightenment. Among
scholars of the day, sign languages were considered an important and legitimate object
of study because of the insight they provided into the nature and origin of human
language as well as the nature of the relationship between thought and language (see
Kendon 2002). Italian philosopher Giambattista Vico argued that language started with
gestures that had “natural relations” with ideas, and in his view sign languages of deaf
people were important because they showed how a language could be expressed with
“natural significations”. Following this line of thinking, French philosopher Denis Di-
derot, in his Lettre sur les sourds et muets (1751, in Meyer 1965), posited that the study
of natural sign languages of deaf people, which he believed were free from the struc-
tures of conventional language, might bring about a deeper understanding of the natu-
ral progression of thought. On the origin of language, French philosopher Étienne
Bonnot de Condillac explored the idea that human language began with the reciproca-
tion of overt actions, and that the first forms of language were thus rooted in action
or gesture (see chapter 23 for further discussion).

5. The emergence of deaf communities, sign languages, and deaf schools

5.1. Europe

In the years before the Industrial Revolution, most deaf people were scattered across
villages and the homesigns used for communication within families and small commu-
nities were likely highly varied (see chapter 26 for discussion of homesign). But with
the onset of the Industrial Revolution, as large numbers of people migrated into towns
and cities across Europe, communities of deaf people came together and natural, more
standardized, sign languages began to emerge. The first published account of a deaf
community and natural sign language was authored by a deaf Frenchman, Pierre Des-
loges. In his 1779 book, Observations d’un sourd et muèt, sur un cours elémentaire
d’education des sourds et muèts (“A deaf person’s observations about an elementary
course of education for the deaf”), Desloges writes about the sign language used
amongst the community of deaf people that had emerged in Paris by the end of the
18th century (Fischer 2002), now referred to as Old French Sign Language (Old LSF).
Desloges’ book was written to defend sign language against the false charges previously
published by Abbé Deschamps, a disciple of Jacob Pereire, an early and vocal oralist.
Deschamps believed in oral instruction of deaf children, and advocated for the exclu-
sion of sign language, which he denigrated as limited and ambiguous (Lane 1984).
Perhaps because of the unusually vibrant scholarly and philosophical traditions that
were established in the age of the French Enlightenment, and in particular the focus
on language as a framework for exploring the structure of thought and knowledge, the
French deaf community and emerging natural sign language were fairly well docu-
mented when compared to other communities across Europe. Nevertheless, despite a
paucity of historical accounts, we know that throughout the 18th and 19th centuries,
deaf communities took root and independent sign languages began to evolve across
Europe and North America.
One of the most important developments that led to the growth of natural sign
languages was the establishment of public schools for deaf children, where deaf chil-
dren were brought together and sign language was allowed to flourish. The first public
school for deaf children was founded by the Abbé Charles-Michel de l’Epée in Paris,
France in the early 1760s. In his charitable work with the poor, de l’Epée had come
across two young deaf sisters who communicated through sign (possibly the Old LSF
used in Paris at that time). When asked by their mother to serve as the sisters’ teacher,
de l’Epée agreed and thus began his life’s work of educating deaf children. While de
l’Epée is often cited as the “inventor” of sign language, he in fact learned natural sign
language from the sisters and then, believing he needed to augment their signing with
“grammar”, developed a structured method of teaching French language through signs.
De l’Epée’s Institution des sourds et muets, par la voie des signes méthodiques (1776)
outlines his method for teaching deaf children through the use of “methodical signs”,
manual gestures that represented specific aspects of French grammar. This method
relied, for the most part, on manual signs that were either adapted from natural signs
used within the Paris deaf community or invented (and therefore served as a form of
“manually coded French”). In the years following the establishment of the Paris school,
schools for deaf children were opened in locations throughout France, and eventually
the French manual method spread across parts of Europe. Following de l’Epée’s death
in 1789, Abbé Roch-Ambroise Sicard, a student of de l’Epée’s who had served as
principal of the deaf school in Bordeaux, took over the Paris school. De l’Epée and
his followers clearly understood the value of manual communication as an educational
tool and took a serious interest in the “natural language” (i.e. LSF) used among deaf
people (see Seigel 1969).
One of de l’Epée’s followers, Auguste Bébian, was an avid supporter of the use of
LSF in the classroom. Bébian’s most influential work, the one that relates most directly
to modern linguistic analyses of sign languages, is his 1825 Mimographie: Essai d’écri-
ture mimique, propre à régulariser le langage des sourds-muets. In this work, Bébian
introduces a sign notation system that is based on three cherological (phonological)
aspects: the movement, the “instruments du geste” (the means of articulation), and the
“points physionomiques” (applying mainly to aspects of facial expression) (Fischer
1995). Using this notation system, Bébian presents a dictionary of signs that is orga-
nized in such a way as to facilitate independent learning on the part of the students.
In addition to serving as an educational tool, the Mimographie served as a way of
standardizing and recording LSF signs, which Bébian hoped would lead to further
development and serious study of the language itself (Fischer 1993). Although the
primary focus of Bébian’s work was not exclusively linguistic, his Mimographie is an
early and significant contribution to the linguistic study of sign languages in that his
notation system represented a phonological approach to the analysis of signs.
A contrasting approach to educating deaf children was adopted in Germany by
Samuel Heinicke, considered by many to be the father of oral deaf education (though
Lane (1984) suggests that Heinicke was a follower of the Swiss physician Johann Konrad Amann, a staunch oralist himself). Unlike de l’Epée and his colleagues in France,
Heinicke’s primary concern was the integration of deaf children into hearing society,
a goal best accomplished, he believed, by prohibiting the use of manual signs and
instead focusing exclusively on speech and speech reading. These two distinct educa-
tional philosophies (manualism and oralism) spread throughout Europe in the 18th and
19th centuries; the manual approach of de l’Epée took hold in areas of Spain, Portugal,
Italy, Austria, Denmark, Sweden, French Switzerland, and Russia, while the oral ap-
proach of Heinicke was adopted by educators throughout Germany and eventually in
many German-speaking countries, as well as in parts of Scandinavia and Italy.
A third, “mixed” method of teaching deaf children eventually arose in Austria. The
first Austrian school for deaf children was founded in Vienna in 1779 after a visit by
Emperor Josef II to the Paris school. In the years that followed, “daughter institutions”
were founded in cities across the Austro-Hungarian Empire. While the Viennese Insti-
tute initially employed manual methods of educating deaf students, a mixed method
was eventually developed, whereby written language, signs, and the manual alphabet
were used to teach spoken language (Dotter/Okorn 2003).
A combined system of teaching deaf children was also developed and took hold in
Great Britain, beginning in 1760 when Thomas Braidwood began teaching deaf stu-
dents in Scotland (though not a public school, Braidwood’s was the first school for deaf children in Britain, serving children of the wealthy). Braidwood’s approach focused on
the development of articulation and the mastery of English, and while Braidwood’s
method has often been misrepresented as strictly oral, the original English system
utilized both speech and sign. Braidwood moved to London in 1783, where he opened
another private academy for teaching deaf children, and then in 1792 the first public
school for deaf children opened in London (Braidwood’s nephew, Joseph Watson,
served as school head). In the coming decades, additional Braidwood family members
headed schools that were opened in Edinburgh and Birmingham, again with the focus
on developing articulation. Throughout the first half of the 19th century, most major
cities in Britain opened schools for deaf children ⫺ a total of 22 schools by 1870 (see
Kyle/Woll 1985).

5.2. North America


In the United States, one of the earliest attempts to organize a school for deaf children was initiated by the Bolling family of Virginia, a family in which congenital deafness recurred across generations (Van Cleve/Crouch 1989). While the first generation of deaf Bolling children received their schooling at the Braidwood Academy in Edinburgh, Scotland, the hearing father of the second generation of deaf Bolling children, William Bolling, sought to school his children in America. Bolling reached out to John Braidwood, a grandson of the founder of the Braidwood Academy and former head of the family school in England, who in 1812 had arrived from England with plans to open a school for deaf children in Baltimore, Maryland. Though Bolling suggested Braidwood start his endeavor by living with his family and tutoring the Bolling children, Braidwood’s ambitions were far grander ⫺ he wanted to make his fortune by opening his own school for deaf children. Though he had hopes of establishing his institution in Baltimore, by the fall of 1812 Braidwood had landed in a New York City jail, deeply in debt (likely the result of drinking and gambling, with which he struggled until his death). Braidwood asked Bolling for help, and from late 1812 to 1815 he lived with the Bolling family on their Virginia plantation, named Cobbs, where he tutored the deaf children. In March of 1815, Braidwood finally opened the first school for deaf children in America, in which at least five students were enrolled. The school was short-lived, closing in the fall of 1816 when Braidwood, again battling personal problems, disappeared from Cobbs.
1815 was also the year that the American minister Thomas Hopkins Gallaudet trav-
eled from Hartford, Connecticut to Europe in order to learn about current European
methods for educating deaf children. Gallaudet’s venture had been prompted by his
associations with a deaf neighbor girl, Alice Cogswell, and was underwritten largely
by her father, Dr. Mason Cogswell. From 1814 to 1817 Alice Cogswell attended a small
local school where her teacher, Lydia Huntley, utilized visual communication to teach
Alice to read and write alongside hearing pupils (Sayers/Gates 2008). But during these
years, Mason Cogswell, an influential philanthropist, continued working toward estab-
lishing a school for deaf children in America, raising money and ultimately sending
Gallaudet to Europe. Gallaudet’s first stop was Britain, where he visited the Braid-
wood School in London. The training of teachers of the deaf was taken very seriously
in Britain, and Braidwood’s nephew Joseph Watson insisted that Gallaudet commit to
a several-year apprenticeship and vow to keep the Braidwoodian techniques secret, an
offer Gallaudet declined. While at the London school, however, Gallaudet met de
l’Epée’s successor, Abbé Sicard, who, along with some former students, was in London
giving lecture-demonstrations on the French method of educating the deaf. Deeply
impressed by the demonstrations of two accomplished former students, Jean Massieu
and Laurent Clerc, Gallaudet accepted Sicard’s invitation to visit the Paris school (Van
Cleve/Crouch 1989). In early 1816, Gallaudet traveled to Paris and spent roughly three
months at the school, observing and learning the manual techniques used there. In
mid-June of 1816, Gallaudet returned by ship to America, accompanied by Laurent
Clerc, the brilliant former student and then teacher from the Paris school, who had
agreed to journey back home with Gallaudet. As the story goes, the journey back to
America provided the opportunity for Gallaudet to learn LSF from Clerc (Lane 1984).
Together with Mason Cogswell, Gallaudet and Clerc established in 1817 the Connecti-
cut Asylum for the Education and Instruction of Deaf and Dumb Persons, the first
permanent school for the deaf in America (as noted above, a school had opened in Virginia in 1815, but it closed after little more than a year). Located in Hartford, the Connecticut
Asylum is now named the American School for the Deaf and is often referred to
simply as “the Hartford School”.
Clerc’s LSF, as well as certain aspects of the methodical signing used by de l’Epée
and his followers, was introduced into the school’s curriculum, where it mingled with
the natural sign of the deaf students and eventually evolved into what is now known
as American Sign Language (ASL) (see Woodward (1978a) for a discussion of the
historical bases of ASL). With Gallaudet and Clerc at the helm, the new American
School was strictly manual in its approach to communication with its students; indeed,
speech and speech reading were not formally taught at the school (Van Cleve/
Crouch 1989).
Most of the students at the Hartford School (just seven the first year, but this
number would grow considerably) were from the surrounding New England cities and
rural areas and likely brought with them homesigns as a means of communication.
However, a number of deaf students came to the school from Martha’s Vineyard, the
island off the coast of Massachusetts that had an unusually high incidence of hearing
loss amongst the population. This unique early deaf community, centered in the small
town of Chilmark, flourished from the end of the 17th century into the early 20th cen-
tury. Nearly all the inhabitants of this community, both deaf and hearing, used an
indigenous sign language, Martha’s Vineyard Sign Language (MVSL) (Groce 1985). It
is quite likely that MVSL had an influence on the development of ASL.
In the years following the establishment of the Hartford school, dozens of residen-
tial schools for deaf children, each serving a wide geographic area, were founded in
states across eastern and middle America. The first two that followed, schools in New
York and Philadelphia, experienced difficulties that led them to turn to the Hartford
School for help. As a result, these first three residential schools shared common educa-
tional philosophies, curricula, and teacher-training methods; teachers and principals
were moved between schools, and in time these three schools, with the Hartford School
at the helm, provided the leadership for deaf education in America (Moores 1987).
Many graduates of these schools went on to teach or administrate at other schools for
deaf children that were being established around the country.
While there were advocates of oral education in America (Alexander Graham Bell was perhaps the most prominent, along with Samuel Gridley Howe and Horace Mann), the manual approach was widely adopted, and ASL flourished in nearly all
American schools for deaf children during the first half of the 19th century. American
manualists took great interest in the sign language of the deaf, which was viewed as
ancient and noble, a potential key to universal communication, and powerfully “natu-
ral” in two respects: first, many signs were felt to retain strong ties with their iconic origins; and second, sign language was considered the original language of humanity and thus closer to God (Baynton 2002). While this perspective was
hardly unique to America (these same notions were widely held among European
philosophers from the 18th century onward), the degree to which it influenced educa-
tional practices in America during the first half of the 19th century was notable.
The mid 1800s also saw the establishment and growth of deaf organizations in cities
around the country, particularly in those towns that were home to deaf schools. During
this time, sign language thrived, deaf communities developed, and the American Deaf
culture began to take shape (Van Cleve/Crouch 1989).
In 1864 the National Deaf-Mute College came into existence, with Edward M. Gal-
laudet (Thomas Hopkins Gallaudet’s son) as its president (the college had grown out
of a school for deaf and blind children originally founded by philanthropist Amos
Kendall). Later renamed Gallaudet College, and eventually Gallaudet University, this
institution quickly became (and remains to this day) the center of the deaf world in
America. With its founding, the college expanded the horizons of residential deaf
school graduates by giving them access to higher education in a time when relatively
few young people, hearing or deaf, were provided the opportunity. The college also
provided a community within which ASL could flourish.
From the start, all instruction at the college was based on ASL; even when oralism
eclipsed manualism in the wake of the education debates of the late 19th century (see
section 6), Gallaudet College remained a bastion of sign language (Van Cleve/Crouch
1989). Because it catered to young adults and hired many deaf professors, Gallaudet
was instrumental in the maintenance of ASL. Gallaudet University was then, and re-
mains today, the main source of deaf leadership in America. On an international scale,
Gallaudet has become the Mecca of the Deaf world (Lane/Hoffmeister/Bahan 1996,
128).

6. The rise of oralism in the late 19th century

The second half of the 19th century saw a marked shift in educational practices, away from manual and combined methods of teaching deaf children and toward oralism. Several factors contributed to this shift, not least of which was
the scientific landscape ⫺ the rise of evolutionary theory in the late 19th century, and
in particular linguistic Darwinism. A strong belief in science and evolution led thinkers
of the day to reject anything that seemed ‘primitive’. Crucially, in this context, sign
language was viewed as an early form of language from which a more advanced, civi-
lized, and indeed superior oral language had evolved (Baynton 1996, 2002). The shift
to oralism that took place throughout the Western world was viewed as an evolutionary
step up, a clear and natural move away from savagery (Branson/Miller 2002, 150).
Additionally, because Germany represented (in the minds of many) science, progress,
and the promise of the future, many educators considered the German oral method
an expression of scientific progress (Facchini 1985, 358).
The latter half of the 19th century also saw the emergence of universal education,
and with this came a critical examination of the private and charity-based systems that
had dominated deaf education. Many early schools for deaf children (including de
l’Epée’s in Paris) were largely mission-oriented, charity institutions that focused on
providing deaf people with education in order to “save” them. It was not uncommon
in both Europe and America for deaf schools to employ former students as instructors;
for example, at mid-century, more than 40 percent of all teachers in American schools for deaf children were deaf (Baynton 1996, 60). This was to change as the professionalization of the
teaching field and a concerted focus on the teaching of speech pushed deaf teachers
out of the classroom.
One additional factor that contributed to the rise of oralism was the shifting political
ideology of nations, in general, toward a focus on assimilation and unification within
individual countries. In Italy, for example, signs were forced out of the schools in an
effort to unify the new nation (Radutzky 1993). Likewise, French politicians worked
to unify the French people and in turn forced deaf people to use the national (spoken)
language (Quartararo 1993). Shifting political landscapes were also at play across the
Atlantic where, during the post-Civil War era, American oralists likened deaf commu-
nities to immigrant communities in need of assimilation; sign language, it was argued,
encouraged deaf people to remain isolated, and was considered a threat to national
unity (Baynton 1996).
The march toward oralism culminated in an event that is central to the history of
deaf education and, by extension, to the history of sign languages ⫺ the International
Congress on the Education of the Deaf, held in Milan, Italy in 1880. While various
European countries and the United States were represented at the convention, the
majority of the 164 participants were from Italy and France, most were ardent support-
ers of oral education, and all but one (American James Denison) were hearing (Van
Cleve/Crouch 1989, 109 f.). With only six people voting against it (the five Americans
in attendance and Briton Richard Elliott), members of the Congress passed a resolu-
tion that formally endorsed pure oralism and called for the rejection of sign languages
in schools for deaf children. In subsequent years, there were marked changes in politi-
cal and public policy concerning deaf education in Europe. The consensus now was to
promote oralism; deaf teachers were fired, and signing in the schools was largely
banned (Lane 1984). In reality, there was continued resistance to pure oralism and
signing continued to be used, at least in some capacity, in some schools, and without
question, sign languages continued to flourish in deaf communities around Europe and
North America (Padden/Humphries 1988; Branson/Miller 2002).
This shift toward exclusively oral education of deaf children likely contributed to
sign languages being considered devoid of value, a sentiment that persisted for most
of the following century. Had the manual tradition of de l’Epée and his followers not
been largely stamped out by the Milan resolution, scholarly interest in sign languages
might have continued, and there might well have been earlier recognition of sign lan-
guages as full-fledged human languages by 20th century linguists.
Historically, within the field of linguistics, the philological approach to language
(whereby spoken languages were considered corrupt forms of written languages) had
given way by this time to a focus on the primacy of spoken language as the core form
of human language. This contributed to a view of sign languages as essentially derived
from spoken languages (Woll 2003). In addition, the Saussurean notion of arbitrariness
(where the link between a linguistic symbol and its referent is arbitrary) seemed to
preclude sign languages, which are rich in iconicity, from the realm of linguistic study
(for discussion of iconicity, see chapter 18).
And so, by the first half of the 20th century, the prevailing view, both in the field of
education and in the field of general linguistics, was that sign languages were little
more than crude systems of gestures. American structuralist Leonard Bloomfield, for
example, characterized sign languages as “merely developments of ordinary gestures”
in which “all complicated or not immediately intelligible gestures are based on the
conventions of ordinary speech” (Bloomfield 1933, 39). The “deaf-and-dumb” lan-
guage was, to Bloomfield, most accurately viewed as a “derivative” of language.

7. Sign language linguistics: a discipline is born


Sign language linguistics is often considered to be a discipline that can trace its starting
point back to the work of one scholar, indeed to one monograph. While scholarly
interest in sign languages predated this period (particularly during the Enlightenment
and into the first half of the 19th century), the inauguration of modern linguistic re-
search on deaf sign languages took place in 1960.

7.1. Sign language research in North America

7.1.1. The pioneering work of William C. Stokoe

The first modern linguistic analysis of a sign language was published in 1960 ⫺ Sign
Language Structure: An Outline of the Visual Communication Systems of the American
Deaf. The author was William C. Stokoe, Jr., a professor of English at Gallaudet Col-
lege, Washington DC, the only college for the deaf in the world. Before arriving at
Gallaudet, Stokoe had been working on various problems in Old and Middle English,
and had come across George Trager and Henry Lee Smith’s An Outline of English
Structure (1951). The procedural methods for linguistic analysis advocated by Trager
and Smith made a lasting impression on Stokoe; as he was learning the basics of sign
language at Gallaudet and, more importantly, watching his deaf students sign to each
other, Stokoe noticed that the signs he was learning lent themselves to analysis along
the lines of minimal pairs (see Stokoe 1979). This initial observation led Stokoe to
explore the possibility that signs were not simply iconic pictures drawn in the air with
the hands, but rather were organized symbols composed of discrete parts. Stokoe spent
the summer of 1957 studying under Trager and Smith at the Linguistic Society of
America sponsored Linguistics Institute held in Buffalo, NY, and then returned to
Gallaudet and began working on a structural linguistic analysis of ASL. In April of
1960, Sign Language Structure appeared in the occasional papers of the journal Studies
in Linguistics, published by the Department of Anthropology and Linguistics at the
University of Buffalo, New York.
Two main contributions emerged from Stokoe’s seminal monograph. First, he pre-
sented an analysis of the internal structure (i.e. the phonology) of individual signs; the
three primary internal constituents identified by Stokoe were the tabula (position of
the sign), the designator (hand configuration), and the signation (movement or change
in configuration). This analysis of the abstract sublexical structure of signs illustrated
that the signs of sign languages were compositional in nature. The second major contri-
bution of the 1960 monograph was the transcription system Stokoe proposed (subse-
quently referred to as “Stokoe notation”). Prior to the publication of Sign Language
Structure, there existed no means of writing or transcribing the language used by mem-
bers of the American deaf community; individual signs had been cataloged in dictionar-
ies through the use of photographs or drawings, often accompanied by written English
descriptions of the gestures. Stokoe notation provided a means of transcribing signs,
and in so doing helped illuminate the internal structure of the language (see chapter
43, Transcription).
In the years directly following the publication of the monograph, Stokoe continued
developing his analysis of ASL. With the help of two deaf colleagues, Carl Croneberg
and Dorothy Casterline, Stokoe published the first dictionary of ASL in 1965, A Dic-
tionary of American Sign Language on Linguistic Principles (DASL). It is an impressive
work, cataloging more than 2000 different lexical items and presenting them according
to the linguistic principles of the language. Stokoe’s two early works formed a solid
base for what was to become a new field of research ⫺ sign language linguistics.
Stokoe’s initial work on the structure of ASL was not, for the most part, well re-
ceived within the general linguistics community (see McBurney 2001). The message
contained within the monograph ⫺ that sign languages are true languages ⫺ stood
counter to the intellectual climate within the field of linguistics at that time. In the
years prior to the publication of Sign Language Structure, language was equated with
speech, and linguistics was defined as the study of the sound symbols underlying speech
behavior. This view of linguistics, and of what constitutes a language, was not easily
changed. Furthermore, Stokoe’s analysis of ASL was nested within a structuralist
framework, a framework that soon fell out of favor. The 1957 publication of Noam
Chomsky’s Syntactic Structures marked the beginning of a new era of linguistic theory,
where the focus shifted from taxonomic description to an explanation of the cognitive
representation of language. Because Chomsky’s emphasis on grammar as a cognitive
capacity was nowhere to be found in the work of Stokoe, it is not entirely surprising
that Stokoe’s monograph received little attention from linguists of the day.
Just as Stokoe’s early work was not well received by linguists, a linguistic analysis
of ASL was not something readily accepted by deaf educators and related profession-
als. Though ASL had been used by students outside the classroom all along, and had
continued to thrive in deaf communities and deaf clubs throughout America, oralism
had been standard practice in most American schools for many years, and Stokoe’s
seemingly obscure and technical analysis of signs did not change that overnight. It was
not until the late 1960s and early 1970s that this began to change, when many American
educators, frustrated by the failure of oral methods, began investigating and consider-
ing the use of signs in the classroom. The result was an eventual shift by educators to
a combined approach where sign and speech were used together (an approach which
previously had been used in many situations where deaf and hearing people needed
to communicate with each other).
In the United States, the 1960s were dominated by issues of civil rights, equality, and
access. This period also saw a shift in the overall perception of deaf people in America;
a changing attitude toward disabled Americans and an increasing articulateness and
visibility of deaf leaders brought about a new appreciation for and acceptance of the lan-
guage of the deaf community. Throughout the 1970s and 1980s, because of educators’ atti-
tudes toward ASL, a version of the combined method came into use; referred to as Simul-
taneous Communication, this system consisted of speaking and signing (ASL signs in
English word order) at the same time. This combination of signs and spoken words
came to be used in most American schools and programs serving deaf children, repre-
senting a marked change from strictly oral education. (In more recent years, as a more
positive view of ASL developed, a bilingual approach to the education of deaf children,
in which ASL is considered the first language of the deaf child and English is learned
as a second language, primarily through reading and writing, has taken hold in some
schools across America as well as many other countries around the globe.)

7.1.2. The development of the discipline

Despite the fact that his early work was not acknowledged or accepted by linguists or
educators, Stokoe continued his research on the linguistics of ASL and inspired many
others to join him (see Armstrong/Karchmer/Van Cleve (2002) for a collection of es-
says in honor of Stokoe). Over the course of the next few decades, a growing number
of scholars turned their attention to sign languages.
In 1970, the Laboratory for Language and Cognitive Studies (LLCS) was estab-
lished at the Salk Institute for Biological Studies in San Diego, under the directorship
of Ursula Bellugi (the laboratory’s name was eventually changed to Laboratory for
Cognitive Neuroscience). In the years since its founding, this laboratory has hosted a
number of researchers who have conducted an impressive amount of research on the
grammar, acquisition, and processing of ASL, both at the LLCS and at other institu-
tions across North America. Among the researchers involved in the lab in the early
years were Robbin Battison, Penny Boyes Braem, Karen Emmorey, Susan Fischer,
Nancy Frishberg, Harlan Lane, Ella Mae Lentz, Scott Liddell, Richard Meier, Don
Newkirk, Elissa Newport, Carlene Pederson, Laura-Ann Petitto, Patricia Siple, and
Ted Supalla (see Emmorey/Lane (2000), a Festschrift honoring the life and work of
Ursula Bellugi and Edward Klima, her husband and colleague, for original contribu-
tions by many of these researchers). Also in San Diego, graduate students of linguistics
and psychology at the University of California researched various aspects of ASL struc-
ture; the most well known, perhaps, is Carol Padden, a deaf linguist whose 1983 PhD
dissertation (published in 1988) was and continues to be an influential study of the
morphosyntax of ASL.
In 1971, the Linguistics Research Lab (LRL) was established at Gallaudet College.
Stokoe served as its director and continued to work, along with a number of other
researchers, on the linguistic analysis of ASL. Although the LRL closed its doors in
1984, many researchers who were involved there went on to do important work in the
field, including Laura-Ann Petitto, Carol Padden, James Woodward, Benjamin Bahan,
MJ Bienvenu, Susan Mather, and Harry Markowicz. The following year, 1972, Stokoe
began publishing the quarterly journal Sign Language Studies. Although publication
was briefly suspended in the 1990s, for nearly 40 years, Sign Language Studies (cur-
rently published by Gallaudet University Press and edited by Ceil Lucas) has served
as a primary forum for the discussion of research related to sign languages. 1972 also
saw the publication of one of Stokoe’s later influential works, Semiotics and Human
Sign Languages (Stokoe 1972).
The first linguistics PhD dissertation on ASL was written at Georgetown University
in 1973 by James Woodward, one of the researchers who had worked with Stokoe in
his research lab. Since that time, hundreds of theses and dissertations have been written
on sign languages, representing all the major subfields of linguistic analysis.
Also in 1973, a section on sign language was established at the annual conference
of the Linguistic Society of America (LSA), signaling a broadened acceptance of sign
languages as legitimate languages. At the 1973⫺74 LSA meeting, members of the
LLCS presented research on the phonological structure of ASL and its course of his-
torical change into a contrastive phonological system. Henceforth, research on ASL
began to have an impact on the general linguistics community (Newport/Supalla 2000).
In 1974, the first conference on sign language was held at Gallaudet, and in 1979,
the Department of Linguistics was established there. At present, there are several
other academic departments around the United States that have a particular focus on
ASL, including Boston University, University of Arizona, Rutgers University, Univer-
sity of Rochester, University of California at San Diego, University of Texas at Austin,
University of New Mexico, and Purdue University. In 1977, Harlan Lane founded the
Language Research Laboratory at Northeastern University, Boston, Massachusetts. In-
itially working with deaf research assistants Marie Philip and Ella Mae Lentz, Lane’s
laboratory conducted research on the psycholinguistics of sign language into the early
1990s (other collaborators who have gone on to make significant contributions to the
field included François Grosjean, Judy Shepard-Kegl, Howard Poizner, and Trude
Schermer).
Though much of the initial research on ASL focused on phonological structure,
throughout the 1970s and 1980s an increasing number of American researchers pub-
lished works that contributed to a broader understanding of the linguistic structure of
ASL. Among the areas explored during this period were: complex word structure,
particularly verbal constructions (Fischer 1973; Padden 1983); derivational processes
(Supalla/Newport 1978); verbs of motion and location (Supalla 1982); syntactic struc-
ture, including non-manual markers (Liddell 1980); and historical variation and change
(Frishberg 1975; Woodward 1976).
Aside from Stokoe’s 1960 monograph, the single most influential work in the emerg-
ing discipline of sign language linguistics and, more specifically, in the acceptance of
sign languages as real languages, worthy of linguistic study, was Edward Klima and
Ursula Bellugi’s The Signs of Language (SOL). SOL had begun as a collection of
working papers at the Salk Institute’s LLCS, but it was developed into a full-length
book and published by Harvard University Press in 1979. Although this volume was,
and still is, referred to as “the Klima and Bellugi text”, it is in fact a summary of
research conducted by a number of scholars throughout the 1970s, all of whom worked
with Bellugi at Salk. In addition to discussing the internal structure of signs, SOL
contains analyses of, among other things, the origin and development of ASL, historical
changes in the language, the nature of iconicity, the grammatical processes at work,
the coding and processing of signs, and wit and poetry in ASL. The text is particularly
thorough in its treatment of the various aspects of ASL morphology.
In contrast to Sign Language Structure, SOL was widely read by linguists and schol-
ars from related disciplines. SOL was, and still is, viewed as a groundbreaking contribu-
tion to the field, one that demonstrates how human language does not have to be
spoken, and how the human capacity for language is more profound than the mere
capacity for vocal-auditory communication.
While the changing theoretical landscape of Stokoe’s time worked against him, it
worked to the advantage of Klima and Bellugi; the paradigmatic shift in linguistic
theory that followed from the work of Chomsky (1957, 1965) created a new space for
and interest in the study of sign languages (see McBurney 2001). If the universal princi-
ples that are proposed to exist in all languages are, in fact, universal, then they should
also be able to account for language in another modality.
While most deaf Anglophone residents of Canada use ASL (the varieties of LSF
and British Sign Language brought by immigrants in the 19th century have largely
disappeared), Langue des Signes Québécoise (LSQ), which is historically related to
LSF, is used primarily by deaf people in French-speaking Quebec (Winzer 1987). In
the late 1970s, a group of deaf researchers, headed by Paul Bourcier and Julie Elaine
Roy, began working on a dictionary of LSQ at the Institut Raymond-Dewar in Mon-
tréal. Also at this time, Rachel Mayberry conducted some preliminary research on the
comprehension of LSQ. In 1983, Laura-Ann Petitto established the Cognitive Science
Laboratory for Language, Sign Languages, and Cognition at McGill University in Mon-
tréal. Petitto had previously done studies on manual babbling in children exposed to
sign languages, a research program begun when she worked with Ursula Bellugi at
the Salk Institute. Once established at McGill, Petitto focused on investigating the
phonological structure, acquisition, and neural representation of LSQ. Deaf artist and
research assistant Sierge Briere was a key collaborator in Petitto’s research into the
linguistics of LSQ, and graduate student Fernande Charron pursued developmental
psycholinguistic studies in the lab as well. In 1988, Colette Dubuisson of Université du
Québec à Montréal received a research grant from the Social Sciences and Humanities
Research Council of Canada (SSHRC) and began working on the linguistics of LSQ;
collaborators in this group included Robert Fournier, Marie Nadeau, and Christopher
Miller.

7.2. Sign language research in Europe

The earliest work on sign communication in Europe was Bernard Tervoort’s 1953 dis-
sertation (University of Amsterdam), Structurele Analyse van Visueel Taalgebruik Bin-
nen een Groep Dove Kinderen (“Structural Analysis of Visual Language Use in a
Group of Deaf Children”) (Tervoort 1954). While Tervoort is considered one of the
founding fathers of international sign language research, his 1953 thesis has not, for
the most part, been considered a modern linguistic analysis of a sign language because
the signing he studied was not a complete and natural sign language. The Dutch educa-
tional system forbade the use of signs in the classroom, so most of the signs the children
used were either homesigns or signs developed amongst the children themselves. The
communication he studied did not, therefore, represent Sign Language of the Nether-
lands (NGT), a fully developed natural sign language. Nevertheless, Tervoort treated
the children’s signing as largely linguistic in nature, and his descriptions of the signing
suggest a complex structural quality (see also Tervoort 1961).
Research on the linguistics of natural sign languages emerged later in Europe than
it did in the United States. Two factors contributed to this (Tervoort 1994). First, most
European schools for deaf children maintained a strictly oral focus longer than did
schools in North America; signing was discouraged, and sign languages were not under-
stood to be “real” languages worthy of study. Second, the rise in social status and
acceptance of sign language that deaf Americans enjoyed beginning in the late 1960s
was not, for the most part, experienced by deaf Europeans until later.
European sign language research began in the 1970s, and became established most
quickly in Scandinavia. While Swedish Sign Language (SSL) and Finnish Sign Lan-
guage (FinSL) dictionaries appeared in the early 1970s, it was in 1972 that formal
research on the linguistics of SSL began at the Institute of Linguistics, University of
Stockholm, under the direction of Brita Bergman (Bergman 1982). Other linguists
involved in early work on SSL included Inger Ahlgren and Lars Wallin. Research on
the structure of Danish Sign Language (DSL) began in 1974, with early projects being
initiated by Britta Hansen at the Døves Center for Total Communication (KC) in
Copenhagen. Dr. Hansen collaborated with Kjær Sørensen and Elisabeth Engberg-
Pedersen to publish the first comprehensive work on the grammar of DSL in 1981,
and since that time, there has been a considerable amount of research into various
aspects of DSL. In neighboring Scandinavian countries, research on the structure of
other indigenous sign languages began in the late 1970s, when Marit Vogt-Svendsen
began working on Norwegian Sign Language (NSL) at the Norwegian Postgraduate
College of Special Education, near Oslo, and Odd-Inge Schröder began a research
project at the University of Oslo (Vogt-Svendsen 1983; Schröder 1983). While diction-
ary work began in the 1970s, formal investigations into the structure of FinSL were
begun in 1982, when the FinSL Research Project began, under the direction of Fred
Karlsson, at the Department of General Linguistics at the University of Helsinki, with
some of the earliest research conducted by Terhi Rissanen (Rissanen 1986).
In Germany, scholars turned their attention to German Sign Language (DGS) in
1973 when Siegmund Prillwitz, Rolf Schulmeister, and Hubert Wudtke started a re-
search project at the University of Hamburg (Prillwitz/Leven 1985). In 1987, Dr. Prill-
witz founded the Centre (now Institute) for German Sign Language and Communica-
tion of the Deaf at the University of Hamburg. Several other researchers contributed
to the early linguistic investigations, including Regina Leven, Tomas Vollhaber, Thomas
Hanke, Karin Wempe, and Renate Fischer. One of the most important projects under-
taken by this research group was the development and release, in 1987, of the Hamburg
Notation System (HamNoSys), a phonetic transcription system developed in the tradi-
tion of Stokoe’s early notation (see chapter 43, Transcription). A second major contri-
bution is the International Bibliography of Sign Language; a searchable online data-
base covering over 44,000 publications related to sign language and deafness, the
bibliography is a unique and indispensable research tool for sign linguists.
The first linguistics PhD thesis on a European sign language (British Sign Language,
or BSL) was written by Margaret Deuchar (Stanford University, California, 1978).
Following this, in the late 1970s and early 1980s, there was a marked expansion of
research on the linguistics of BSL. In 1977, a Sign Language Seminar was held at the
Northern Counties School for the Deaf at Newcastle-upon-Tyne; co-sponsored by the
British Deaf Association and attended by a wide range of professionals as well as
researchers from the Swedish Sign Linguistics Group and Stokoe from Gallaudet, this
seminar marked a turning point after which sustained research on BSL flourished
(Brennan/Hayhurst 1980). In 1978, the Sign Language Learning and Use Project was
established at the University of Bristol (James Kyle, director, with researchers Bencie
Woll, Peter Llewellyn-Jones, and Gloria Pullen, the deaf team member and signing
expert). This project eventually led to the formation of the Centre for Deaf Studies
(CDS) at Bristol, co-founded in 1980 by James Kyle and Bencie Woll. Early research
here focused on language acquisition and coding of BSL, but before long scholars at
CDS were exploring all aspects of the structure of BSL (Kyle/Woll 1985). Spearheaded
by Mary Brennan with collaboration from Martin Colville and deaf research associate
Lilian Lawson, the Edinburgh BSL Research Project (1979⫺84) focused primarily on
the tense and aspect system of verbs and on developing a notation system for BSL.
The Deaf Studies Research Unit at the University of Durham was established in 1982,
and researchers there worked to complete a dictionary of BSL, originally begun by
Allan B. Hayhurst in 1971 (Brien 1992).
In the mid 1970s, Bernard Mottez and Harry Markowicz at the Centre National de
la Recherche Scientifique (CNRS) in Paris began mobilizing the French deaf commu-
nity and working toward the formal recognition and acceptance of LSF. While the
initial focus here was social, research on the structure of LSF began soon after. Two
of the first researchers to focus specifically on linguistic aspects of LSF were Christian
Cuxac (Cuxac 1983) and Daniele Bouvet in the late 1970s. Another important early
scholar was Paul Jouison who, in the late 1970s, worked with a group of deaf individuals
in Bordeaux to develop a notation system for LSF (described in Jouison 1990), and
went on to publish a number of important works.
In Italy, Virginia Volterra and Elena Pizzuto were among the first to do research
on Italian Sign Language (LIS) in the late 1970s and early 1980s. Working at the CNR
Institute of Psychology in Rome, these researchers conducted a wide range of investiga-
tions into various linguistic, psycholinguistic, educational, and historical aspects of LIS
(Volterra 1987). Also in the late 1970s, Penny Boyes Braem, who had worked alongside
Ursula Bellugi at the LLCS, started a research center in Basel, Switzerland, and began
investigating the structure of Swiss-German Sign Language (Boyes Braem 1984).
Modern linguistic research on NGT began in the early 1980s. Following the publica-
tion of his 1953 thesis, Bernard Tervoort continued to publish works on language devel-
opment and sign communication in deaf children. In 1966, Tervoort became a full
professor at the University of Amsterdam, where he helped establish the Institute for
General Linguistics. Tervoort’s contribution to sign language studies is both founda-
tional and significant; his thesis was a major step toward understanding the language
of deaf children, he inspired and mentored many scholars, and his later work paved
the way for linguistic research projects into NGT (see Kyle 1987). In 1982, the first
formal sign language research group was established at the Dutch Foundation for the
Deaf and Hard of Hearing Child (NSDSK), with support from Tervoort at the Univer-
sity of Amsterdam. The initial focus here was research into the lexicon of NGT; Trude
Schermer served as the project director for the development of the first NGT diction-
ary, and worked in collaboration with Marianne Stroombergen, Rita Harder, and Hel-
een Bos (see Schermer 2003). Research on various grammatical aspects of NGT was
also conducted at the NSDSK, and later on by Jane Coerts and Heleen Bos at the
University of Amsterdam as well. In 1988, Anne Baker took over the department
chair from Tervoort, and has had a substantial impact on sign language research in the
Netherlands since then.
Early research on Russian Sign Language (RSL) took place in the mid 1960s, at the
Institute of Defectology in Moscow (now the Scientific-Research Institute of Correct-
ive Pedagogy), where in 1969 Galina Zaitseva completed her PhD thesis on spatial
relationships in RSL. While additional documentation of and research on the language
has been slow in coming, the founding in 1998 of the Centre for Deaf Studies in Mos-
cow, with the late Zaitseva as the original academic director, has brought about a
renewed interest in linguistic research.
Over the past few decades, sign language research has continued to flourish across
much of Europe, and the number of individual researchers and research groups has
grown considerably. While some European sign languages have received more schol-
arly attention than others, most natural sign languages found in Europe have been
subject to at least some degree of linguistic investigation. Here the focus has been on
discussing the earliest work on individual European sign languages; some more recent
developments will be discussed in section 8.

7.3. Sign language research in other parts of the world


Research on the linguistics of natural sign languages outside North America and Eu-
rope is, for the most part, less well-established, though certainly underway in a number
of other countries around the globe. Dictionaries have been compiled for many of
these sign languages, and additional research into the linguistic structure of some has
been undertaken.

7.3.1. Asia

The early 1970s marked the start of research on Israeli Sign Language (Israeli SL),
with linguistic investigations and dictionary development instigated by Izchak Schle-
singer and Lila Namir (Schlesinger/Namir 1976). In the early 1990s, intensive descrip-
tive and theoretical linguistic research began at the University of Haifa, which led to
the establishment in 1998 of the Sign Language Research Laboratory, with Wendy
Sandler as director (see Meir/Sandler 2007).
A 1982 dissertation by Ziad Salah Kabatilo provided the first description of Jorda-
nian Sign Language but there was no further research on the language until Bernadet
Hendriks began investigating its structure in the mid 2000s (Hendriks 2008).
A Turkish Sign Language dictionary was published in 1995, but research into the
structure of the language did not begin until the early 2000s when Ulrike Zeshan, Aslı
Özyürek, Pamela Perniss, and colleagues began examining the phonology, morphology,
and syntax of the language.
In the early 1970s, research on Japanese Sign Language (NS) was initiated. The
Japanese Association of the Deaf published a five-volume dictionary in 1973, and Fred
Peng published early papers on a range of topics. Other early researchers included
S. Takemura, S. Yamagishi, T. Tanokami, and S. Yoshizawa (see Takashi/Peng 1976).
The Japanese Association of Sign Language was founded in 1974, and academic con-
ferences have been held and proceedings published beginning in 1979.
In the mid 1970s, research on Indian Sign Language was undertaken by Madan
Vasishta, in collaboration with Americans James Woodward and Kirk Wilson (Vasishta/
Woodward/Wilson 1978). More recent research by Ulrike Zeshan (2000) has revealed
that the sign languages of India and Pakistan are, in fact, varieties of the same language,
which she terms Indo-Pakistani Sign Language (IPSL). Initial documentation and dic-
tionary work on the Pakistani variety of IPSL was done in the late 1980s by the ABSA
(Anjuman-e-Behbood-e-Samat-e-Atfal) Research Group which was established in
1986; in more recent years, the Pakistan Association of the Deaf has established a Sign
Language Research Group dedicated to analyzing sign language in Pakistan.
Chinese Sign Language was also the subject of scholarly interest in the mid 1970s,
with a dictionary published in 1977 and initial research conducted by Shun-Chiu Yau
(1991). Recently, a comprehensive dictionary of Hong Kong Sign Language (HKSL)
has become available, based on research led by Gladys Tang and colleagues at the
recently established Centre for Sign Language and Deaf Studies (CSLDS) at the Chi-
nese University of Hong Kong (Tang 2007). Since 2003, CSLDS has initiated a compre-
hensive Asia-Pacific sign linguistics research and training program. In Taiwan, early
attempts to document Taiwanese Sign Language (TSL) began in the late 1950s and
continued throughout the 1960s and early 1970s (Smith 2005), but it was not until the
late 1970s that TSL came under closer study when Wayne Smith began publishing a
series of papers examining several aspects of the language. In 1981, following several
years of research on TSL, Chao Chienmin published Natural Sign Language (rev. ed.
Chao et al. 1988). Julia Limei Chen researched a range of TSL features in the 1980s,
a dictionary was published in 1983, and Wayne Smith wrote a dissertation on the mor-
phological structure of TSL (Smith 1989). The 1990s saw continued investigations into
the structure of TSL, including a collection of papers by linguist Jean Ann examining
the phonetics and phonology of the language.
A dictionary of Korean Sign Language (KSL) was published in 1982, but it was
not until nearly two decades later that the language received scholarly attention: J. S. Kim
worked on gesture recognition, and Sung-Eun Hong (2003, 2008) on classifiers and
verb agreement. A Filipino Sign Language dictionary was published by Jane MacFad-
den in 1977; since that time, additional research and publications on the language
have appeared, led by Lisa Martinez.
Sign Language in Thailand came under study in the mid 1980s, with a dictionary
published by Suwanarat and Reilly in 1986, and initial research on spatial locatives. In
the late 1990s and early 2000s, James Woodward published several papers on the histor-
ical relationships between sign languages in Thailand and Viet Nam. Short dictionaries
of Cambodian Sign Language have been published as part of the Deaf Development
Program in Phnom Penh, and recently work on Burmese Sign Language has begun
(Justin Watkins, personal communication).

7.3.2. South America

The first South American sign language to be documented was Brazilian Sign Lan-
guage (LSB), which is used by urban deaf communities in the country. An early volume
appeared in 1875; inspired by work coming out of France at the time, deaf student
Flausino José da Gama compiled a dictionary organized by category of sign. A second
illustrated dictionary appeared in 1969, authored by American missionary Eugene
Oates. Following this early work, it was not until the early 1980s that the structure of
LSB came under study, with scholarly analysis on a wide range of topics conducted by
Lucinda Ferreira-Brito and Harry Hoemann, among others. More recently, Ronice
Müller de Quadros wrote a 1999 dissertation on the syntactic structure of Brazilian
Sign Language, and has gone on to establish a Deaf Studies program at Santa Catarina
University in Florianópolis. The relatively early availability of studies led to LSB being
among the first sign languages to be included in cross-linguistic comparisons.
When compared to other South American sign languages, Argentine Sign Language
(LSA) is relatively well researched. Beginning in the early 1990s, Maria Ignacia Mas-
sone and colleagues began investigations into a wide range of topics, including kinship
terms, number, gender, grammatical categories, word order, tense and modality, non-
manuals, and phonetic notation of LSA (Massone 1994). A dictionary was also com-
piled and published in 1993.
In 1991, a volume on the syntactic and semantic structure of Chilean Sign Language
(ChSL) was published by Mauricio Pilleux and colleagues at Universidad Austral de
Chile. A dictionary came out that same year, and since that time, there have been
several studies examining aspects such as negation, spatial locatives, and psycholinguis-
tic processing of ChSL.
Colombian Sign Language (CoSL) came under study in the early 1990s, with a
dictionary published in 1993. A deaf education manual published that same year in-
cluded a discussion on linguistic descriptions of some aspects of CoSL. In the late
1990s, Nora Lucia Gomez began research on the morphology and phonology of the
language, and Alexander Oviedo began research on classifier constructions. Oviedo
published a large and comprehensive volume on the grammar of CoSL in 2001.
Research into the sign languages of other South American countries has yet to be
fully established, though dictionaries have been compiled for several, including Uru-
guay (1988), Paraguay (1989), and Guyana (2001). A dictionary for Venezuelan Sign
Language was published in 1983, and an unpublished manuscript dated 1991 provides
a linguistic analysis of the language.

7.3.3. Oceania

A significant body of research, dating back to the early 1900s, exists on the sign lan-
guages traditionally used by Aboriginal communities in some parts of Australia. These
languages are used as alternatives to spoken languages, often in connection with taboos
concerning speech between certain members of the community or at particular times
(e.g. during a period of mourning). LaMont West, an American linguist, produced a
1963 report on his and others’ research on Australian Aboriginal sign languages, and
English scholar Adam Kendon turned his attention to these languages in the mid 1980s
(Kendon 1989) (see chapter 23 for further discussion).
Research on the lexicon and structure of Australian Sign Language (Auslan) began
in the early 1980s. Trevor Johnston’s 1989 doctoral dissertation was the first full-length
study of the linguistics of Auslan. Included in the thesis was a dictionary, and Johnston’s
continued work with colleagues Robert Adam and Adam Schembri led to the publica-
tion in 1998 of a new and comprehensive dictionary of Auslan. A recent collaboration
with Adam Schembri has produced a comprehensive introduction to the language
(Johnston/Schembri 2007). Teaching materials and several academic publications have
also been produced by Jan Branson and colleagues at the National Institute for Deaf
Studies, established in 1993 at La Trobe University.
Studies of New Zealand Sign Language (NZSL) began in the early 1970s with an
unpublished thesis by Peter Ballingall that examined the sign language of deaf students
and concluded that it is a natural language. In the early 1980s, American Marianne
Collins-Ahlgren began research on NZSL, which culminated in her 1989 thesis (Victo-
ria University, Wellington) comprising the first full description of the grammar. In
1995, Victoria University established the Deaf Studies Research Unit (DSRU), and
researchers continued investigations into the lexicon and grammar of NZSL. The first
major project of the DSRU was the development and publication in 1997 of a compre-
hensive dictionary of NZSL. Currently under the direction of David McKee, research is
ongoing at the DSRU, including a large study examining sociolinguistic variation in NZSL.

7.3.4. Africa

While there are at least 24 sign languages in Africa (Kamei 2006), sign language re-
search is relatively sparse in the region. This appears to be changing, as evidenced by
a recent Workshop on Sign Language in Africa that was held in 2009 in conjunction
with the 6th World Congress of African Linguistics in Leiden (the Netherlands).
In North Africa, dictionaries have been compiled for Libyan Sign Language (1984),
Egyptian Sign Language (1984), and Moroccan Sign Language (1987). Sign language
research began in West Africa in the mid 1990s when linguist Constanze Schmaling
began studying Hausa Sign Language, the sign language used by members of the deaf
community in areas of Northern Nigeria. Her 1997 dissertation (published in 2000)
provides a descriptive analysis of the language. In 2002, linguist Victoria Nyst began
studying Adamorobe Sign Language, an indigenous sign language used in an eastern
Ghana village that has a very high incidence of deafness. Nyst completed and then
published a dissertation containing a sketch grammar of the language (Nyst 2007).
Currently with the Leiden University Centre for Linguistics, Nyst has initiated a re-
search project to document and describe another West African sign language, Malian
Sign Language, for which a dictionary was compiled in 1999.
With the exception of a dictionary for Congolese Sign Language (1990) and a recently-
published dictionary of Rwandan Sign Language (2009), it appears that there has been
limited, if any, linguistic research on sign languages used in Central African countries.
An early description of East African signs appeared in the journal Sign Language
Studies in 1977, and a paper on Ethiopian Sign Language was presented at the 1979
World Congress of the World Federation of the Deaf. A dictionary of Kenyan Sign
Language was made available in 1991, and in 1992, a linguistic study was undertaken
by Philomen Akach, who examined sentence formation in Kenyan Sign Language.
While there has been additional interest in the language, most of it has focused on sign
language development and education in the country. One exception is a 1997 paper by
Okombo and Akach on language convergence and wave phenomena in the growth of
Kenyan Sign Language. The Tanzanian Association of the Deaf published a dictionary
in 1993.
A Ugandan Sign Language dictionary was published in 1998, and the following
year a diploma thesis (Leiden University, the Netherlands) by Victoria Nyst addressed
handshape variation in Ugandan Sign Language. Finally, an Ethiopian Sign Language
dictionary was published in 2008, and Addis Ababa University has recently launched
an Ethiopian Sign Language and Deaf Culture Program, with one of the aims being
to increase collaborative research on the language.
In the mid 1970s, Norman Nieder-Heitmann began researching sign languages in
South Africa and in 1980, a dictionary was published. A second dictionary was pub-
lished in 1994, the same year that a paper by C. Penn and Timothy Reagan appeared
in Sign Language Studies, exploring lexical and syntactic aspects of South African Sign
Language. More recently, Debra Aarons and colleagues have investigated a wide range
of topics, including non-manual features, classifier constructions and their interaction
with syntax, and the sociolinguistics of sign language in South Africa. Also, a research
project was recently launched to investigate the structural properties of the sign lan-
guages used by different deaf communities in South Africa in order to determine if
there is one unified South African Sign Language or many different languages.

8. The establishment of the discipline


One indication that sign language linguistics has become a mature field of study is its
professionalization. Over the years, there have been several different series of confer-
ences or symposia dealing with sign language linguistics, nearly all of which were fol-
lowed by the publication of conference proceedings. The volumes themselves contain
an impressive body of research that formed the core of the literature for the discipline.
The earliest of these was the National Symposium on Sign Language Research and
Training. Primarily a meeting of American researchers and educators, the NSSLRT
held its first symposium in 1977 (Chicago, IL, USA), and others followed in 1978 (Cor-
onado, CA, USA), 1980 (Boston, MA, USA) and 1986 (Las Vegas, NV, USA).
In the summer of 1979, two international gatherings of sign language linguists were
held in Europe. In June, the first International Symposium on Sign Language Research
(ISSLR), organized by Inger Ahlgren and Brita Bergman, was held in Stockholm,
Sweden. Then in August of 1979, Copenhagen was the site of the NATO Advanced
Study Institute on Language and Cognition: Sign Language Research, which was orga-
nized by Harlan Lane, Robbin Battison, and François Grosjean. The ISSLR held a
total of five symposia, with additional meetings in 1981 (Bristol, England), 1983 (Rome,
Italy), 1987 (Lappeenranta, Finland), and 1992 (Salamanca, Spain). In contrast to the
ISSLR, which brought together researchers from North America and Europe, the Eu-
ropean Congress on Sign Language Research (ECSL) focused on research being con-
ducted on European sign languages. Four meetings were held by this congress: 1982
(Brussels, Belgium), 1985 (Amsterdam, the Netherlands), 1989 (Hamburg, Germany),
and 1994 (Munich, Germany).
The International Conference on Theoretical Issues in Sign Language Research
(TISLR) first convened in 1986 (Rochester, NY, USA), and was followed by (largely)
biennial conferences in 1988 (Washington, DC, USA), 1990 (Boston, MA, USA), 1992
(San Diego, CA, USA), 1996 (Montreal, Canada), 1998 (Washington, DC, USA), 2000
(Amsterdam, the Netherlands), 2004 (Barcelona, Spain), 2006 (Florianópolis, Brazil),
and 2010 (West Lafayette, IN, USA). In 2006, the first in a yearly series of conferences
aimed at broadening the international base in sign language linguistics was held in
Nijmegen, the Netherlands. Originally called CLSLR (Cross-Linguistic Sign Language
Research), the conference now goes by the name of SIGN. 2008 saw the SignTyp Con-
ference on the phonetics and phonology of sign languages held at the University of
Connecticut, Storrs, USA. While researchers from Europe, North America, and other
countries around the world continue to hold smaller conferences, workshops and semi-
nars, TISLR has become the primary international conference for sign language re-
searchers.
Gallaudet University Press was established in 1980 to disseminate knowledge about
deaf people, their languages, their communities, their history, and their education
through print and electronic media. The International Sign Linguistics Association
(ISLA) was founded in 1987 to encourage and facilitate sign language research
throughout the international community. Three publications came out of this organiza-
tion: the newsletter Signpost, which first appeared in 1988, became a quarterly periodi-
cal in the early 1990s and was published by ISLA until 1995; The International Journal
of Sign Linguistics (1990⫺1991); and The International Review of Sign Linguistics
(1996, published by Lawrence Erlbaum). In 1998, John Benjamins began publishing
the peer-reviewed journal Sign Language & Linguistics, with Ronnie Wilbur serving as
the general editor until 2007, at which time Roland Pfau and Josep Quer assumed
editorial responsibilities. ISLA folded in the late 1990s, and calls to create a new organ-
ization began at the 1998 TISLR meeting. It was replaced by the international Sign
Language Linguistics Society (SLLS), which officially began at the 2004 TISLR meet-
ing in Barcelona.
In 1989, Signum Press was created; an outgrowth of the Institute for DGS at the
University of Hamburg, Signum Press publishes a wide range of books and multimedia
materials in the area of sign language linguistics, and also publishes Das Zeichen, a
quarterly journal devoted to sign language research and deaf communication issues.
Finally, the online discussion list, SLLing-L, has been up and running since the early
1990s and is devoted to the discussion of the linguistic aspects of natural sign languages.
With hundreds of subscribers around the globe, this electronic forum has become a
central means of facilitating scholarly exchange and, thus, has played an important role
in the growth of the discipline of sign language linguistics.
As far as research is concerned, the recent establishment of a few research centers
is worth noting. Established in 2006, the Deafness Cognition and Language (DCAL)
Research Centre at University College London aims to study the origins, development,
and processing of human language using sign languages as a model. With Bencie Woll
as director, DCAL is home to a growing number of researchers in the fields of sign
linguistics, psychology, and neuroscience. In early 2007, Ulrike Zeshan founded the
International Centre for Sign Language and Deaf Studies (iSLanDS) at the University
of Central Lancashire. The research center incorporates the Deaf Studies program
(offered since 1993) but expands research and teaching to encompass an international
dimension, with documentation of and research on sign languages around the world
as well as the development of programs to provide higher education opportunities for
deaf students from across the globe. Launched in 2008, the sign language research
group at Radboud University Nijmegen, led by Onno Crasborn, conducts research on
the structure and use of NGT, and is also a leading partner in SignSpeak, a European
collaborative effort to develop and analyze sign language corpora with the aim of
developing vision-based technology for translating continuous sign language to text.

9. Historical relationships between sign languages

While a rich body of comparative research has elucidated the genetic relationships
among the world’s spoken languages (the nearly 7,000 spoken languages can be divided
into roughly 130 major language families), the same cannot be said for sign languages,
on which much comparative research remains to be done. The historical connections
between some sign languages have been explored (see, for example, Woodward 1978b,
1991, 1996, 2000; McKee/Kennedy 2000; Miller 2001; among others), but there have
been only a few attempts to develop more comprehensive historical mappings of the
relationships between a broad range of the world’s sign languages (Anderson 1979;
Wittmann 1991).
The precise number of sign languages in existence today is not known, but the
Ethnologue database (16th edition, Lewis 2009) currently lists 130 “deaf sign lan-
guages”, up from 121 in the 2005 survey. The fact that the Ethnologue lists “deaf sign
languages” as one language family among 133 total language families highlights the
extent to which sign languages are under-researched and brings into focus the challen-
ges involved in placing sign languages into a larger comparative historical context.
Whereas spoken languages have evolved over thousands of years, modern sign lan-
guages have evolved very rapidly, indeed one might argue spontaneously, and many
have emerged largely independently from each other, making traditional historical
comparisons difficult. Notably, there is some question as to the validity and usefulness
of standard historical comparative methods when attempting to determine the histori-
cal relationships between sign languages. Most researchers acknowledge that tradi-
tional comparative techniques must be modified when studying sign languages. For
example, the original 200-word Swadesh list used to compare basic vocabularies across
spoken languages has been modified for use with sign languages; in order to reduce
the number of false potential cognates, words such as pronouns and body parts that
are represented indexically (via pointing signs), and words whose signs are visually-
motivated or iconic (such as drink) have been factored out (Woodward 1978b; Pizzuto/
Volterra 1996; McKee/Kennedy 2000; Hendriks 2008). In addition, because sign lan-
guages are so young, it is necessary that researchers adapt the time scale used to calcu-
late differences between sign languages (Woll/Sutton-Spence/Elton 2001).
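To make the procedure concrete, the sketch below shows how a similarity score over such a modified word list might be computed. It is a minimal illustration under stated assumptions, not a reconstruction of any published study: the set of excluded concepts and the cognacy judgments are hypothetical placeholders, and real lexicostatistical work rests on careful, sign-by-sign comparisons by trained researchers.

```python
# Minimal sketch of a modified-Swadesh lexicostatistical comparison.
# All data here are hypothetical placeholders.

# Concepts excluded because their signs tend to be indexic (pointing)
# or strongly iconic, which inflates false potential cognate counts.
EXCLUDED = {"I", "you", "we", "head", "hand", "eye", "drink", "eat"}

def similarity(judgments: dict[str, bool]) -> float:
    """Percentage of compared concepts judged cognate, after
    filtering indexic/iconic items out of the concept list."""
    compared = {c: cog for c, cog in judgments.items() if c not in EXCLUDED}
    if not compared:
        raise ValueError("no comparable concepts left after filtering")
    return 100 * sum(compared.values()) / len(compared)

# Hypothetical cognacy judgments for two sign languages: True means
# the signs for that concept were judged similar enough to be cognate.
judgments = {
    "mother": True, "water": False, "drink": True,  # "drink" is filtered out
    "year": True, "red": False, "mountain": False,
}
# Prints 40.0% for these placeholder data (2 cognates out of 5 compared).
print(f"similarity: {similarity(judgments):.1f}%")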
As scholars have begun to study the various sign languages from an historical com-
parative angle, a lack of documentation of the oldest forms of sign languages has made
research difficult. Furthermore, it can be challenging to distinguish between relatedness
due to genetic descent versus relatedness due to language contact and borrowing,
which is quite pervasive among sign languages. During their emergence and growth,
individual sign languages come into contact with other natural sign languages, the ma-
jority spoken language(s) of the culture, signed versions of the majority spoken lan-
guage, as well as gestural systems that may be in use within the broader community
(see chapter 35, Language Contact and Borrowing, for details). The extensive and
multi-layered language contact that occurs can make traditional family tree classifica-
tions difficult.
Analyses have shown that historical links between sign languages have been heavily
influenced by, among other things, world politics and the export of educational systems
(see Woll/Sutton-Spence/Elton 2001; Woll 2006). For example, the historical legacy of
the Habsburg Empire has resulted in a close relationship between the sign languages
of Germany, Austria, and Hungary. BSL has had a strong influence on sign languages
throughout the former British Empire; after being educated in Britain, deaf children
often returned to their home countries, bringing BSL signs with them. Additionally,
the immigration of British deaf adults to the colonies has resulted in strong connections
between BSL and Auslan, NZSL, Maritime Sign Language in Nova Scotia, as well as
certain varieties of Indian and South African Sign Languages. Recent research suggests
that the sign languages of Britain, Australia, and New Zealand are in fact varieties of
the same sign language, referred to as “BANZL”, for British, Australian and New
Zealand Sign Languages (Johnston 2003). The Japanese occupation of Taiwan has re-
sulted in some dialects of TSL being very similar to NS. Following Japan’s withdrawal,
another form of TSL has developed ⫺ one heavily influenced by the sign language
used in Shanghai, brought over by immigrants from the Chinese mainland. NS, TSL,
and KSL are thought to be members of the Japanese Sign Language family (Morgan
2004, 2006).
The export of educational systems, often by individuals with religious or missionary
agendas, has without question had an influence on the historical relationships between
sign languages. Foremost among these is the French deaf education system, the export
of which brought LSF into many countries around Europe and North America. As a
result, the influence of LSF can be seen in a number of sign languages, including Irish
Sign Language, ASL, RSL, LSQ, and Mexican Sign Language. A similar relationship
exists between SSL and Portuguese Sign Language following a Swedish deaf educator’s
establishment of a deaf school in Lisbon in 1824 (Eriksson 1998). Researchers have
noted that Israeli SL is historically related to DGS, having evolved from the sign lan-
guage used by German Jewish teachers who, in 1932, opened a deaf school in Jerusalem
(Meir/Sandler 2007). Icelandic Sign Language is historically related to DSL, the con-
nection stemming from the fact that deaf Icelandic people were sent to Denmark for
education until the early 1900s (Aldersson 2006).
Some scholars hypothesize that modern ASL is the result of a process of creoliza-
tion between indigenous ASL and the LSF that was brought to America by Clerc and
Gallaudet in the early 1800s (Woodward 1978; Fischer 1978; but see Lupton/Salmons
1996 for a reanalysis of this view). It has been shown that ASL shares many of the
sociological determinants of creoles, as well as a similar means of grammatical expres-
sion. Furthermore, evidence of restructuring at the lexical, phonological, and grammat-
ical levels points to creolization. This line of thinking has been expanded to include a
broader range of sign languages (including BSL, LSF, and RSL) that have been shown
to share with creoles a set of distinctive grammatical characteristics as well as a similar
path of development (Deuchar 1987; see also Meier 1984 and chapter 36, Language
Emergence and Creolization).
At least two sign languages that were originally heavily influenced by LSF have, in
turn, had an impact on other sign languages around the globe. Irish nuns and brothers
teaching in overseas Catholic schools for deaf children have led to Irish Sign Language
influencing sign languages in South Africa, Australia, and India. Similarly, a much
larger number of sign languages around the world have been heavily influenced by
ASL through missionary work, the training of deaf teachers in developing countries,
and/or because many foreign deaf students have attended Gallaudet University in
Washington, DC (the world’s first and only university for deaf people) and have then
returned to their home countries, taking ASL with them. ASL is unique in that, next
to International Sign, it serves as a lingua franca in the worldwide deaf community,
and thus has had a major influence on many sign languages around the globe. The
Ethnologue (16th edition, Lewis 2009) reports that ASL is used among some deaf
communities in at least 20 other countries around the world, including many countries
in Africa and the English speaking areas of Canada. In fact, many of the national
sign languages listed in the Ethnologue for some developing countries might best be
considered varieties of ASL.
Recent research examines the sign languages used in West and Central French-
speaking African countries and finds evidence of a creole sign language (Kamei 2006).
Historically, ASL was introduced into French-speaking African countries when
Dr. Andrew J. Foster, a deaf African-American and Christian missionary, began estab-
lishing schools for deaf children in 1956 (Lane/Hoffmeister/Bahan 1996). Over time,
the combination of French literacy education with ASL signs has led to the emergence
of Langue des Signes Franco-Africaine (LSFA), an ASL-based creole sign language.
A survey of the sign languages used in 20 Eastern European countries suggests that,
while the sign languages used in this region are distinct languages, there are two clus-
ters of languages that have wordlist similarity scores that are higher than the bench-
marks for unrelated languages (Bickford 2005). One cluster includes RSL, Ukrainian
Sign Language, and Moldova Sign Language. A second cluster includes the sign lan-
guages in the central European countries of Hungary, Slovakia, the Czech Republic,
and more marginally Romania, Poland, and Bulgaria.
A rich body of lexicostatistical research has revealed that there are seven distinct
sign languages in Thailand and Viet Nam, falling into three distinct language families
(Woodward 1996, 2000). The first is an indigenous sign language family comprised of a
single language ⫺ Ban Khor Sign Language ⫺ that developed in isolation in Northeast
Thailand. A second sign language family contains indigenous sign languages that devel-
oped in contact with other sign languages in Southeast Asia, but had no contact with
Western sign languages: Original Chiangmai Sign Language, Original Bangkok Sign
Language, and Hai Phong Sign Language (which serves as a link between the second
and third families). Finally, there exists a third sign language family comprised of “mod-
ern” sign languages that are mixtures, likely creolizations, of original sign languages
with LSF and/or ASL (languages that were introduced via deaf education). This third
language family includes Ha Noi Sign Language, Ho Chi Minh Sign Language, Modern
Thai Sign Language, and the link language Hai Phong Sign Language (Woodward
2000).
Finally, over the years, as communication and interaction between deaf people
around the world has increased, a contact language known as International Sign (IS)
has developed spontaneously. Formerly known as Gestuno, IS is used at international
Deaf events and meetings of the World Federation of the Deaf. Recent research sug-
gests that while IS is a type of pidgin, it is more complex than typical pidgins and its
structure is more similar to that of full sign languages (Supalla/Webb 1995; also see chapter
35).

10. Trends in the field

During the early years of sign language linguistics (1960 to the mid 1980s, roughly),
much of the research focused on discovering and describing the fundamental structural
components of sign languages. Most of the research during this period was conducted
on ASL, and was primarily descriptive in nature, though in time researchers began to
include theoretical discussions as well. Early works tended to stress the arbitrariness
of signs and highlight the absence of iconicity as an organizing principle underlying
sign languages; features that were markedly different from spoken languages were not
often addressed. The early research revealed that sign languages are structured, ac-
quired, and processed (at the psychological level) in ways that are quite similar to
spoken languages. With advances in technology, researchers eventually discovered that
largely identical mechanisms underlie the neurological processing of languages in the
two modalities (see Emmorey (2002) for a review). Such discoveries served as proof
that, contrary to what had been previously assumed, sign languages are legitimate
human languages, worthy of linguistic analysis.
In later years (mid 1980s to the late 1990s, roughly), once the linguistic status of
sign languages was secure, researchers turned their attention to some of the more
unusual aspects of sign languages, such as the complex use of space, the importance of
non-manual features, and the presence of both iconicity and gesture within sign lan-
guages (see, for example, Liddell 2003). It was during this time that sign language
research expanded beyond the borders of the United States to include other (mostly
European) sign languages. With an increase in the number and range of sign languages
studied, typological properties of sign languages began to be considered, and the
groundwork was laid for the eventual emergence of sign language typology. While
the early research showed that signed and spoken languages share many fundamental
properties, when larger numbers of sign languages were studied, it became clear that
sign languages are remarkably similar in certain respects (for example, in the use of
space in verbal and aspectual morphology). This observation led researchers to exam-
ine more seriously the effects that language modality might have on the overall struc-
ture of language.
In recent years (late 1990s to the present), as the field of sign language typology
has become established, research has been conducted on an even wider range of sign
languages, crucially including non-Western sign languages. This has provided scholars
the opportunity to reevaluate the assumption that sign languages show less structural
variation than spoken languages do. While structural similarities between sign lan-
guages certainly exist (and they are, indeed, striking), systematic and comparative stud-
ies on a broader range of sign languages reveal some interesting variation (e.g. nega-
tion, plural marking, position of functional categories; see Perniss/Pfau/Steinbach 2007;
Zeshan 2006, 2008). This line of inquiry has great potential to inform our understand-
ing of typological variation as well as the universals of language and cognition.
Cross-linguistic research on an increasing number of natural sign languages has
been facilitated by the development of multimedia tools for the collection, annotation,
and dissemination of primary sign language data. An early frontrunner in this area was
SignStream, a database tool developed in the mid 1990s at Boston University’s ASL
Linguistic Research Project (ASLLRP), under the direction of Carol Neidle. However,
the most widely used current technology is ELAN (EUDICO Linguistic Annotator).
Originally developed at the Max Planck Institute for Psycholinguistics in Nijmegen,
the Netherlands, ELAN is a language archiving technology that enables researchers to
create complex annotations on video and audio resources. These tools make it possible
to create large corpora of sign language digital video data, an essential step in the
process of broad-scale linguistic investigations and typological comparisons (see Se-
gouat/Braffort 2009). Sign language corpora projects are underway in Australia, Ire-
land, the Netherlands, the United Kingdom, France, Germany, the United States, and
the Czech Republic, to name a few places. Embracing the tools of the day, there is
even a sign language corpora wiki that serves as a resource for the emerging field of
sign language corpus linguistics (http://sign.let.ru.nl/groups/slcwikigroup/).
One of the most fascinating areas of recent research has been in the domain of
emerging sign languages (see Meir et al. (2010) for an overview). A handful of re-
searchers around the globe have been studying these new sign languages which emerge
when deaf people without any previous exposure to language, either spoken or signed,
come together and form a language community ⫺ be it in the context of villages with
mixed deaf and hearing populations (see chapter 24, Shared Sign Languages) or newly formed deaf
communities, as, for instance, the well-known case of a school for deaf children in
Managua, Nicaragua (Kegl/Senghas/Coppola 1999; see chapter 36 for discussion).
11. Conclusion
The discipline of sign language linguistics came into being 50 years ago, and the dis-
tance traveled in this short period of time has indeed been great. In terms of general
perceptions, sign languages have gone from being considered primitive systems of ges-
ture to being recognized for their richness and complexity, as well as their cultural and
linguistic value. Though initially slow to catch on, the “discovery” of sign languages
(or more precisely the realization that sign languages are full human languages) has
been embraced by scholars around the globe.
An informal survey of American introductory linguistics textbooks from the past
several decades reveals a gradual though significant change in the perception of sign
languages as natural human languages (see McBurney 2001). In textbooks from the
mid 20th century, language was equated with speech (as per Hockett’s (1960) design
features of language), and sign languages of deaf people were simply not mentioned.
By the 1970s, a full decade after linguistic investigations began, sign languages began
to be addressed, but only in a cursory manner; Bolinger’s Aspects of Language, 2nd
Edition (1975) discusses sign language briefly in a section on language origins, noting
that sign languages are “very nearly” as expressive a medium of communication as
spoken languages. Fromkin and Rodman’s An Introduction to Language (1974) in-
cludes a discussion of “deaf sign” in a chapter on animal languages; although the dis-
cussion is brief, they do mention several significant aspects of sign languages (including
syntactic and semantic structure), and they directly refute Hockett’s first design feature
of language, arguing that sign languages are human languages, and therefore the use
of the vocal-auditory channel is not a key property of human language. The 1978
edition of the text includes an entire section on ASL and the growing field of research
surrounding it. Successive editions include increasingly extensive coverage, and
whereas earlier editions covered sign languages in a separate section, starting with the
1998 edition, discussion of sign languages is integrated throughout the text, in sections
on linguistic knowledge, language universals, phonology, morphology, syntax, language
acquisition, and language processing. Although it has taken some time, the ideas ini-
tially proposed by Stokoe and further developed by sign linguists around the world
have trickled down and become part of the standard discussion of human language.
In addition to the continued and expanding professional conference and publishing
activities specific to sign language linguistics, sign language research is crossing over
into many related disciplines, with papers being published in a growing number of
journals and conference proceedings. Over the past decade, sign language research has
been presented at a wide range of academic conferences, and special sign language
sessions or workshops have been held in conjunction with many professional conferen-
ces in related disciplines (including child language acquisition, bilingual acquisition,
gesture studies, minority languages, endangered languages, sociolinguistics, language
typology, laboratory phonology, corpus linguistics, computational linguistics, anthropol-
ogy, psychology, and neuroscience).
Without question, research into sign languages has enriched our understanding of
the human mind and its capacity for language. Sign languages have proven to be a
fruitful area of study, the findings of which shed light upon some of the most challeng-
ing and significant questions in linguistics and neighboring disciplines. One need only
glance through this volume’s table of contents to get a sense of how broad and varied
the discipline of sign language linguistics has become; it is a testament to the compel-
ling nature of the subject matter as well as to the dedication and excellence of the
community of scholars who have made this their life’s work.

12. Literature

Akach, Philemon A.O.
1992 Sentence Formation in Kenyan Sign Language. In: The East African Sign Language
Seminar, Karen, Nairobi, Kenya, 24th⫺28th August 1992. Copenhagen: Danish Deaf
Association, 45⫺51.
Aldersson, Russell R.
2006 A Lexical Comparison of Icelandic Sign Language and Danish Sign Language. In: Birk-
beck Studies in Applied Linguistics, Vol. 2.
Anderson, Lloyd B.
1979 A Comparison of Some American, English, and Swedish Signs: Evidence on Historical
Change in Signs and Some Family Relationships of Sign Languages. Manuscript, Gallau-
det University, Washington, DC.
Aristotle; Hammond, William Alexander
1902 Aristotle’s Psychology: A Treatise on the Principles of Life (De Anima and Parva Natu-
ralia). London: S. Sonnenschein & Co.
Armstrong, David F./Karchmer, Michael A./Van Cleve, John V. (eds.)
2002 The Study of Signed Languages: Essays in Honor of William C. Stokoe. Washington,
DC: Gallaudet University Press.
Augustine; Oates, Whitney Jennings
1948 Basic Writings of St. Augustine. New York: Random House.
Baynton, Douglas C.
1996 Forbidden Signs: American Culture and the Campaign Against Sign Language. Chicago:
University of Chicago Press.
Baynton, Douglas C.
2002 The Curious Death of Sign Language Studies in the Nineteenth Century. In: Armstrong,
David F./Karchmer, Michael A./Van Cleve, John V. (eds.), The Study of Signed Lan-
guages: Essays in Honor of William C. Stokoe. Washington, DC: Gallaudet University
Press, 13⫺34.
Bébian, Auguste
1825 Mimographie: Essai d’écriture Mimique, Propre à Régulariser le Langage des Sourds-
muets. Paris: Colas.
Bender, Ruth
1960 The Conquest of Deafness. Cleveland, OH: The Press of Western Reserve.
Bergman, Brita
1982 Studies in Swedish Sign Language. Department of Linguistics, University of Stockholm.
Bickford, J. Albert
2005 The Signed Languages of Eastern Europe. SIL Electronic Survey Reports 2005-026:
45. [Available at: www.sil.org/silesr/abstract.asp?ref=2005-026]
Bloomfield, Leonard
1933 Language. New York: Holt, Rinehart & Winston.
Bolinger, Dwight Le Merton
1975 Aspects of Language, 2nd edition. New York: Harcourt Brace Jovanovich.
Boyes Braem, Penny
1984 Studying Swiss German Sign Language Dialects. In: Loncke, Filip/Boyes Braem, Penny/
Lebrun, Yvan (eds.), Recent Research on European Sign Languages: Proceedings of the
European Meeting of Sign Language Research, Held in Brussels, September 19⫺25,
1982. Lisse: Swets & Zeitlinger, 93⫺103.
Branson, Jan/Miller, Don
2002 Damned for Their Difference: The Cultural Construction of Deaf People as Disabled.
Washington, DC: Gallaudet University Press.
Brennan, Mary/Hayhurst, Allan B.
1980 The Renaissance of British Sign Language. In: Baker-Shenk, Charlotte/Battison, Rob-
bin (eds.), Sign Language and the Deaf Community: Essays in Honor of William C.
Stokoe. Silver Spring, MD: National Association of the Deaf, 233⫺244.
Brien, David
1992 Dictionary of British Sign Language/English. London: Faber and Faber.
Bulwer, John
1644 Chirologia: Or the Natural Language of the Hand. London: Harper.
Bulwer, John
1648 Philocophus: Or the Deafe and Dumbe Man’s Friend. London: Humphrey Moseley.
Chao, Chienmin/Chu, Hsihsiung/Liu, Chaochung
1988 Taiwan Ziran Shouyu [Taiwan Natural Sign Language]. Taipei: Deaf Sign Language
Research Association. [Revised edition of Chienmin Chao (1981)]
Chomsky, Noam
1957 Syntactic Structures. The Hague: Mouton.
Chomsky, Noam
1965 Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Collins-Ahlgren, Marianne
1989 Aspects of New Zealand Sign Language. PhD Dissertation, Victoria University of
Wellington.
Cuxac, Christian
1983 Le Langage des Sourds. Paris: Payot.
Dalgarno, George
1680 Didascalocophus; or the Deaf and Dumb Man’s Tutor, to Which Is Added a Discourse
of the Nature and Number of Double Consonants Both Which Tracts Being the First
(for What the Author Knows) that Have Been Published Upon Either of the Subjects.
Oxford: Printed at the Theater in Oxford.
Danby, Herbert
1933 The Mishnah. Oxford: Clarendon Press.
Daniels, Marilyn
1997 Benedictine Roots in the Development of Deaf Education: Listening with the Heart.
Westport, CT: Bergin & Garvey.
Dekesel, Kristiaan
1992 John Bulwer: The Founding Father of BSL Research, Part I. In: Signpost 5(4), 11⫺14.
Dekesel, Kristiaan
1993 John Bulwer: The Founding Father of BSL Research, Part II. In: Signpost 6(1), 36⫺46.
Desloges, Pierre
1779 Observations d’un Sourd et Muèt, sur un Cours Élémentaire d’éducation des Sourds et
Muèts: Publié en 1779 par M. l’abbé Deschamps. A Amsterdam & se trouve a Paris:
Chez B. Morin.
Deuchar, Margaret
1978 Diglossia in British Sign Language. PhD Dissertation, Stanford University.
Deuchar, Margaret
1987 Sign Languages as Creoles and Chomsky’s Notion of Universal Grammar. In: Modgil,
Sohan/Modgil, Celia (eds.), Noam Chomsky: Consensus and Controversy. London:
Falmer Press, 81⫺91.
Diderot, Denis/Meyer, Paul Hugo
1965 Lettre sur les Sourds et Muets. Genève: Droz.
Digby, Kenelm
1644 Treatise on the Nature of Bodies. Paris: Blaizot.
Dotter, Franz/Okorn, Ingeborg
2003 Austria’s Hidden Conflict: Hearing Culture Versus Deaf Culture. In: Monaghan, Leila
F./Schmaling, Constanze/Nakamura, Karen/Turner, Graham H. (eds.), Many Ways to be
Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet Univer-
sity Press, 49⫺66.
Emmorey, Karen
2002 Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah,
NJ: Lawrence Erlbaum.
Engberg-Pedersen, Elisabeth/Hansen, Britta/Sørensen, Ruth Kjær
1981 Døves Tegnsprog: Træk af Dansk Tegnsprogs Grammatik. Arhus: Arkona.
L’Epée, Abbé de
1776 Institution des Sourds et Muets, par la Voie des Signes Méthodiques [The Instruction of
the Deaf and Dumb by Means of Methodical Signs]. Paris: Le Crozet.
Eriksson, Per
1998 The History of Deaf People: A Source Book. Örebro: Daufr.
Facchini, Massimo G.
1985 An Historical Reconstruction of Events Leading to the Congress of Milan in 1880. In:
Stokoe, William C./Volterra, Virginia (eds.), SLR ’83: Proceedings of the 3rd Interna-
tional Symposium on Sign Language Research, Rome, June 22⫺26, 1983. Silver Spring,
MD: Linstok Press, 356⫺362.
Fischer, Renate
1993 Language of Action. In: Fischer, Renate/Lane, Harlan (eds.), Looking Back: A Reader
on the History of Deaf Communities and Their Sign Languages. Hamburg: Signum,
431⫺433.
Fischer, Renate
1995 The Notation System of Sign Languages: Bébian’s Mimographie. In: Schermer, Trude/
Bos, Heleen (eds.), Sign Language Research 1994: Proceedings of the 4th European
Congress on Sign Language Research, Munich, September 1⫺3, 1994. Hamburg: Sig-
num, 285⫺301.
Fischer, Renate
2002 The Study of Natural Sign Language in Eighteenth-Century France. In: Sign Language
Studies 2(4), 391⫺406.
Fischer, Susan D.
1973 Two Processes of Reduplication in American Sign Language. In: Foundations of Lan-
guage 9, 469⫺480.
Fischer, Susan D.
1978 Sign Language and Creoles. In: Siple, Patricia (ed.), Understanding Language through
Sign Language Research. New York: Academic Press, 309⫺331.
Frishberg, Nancy
1975 Arbitrariness and Iconicity: Historical Change in American Sign Language. In: Lan-
guage 51(3), 696⫺719.
Frishberg, Nancy
1987 Home Sign. In: Van Cleve, John V. (ed.), Gallaudet Encyclopedia of Deaf People and
Deafness, Vol. 3. New York: McGraw-Hill, 128⫺131.
Fromkin, Victoria/Rodman, Robert
1974 An Introduction to Language. New York: Holt, Rinehart & Winston.
Gardiner, Alan H.
1911 Egyptian Hieratic Texts, Transcribed, Translated and Annotated. In: Series I: Literary
Texts of the New Kingdom. Hildesheim: Georg Olms.
Goldin-Meadow, Susan
2003 The Resilience of Language: What Gesture Creation in Deaf Children Can Tell Us About
How All Children Learn Language. New York: Psychology Press.
Groce, Nora Ellen
1985 Everyone Here Spoke Sign Language: Hereditary Deafness on Martha’s Vineyard. Cam-
bridge, MA: Harvard University Press.
Hendriks, Bernadet
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD
Dissertation, University of Amsterdam. Utrecht: LOT.
Hockett, Charles F.
1960 The Origin of Speech. In: Scientific American 203, 89⫺97.
Hodgson, Kenneth W.
1954 The Deaf and Their Problems: A Study in Special Education. New York: Philosophical
Library.
Hong, Sung-Eun
2003 Empirical Survey of Animal Classifiers in Korean Sign Language (KSL). In: Sign Lan-
guage & Linguistics 6(1), 77⫺99.
Hong, Sung-Eun
2008 Eine Empirische Untersuchung zu Kongruenzverben in der Koreanischen Gebärdenspra-
che. Hamburg: Signum.
Johnston, Trevor
1989 Auslan: The Sign Language of the Australian Deaf Community. Vol. 1. PhD Disserta-
tion, University of Sydney.
Johnston, Trevor
2003 BSL, Auslan and NZSL: Three Signed Languages or One? In: Baker, Anne/Bogaerde,
Beppie van den/Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign Language
Research: Selected Papers from TISLR 2000. Hamburg: Signum, 47⫺69.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language (Auslan): An Introduction to Sign Language Linguistics.
Cambridge: Cambridge University Press.
Jouison, Paul
1990 Analysis and Linear Transcription of Sign Language Discourse. In: Prillwitz, Siegmund/
Vollhaber, Thomas (eds.), Current Trends in European Sign Language Research: Pro-
ceedings of the 3rd European Congress on Sign Language Research. Hamburg: Signum,
337⫺353.
Kabatilo, Ziad Salah
1982 A Pilot Description of Indigenous Signs Used by Deaf Persons in Jordan. PhD Disserta-
tion, Michigan State University.
Kamei, Nobutaka
2006 History of Deaf People and Sign Languages in Africa: Fieldwork in the ‘Kingdom’ De-
rived from Andrew J. Foster. Tokyo: Akashi Shoten Co., Ltd.
Kegl, Judy/Senghas, Ann/Coppola, Marie
1999 Creation through Contact: Sign Language Emergence and Sign Language Change in
Nicaragua. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creoli-
zation, Diachrony, and Development. Cambridge, MA: MIT Press, 179⫺237.
Kendon, Adam
1989 Sign Languages of Aboriginal Australia: Cultural, Semiotic, and Communicative Per-
spectives. Cambridge: Cambridge University Press.
Kendon, Adam
2002 Historical Observations on the Relationship Between Research on Sign Languages and
Language Origins Theory. In: Armstrong, David F./Karchmer, Michael A./Van Cleve,
John V. (eds.), The Study of Signed Languages: Essays in Honor of William C. Stokoe.
Washington, DC: Gallaudet University Press, 35⫺52.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge: Harvard University Press.
Kyle, Jim (ed.)
1987 Sign and School: Using Signs in Deaf Children’s Development. Clevedon: Multilingual
Matters.
Kyle, James/Woll, Bencie
1985 Sign Language: The Study of Deaf People and Deafness. Cambridge: Cambridge Uni-
versity Press.
Ladd, Paddy
2003 Understanding Deaf Culture: In Search of Deafhood. Clevedon: Multilingual Matters.
Lane, Harlan L.
1984 When the Mind Hears: A History of the Deaf. New York: Random House.
Lane, Harlan L.
1992 The Mask of Benevolence: Disabling the Deaf Community. New York: Knopf.
Lane, Harlan L./Hoffmeister, Robert/Bahan, Benjamin J.
1996 A Journey Into the Deaf-World. San Diego, CA: DawnSignPress.
Lewis, M. Paul (ed.)
2009 Ethnologue: Languages of the World, 16th edition. Dallas, TX: SIL International. [On-
line version: www.ethnologue.com]
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Lucas, Ceil/Valli, Clayton
1989 Language Contact in the American Deaf Community. In: Lucas, Ceil (ed.), The Socio-
linguistics of the Deaf Community. San Diego, CA: Academic Press, 11⫺40.
Lucas, Ceil/Valli, Clayton
1992 Language Contact in the American Deaf Community. San Diego, CA: Academic Press.
Lupton, Linda/Salmons, Joe
1996 A Re-analysis of the Creole Status of American Sign Language. In: Sign Language
Studies 90, 80⫺94.
Massone, Maria Ignacia
1994 Lengua de Señas Argentina: Análisis y Vocabulario Bilingüe. Buenos Aires: Edicial.
McBurney, Susan L.
2001 William Stokoe and the Discipline of Sign Language Linguistics. In: Historiographia
Linguistica 28(1/2), 143⫺186.
McKee, David/Kennedy, Graeme
2000 Lexical Comparison of Signs from American, Australian, British and New Zealand Sign
Languages. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited:
An Anthology to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence
Erlbaum, 49⫺76.
Meier, Richard P.
1984 Sign as Creole. In: Behavioral and Brain Sciences 7, 201⫺202.
Meier, Richard P.
2002 Why Different, Why the Same? Explaining Effects and Non-effects of Modality Upon
Linguistic Structure in Sign and Speech. In: Meier, Richard P./Cormier, Kearsy/Quinto-
Pozos, David (eds.), Modality and Structure in Signed and Spoken Language. Cam-
bridge: Cambridge University Press, 1⫺25.
Meir, Irit/Sandler, Wendy
2007 A Language in Space: The Story of Israeli Sign Language. New York: Lawrence Erl-
baum.
Meir, Irit/Sandler, Wendy/Padden, Carol/Aronoff, Mark
2010 Emerging Sign Languages. In: Marschark, Marc/Spencer, Patricia (eds.), Oxford Hand-
book of Deaf Studies, Language, and Education, Vol. 2. Oxford: Oxford University
Press, 267⫺280.
Miles, M.
2000 Signing in the Seraglio: Mutes, Dwarfs and Jestures at the Ottoman Court 1500⫺1700.
In: Disability and Society 15(1), 115⫺134.
Miles, M.
2005 Deaf People Living and Communicating in African Histories, c. 960s⫺1960s [New,
much extended Version 5.01, incorporating an article first published in Disability &
Society 19, 531⫺545, 2004, titled then: Locating Deaf People, Gesture and Sign in Afri-
can Histories, 1450s⫺1950s. Available online: www.independentliving.org/docs7/
miles2005a.html].
Miller, Christopher
2001 The Adaptation of Loan Words in Quebec Sign Language: Multiple Sources, Multiple
Processes. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages: A Cross-
linguistic Investigation of Word Formation. Mahwah, NJ: Lawrence Erlbaum, 139⫺173.
Mitchell, Ross E./Karchmer, Michael A.
2004 Chasing the Mythical Ten Percent: Parental Hearing Status of Deaf and Hard of Hear-
ing Students in the United States. In: Sign Language Studies 4(2), 138⫺163.
Moores, Donald F.
1987 Educating the Deaf: Psychology, Principles, and Practices. Boston: Houghton Mifflin.
Morgan, Michael W.
2004 Tracing the Family Tree: Tree-reconstruction of Two Sign Language Families. Poster
Presented at 8th International Conference on Theoretical Issues in Sign Language Re-
search, Barcelona, Spain.
Morgan, Michael W.
2006 Interrogatives and Negatives in Japanese Sign Language. In: Zeshan, Ulrike (ed.), Inter-
rogatives and Negatives in Signed Languages. Nijmegen: Ishara Press, 91⫺127.
Newport, Elissa/Supalla, Ted
2000 Sign Language Research at the Millennium. In: Emmorey, Karen/Lane, Harlan (eds.),
The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward
Klima. Mahwah, NJ: Lawrence Erlbaum, 103⫺114.
Nyst, Victoria
1999 Variation in Handshape in Uganda Sign Language. Diploma Thesis, Leiden University,
The Netherlands.
Nyst, Victoria
2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, Uni-
versity of Amsterdam. Utrecht: LOT.
Okombo, Okoth/Akach, Philemon A.O.
1997 Language Convergence and Wave Phenomena in the Growth of a National Sign Lan-
guage in Kenya. In: International Journal of the Sociology of Language 125, 131⫺144.
Oviedo, Alejandro
2001 Apuntes Para una Gramática de la Lengua de Señas Colombiana. Santafé de Bogotá,
Colombia: INSOR.
Padden, Carol A.
1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation,
University of California, San Diego. [Published in 1988 in Outstanding Dissertations in
Linguistics, Series IV, New York: Garland]
Padden, Carol/Humphries, Tom
1988 Deaf in America: Voices from a Culture. Cambridge, MA: Harvard University Press.
Penn, Claire/Reagan, Timothy
1994 The Properties of South African Sign Language: Lexical Diversity and Syntactic Unity.
In: Sign Language Studies 85, 317⫺325.
Perniss, Pamela/Pfau, Roland/Steinbach, Markus
2007 Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de
Gruyter.
Pilleux, Mauricio/Cuevas, Hernán/Avalos, Erica
1991 El Lenguaje de Señas: Análisis Sintáctico-Sémantico. Valdiva: Universidad Austral de
Chile.
Pizzuto, Elena/Volterra, Virginia
1996 Sign Language Lexicon: Cross-linguistic and Cross-cultural Comparisons. Report pre-
pared for the Commission of the European Communities, Human Capital and Mobility
Programme; Project: Intersign: Multi Professional Study of Sign Language and the Deaf
Community in Europe (Network).
Plann, Susan
1993 Pedro Ponce de León: Myth and Reality. In: Van Cleve, John V. (ed.), Deaf History
Unveiled: Interpretations from the New Scholarship. Washington, DC: Gallaudet Uni-
versity Press, 1⫺12.
Plann, Susan
1997 A Silent Minority: Deaf Education in Spain 1550⫺1835. Berkeley, CA: University of
California Press.
Plato; Jowett, Benjamin
1931 The Dialogues of Plato. London: Oxford University Press.
Prillwitz, Siegmund/Leven, Regina
1985 Skizzen zu einer Grammatik der Deutschen Gebärdensprache. Hamburg: Forschungs-
stelle Deutsche Gebärdensprache.
Quadros, Ronice Müller de
1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifícia Universidade
Católica do Rio Grande do Sul, Brazil.
Quartararo, Anne
1993 Republicanism, Deaf Identity, and the Career of Henri Gaillard in Late-Nineteenth-
Century France. In: Van Cleve, John V. (ed.), Deaf History Unveiled: Interpretations
from the New Scholarship. Washington, DC: Gallaudet University Press, 40⫺52.
Radutzky, Elena
1993 The Education of Deaf People in Italy and the Use of Italian Sign Language. In: Van
Cleve, John V. (ed.), Deaf History Unveiled: Interpretations from the New Scholarship.
Washington, DC: Gallaudet University Press, 237⫺251.
Rissanen, Terhi
1986 The Basic Structure of Finnish Sign Language. In: Tervoort, Bernard T. (ed.), Signs
of Life: Proceedings of the Second European Congress on Sign Language Research,
Amsterdam, July 14⫺18, 1985. Amsterdam: University of Amsterdam, 42⫺46.
Sayers, Edna Edith/Gates, Diana
2008 Lydia Huntley Sigourney and the Beginnings of American Deaf Education in Hartford:
It Takes a Village. In: Sign Language Studies 8(4), 369⫺411.
Schermer, Trude
2003 From Variant to Standard: An Overview of the Standardization Process of the Lexicon
of Sign Language of the Netherlands over Two Decades. In: Sign Language Studies
3(4), 469⫺486.
Schlesinger, Izchak M./Namir, Lila
1976 Recent Research on Israeli Sign Language. In: Report on the 4th International Confer-
ence on Deafness, Tel Aviv, March 18⫺23, 1973. Silver Spring, MD: National Associa-
tion of the Deaf, 114.
Schmaling, Constanze
2000 Maganar Hannu: Language of the Hands. A Descriptive Analysis of Hausa Sign Lan-
guage. Hamburg: Signum.
Schröder, Odd-Inge
1983 Fonologien i Norsk Tegnsprog. In: Tellevik, Jon M./Vogt-Svendsen, Marit/Schröder,
Odd-Inge (eds.), Tegnspråk og Undervisning av Døve Barn: Nordisk Seminar, Trond-
heim, Juni 1982. Trondheim: Tapir, 39⫺53.
Segouat, Jérémie/Braffort, Annelies
2009 Toward Categorization of Sign Language Corpora. In: Proceedings of the 2nd Workshop
on Building and Using Comparable Corpora. Suntec, Singapore, August 2009, 64⫺67.
Seigel, J.P.
1969 The Enlightenment and the Evolution of a Language of Signs in France and England.
In: Journal of the History of Ideas 30(1), 96⫺115.
Smith, Wayne H.
1989 The Morphological Characteristics of Verbs in Taiwan Sign Language. PhD Dissertation,
Indiana University, Bloomington.
Smith, Wayne H.
2005 Taiwan Sign Language Research: An Historical Overview. In: Language and Linguistics
6(2), 187⫺215.
Stokoe, William C.
1960 Sign Language Structure: An Outline of the Visual Communication System of the
American Deaf. In: Studies in Linguistics Occasional Papers 8. Buffalo: University of
Buffalo Press. [Re-issued 2005, Journal of Deaf Studies and Deaf Education 10(1), 3⫺
37]
Stokoe, William C.
1972 Semiotics and Human Sign Languages. The Hague: Mouton.
Stokoe, William C.
1979 Language and the Deaf Experience. In: Alatis, J./Tucker, G.R. (eds.), Proceedings from
the 30th Annual Georgetown University Round Table on Languages and Linguistics,
222⫺230.
Stokoe, William C./Casterline, Dorothy/Croneberg, Carl
1965 A Dictionary of American Sign Language on Linguistic Principles. Washington, DC:
Gallaudet College Press.
Stone, Christopher/Woll, Bencie
2008 Dumb O Jemmy and Others: Deaf People, Interpreters and the London Courts in the
Eighteenth and Nineteenth Centuries. In: Sign Language Studies 8(3), 226⫺240.
Supalla, Ted
1982 Structure and Acquisition of Verbs of Motion and Location in American Sign Language.
PhD Dissertation, University of California, San Diego.
Supalla, Ted/Newport, Elissa L.
1978 How Many Seats in a Chair? The Derivation of Nouns and Verbs in American Sign
Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language
Research. New York: Academic Press, 181⫺214.
Supalla, Ted/Webb, Rebecca
1995 The Grammar of International Sign: A New Look at Pidgin Languages. In: Emmorey,
Karen/Reilly, Judy S. (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence
Erlbaum, 333⫺352.
Tang, Gladys
2007 Hong Kong Sign Language: A Trilingual Dictionary with Linguistic Descriptions. Hong
Kong: Chinese University Press.
Tanokami, Takashi/Peng, Fred C.
1976 Shuwa o Megutte: On the Nature of Sign Language. Hiroshima, Japan: Bunka Hyoron
Shuppan.
Tervoort, Bernard
1954 Structurele Analyse van Visueel Taalgebruik Binnen een Groep Dove Kinderen [Struc-
tural Analysis of Visual Language Use in a Group of Deaf Children]. Amsterdam:
Noord-Hollandsche Uitgevers Maatschappij.
Tervoort, Bernard
1961 Esoteric Symbolism in the Communication Behavior of Young Deaf Children. In:
American Annals of the Deaf 106, 436⫺480.
Tervoort, Bernard
1994 Sign Languages in Europe: History and Research. In: Asher, Ronald E./Simpson, J.
M. Y. (eds.), The Encyclopedia of Language and Linguistics. Oxford: Pergamon Press,
3923⫺3926.
Trager, George L./Smith, Henry Lee
1951 An Outline of English Structure (Studies in Linguistics: Occasional Papers 3). Norman,
OK: Battenberg Press.
Van Cleve, John V./Crouch, Barry A.
1989 A Place of Their Own: Creating the Deaf Community in America. Washington, DC:
Gallaudet University Press.
Vasishta, Madan/Woodward, James/Wilson, Kirk L.
1978 Sign Languages in India: Regional Variations Within the Deaf Populations. In: Indian
Journal of Applied Linguistics 4(2), 66⫺74.
Vogt-Svendsen, Marit
1983 Lip Movements in Norwegian Sign Language. In: Kyle, James/Woll, Bencie (eds.), Lan-
guage in Sign: An International Perspective on Sign Language. London: Croom Helm,
85⫺96.
Volterra, Virginia (ed.)
1987 La Lingua Italiana dei Segni: La Comunicazione Visivo Gestuale dei Sordi. Bologna:
Il Mulino.
West, LaMont
1963 A Terminal Report Outlining the Research Problem, Procedure of Investigation and
Results to Date in the Study of Australian Aboriginal Sign Language. Sydney. AIATSIS
Call number: MS 2456/1 (Item 3).
Winzer, Margret A.
1987 Canada. In: Van Cleve, John V. (ed.), Gallaudet Encyclopedia of Deaf People and Deaf-
ness. Vol. 1. A⫺G. New York, NY: McGraw-Hill, 164⫺168.
Wittmann, Henri
1991 Classification Linguistique des Langues Signées Nonvocalement. In: Revue Québécoise
de Linguistique Théorique et Appliquée: Les Langues Signées 10(1), 215⫺288.
Woll, Bencie
1987 Historical and Comparative Aspects of BSL. In: Kyle, Jim (ed.), Sign and School: Using
Signs in Deaf Children’s Development. Clevedon: Multilingual Matters, 12⫺34.
Woll, Bencie
2003 Modality, Universality and the Similarities Among Sign Languages: An Historical Per-
spective. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-
linguistic Perspectives in Sign Language Research: Selected Papers from TISLR 2000.
Hamburg: Signum, 17⫺30.
Woll, Bencie
2006 Sign Language: History. In: Brown, Keith (ed.), The Encyclopedia of Language and
Linguistics. Amsterdam: Elsevier, 307⫺310.
Woll, Bencie/Sutton-Spence, Rachel/Elton, Frances
2001 Multilingualism: The Global Approach to Sign Languages. In: Lucas, Ceil (ed.), The
Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press, 8⫺32.
Woodward, James
1973 Implicational Lects on the Deaf Diglossic Continuum. PhD Dissertation, Georgetown
University, Washington, DC.
Woodward, James
1976 Signs of Change: Historical Variation in ASL. In: Sign Language Studies 5(10), 81⫺94.
Woodward, James
1978a Historical Basis of ASL. In: Siple, Patricia (ed.), Understanding Language through Sign
Language Research. New York, NY: Academic Press, 333⫺348.
Woodward, James
1978b All in the Family: Kinship Lexicalization Across Sign Languages. In: Sign Language
Studies 19, 121⫺138.
Woodward, James
1991 Sign Language Varieties in Costa Rica. In: Sign Language Studies 73, 329⫺346.
Woodward, James
1993 Lexical Evidence for the Existence of South Asian and East Asian Sign Language
Families. In: Journal of Asian Pacific Communication 4(2), 91⫺106.
Woodward, James
1996 Modern Standard Thai Sign Language: Influence from ASL, and Its Relationship to
Original Thai Sign Varieties. In: Sign Language Studies 92, 227⫺252.
Woodward, James
2000 Sign Language and Sign Language Families in Thailand and Viet Nam. In: Emmorey,
Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor
Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 23⫺47.
Yau, Shun-Chiu
1991 La Langue des Signes Chinoise. In: Cahiers de Linguistique Asie Orientale 20(1),
138⫺142.
Zeshan, Ulrike
2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Ben-
jamins.
Zeshan, Ulrike (ed.)
2006 Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press.
Zeshan, Ulrike
2008 Roots, Leaves and Branches: The Typology of Sign Languages. In: Quadros, Ronice M.
de (ed.), Sign Languages: Spinning and Unraveling the Past, Present and Future. Pro-
ceedings of the 9th International Conference on Theoretical Issues in Sign Language
Research, Florianopolis, Brazil, December 2006. Petropolis, Brazil: Editora Arara Azul,
671⫺695.

Susan McBurney, Spokane, Washington (USA)


39. Deaf education and bilingualism


1. Introduction
2. Early records of deaf education
3. Bimodal bilingualism at the societal level
4. Deaf education in the 21st century
5. Bilinguals
6. Conclusion
7. Literature

Abstract
In this chapter, the major findings from research on deaf education and bilingualism are
reviewed. Following a short introduction to (sign) bilingualism, the second section
provides an overview of the history of deaf education from the earliest records until the
late 19th century, highlighting the main changes in philosophy and methods at the levels
of provision and orientation. In section 3, the major factors that have determined the
path toward sign bilingualism in the deaf communities, in particular, at the levels of
language policy and education, are discussed. Current developments and challenges in
deaf education, as reflected in the recent diversification of education methods, are ad-
dressed in section 4, with a focus on bilingual education conceptions. The final section
is centred on deaf bilinguals, their language development, and patterns of language use,
including cross-modal contact phenomena in educational and other sociolinguistic con-
texts.

1. Introduction
Bilingualism is not the exception, but rather the norm for the greater part of the world
population (Baker 2001; Grosjean 1982; Romaine 1996; Tracy/Gawlitzek-Maiwald
2000). Maintenance and promotion of bilingualism at the societal level are related to
the status of the languages involved (majority or prestige language vs. minority lan-
guage). Indeed, while social and economic advantages are attributed to the ability to
use various ‘prestige’ languages, minority bilingualism is generally associated with low
academic achievements and social problems. This apparent paradox reflects the symbolic
value of language and the continuing predominance of the nation-state ideology
in the majority of Western countries: language is regarded as one of the most powerful
guarantors of social cohesion in a state, and language policies are accordingly
monolingual in orientation. The situation is markedly different in countries with a
longstanding tradition of multilingualism, as is the case in India (Mohanty 2006).
The factors that determine the vitality of two or more languages in a given social
context may change over time; so may the patterns of language use in a given speech
community, indicating that bilingualism is a dynamic phenomenon. At the level of
language users, the different types of bilingualism encountered are commonly de-
scribed in terms of a continuum ranging from balanced bilingualism to partial or semi-
bilingualism (Romaine 1996). In view of the variety of acquisition types and compe-
tence levels attained, some authors have proposed to define bilingualism as the regular
use of more than one language in everyday life (Grosjean 1982). Following this broad
definition of bilingualism, most members of deaf communities are bilingual, as they
regularly use the community’s sign language and the spoken or written language of the
larger hearing society (Ann 2001). Since this type of bilingualism involves two lan-
guages of different modalities of expression, it is commonly referred to as bimodal
bilingualism, sign bilingualism, or cross-modal bilingualism. In addition, many deaf
individuals know and use other sign languages ⫺ or spoken/written languages (for
discussion of additional dimensions of bilingualism in deaf communities, see chapter
35, Language Contact and Borrowing). Linguistic competences in sign language and
spoken language can vary substantially (Grosjean 2008; Lucas/Valli 1992; Padden
1998). Linguistic profiles range from native fluency in one or both languages to de-
layed, partial, or even only rudimentary skills. The reasons for this variation relate to
such diverse factors as the age at which hearing loss occurred, the degree of deafness,
the age of exposure to the respective languages, the hearing status of the parents and
their family language policy, schooling, and social networks (Emmorey 2002; Fischer
1998; Grosjean 2008; van den Bogaerde/Baker 2002).
In recent years, sociolinguistic and psycholinguistic research has shown that sign
bilingualism is as dynamic as other types of bilingualism, and that bilingual signers,
like other bilinguals, skilfully exploit their linguistic resources. It is important to note
in this context that deaf individuals have only recently been recognised as bilingual
language users (Grosjean 2008; Plaza-Pust/Morales-López 2008; Padden 1998) follow-
ing the gradual recognition of sign languages as bona fide languages from the 1960s
onwards. Since then, deaf activism worldwide has led to a wider perception of deaf
communities as linguistic minority communities, the official recognition of sign lan-
guages and their inclusion in the education of deaf children being among their cen-
tral demands.
However, questions concerning the use of sign languages and spoken/written lan-
guages in the education of deaf individuals, and the impact of signing on the develop-
ment of spoken language have preoccupied professionals and scholars for the last two
centuries (Bagga-Gupta 2004; Tellings 1995). Beyond the controversy over the most
appropriate educational methods, the establishment of deaf schools has been of critical
importance in the development of deaf communities and their sign languages (Erting/
Kuntze 2008; Ladd 2003; Monaghan 2003; Padden 1998), the emergence of Nicaraguan
Sign Language being a recent example (Senghas 2003; see chapter 36 for discussion).
Education also plays a prominent role in relation to bilingualism in other minority
groups. However, bilingual development of sign language and spoken/written language
in deaf children is determined by two unusual factors, namely, (i) the unequal status
of the languages at the level of parent-child transmission (more than 90 % of deaf
children are born to hearing, non-signing parents) and (ii) the unequal accessibility of
the languages (no or only limited access to auditory input). The response of the educa-
tional (and political) institutions to the linguistic needs of deaf students is a major
theme in bimodal bilingualism research. The diversity of approaches to communication
with deaf children can be described as a continuum that ranges from a strictly monolin-
gual (oralist) to a (sign) bilingual model of deaf education, with variation in concepts
of bilingual education also reflecting different objectives in language planning in rela-
tion to sign languages. Another major topic in the field concerns the interaction of the
two languages in their acquisition and use, with no consensus in the domain of deaf
education on the role of sign language in the acquisition of literacy (Chamberlain/
Mayberry 2000).

2. Early records of deaf education

Until fairly recently, little was known about sign languages, the language behaviour of
their users, and the status of these languages in education and society at large. Studies
on the early records of deaf individuals’ use of signs to communicate and the first
attempts to educate deaf children (Lang 2003) report that manual means of communi-
cation ⫺ where they were noted ⫺ were not referred to as ‘language’ on a par with
spoken languages, and deaf individuals were not regarded as bilinguals. However, ques-
tions about the ‘universal’ nature of gesture/signing (Woll 2003), and the use of manual
means of communication (in particular, manual alphabets) have been addressed since
the beginnings of deaf education (for the development of deaf education, see also
chapter 38, History of Sign Languages and Sign Language Linguistics).
The first records of deaf education date from the 16th century. Deaf children from
aristocratic families were taught individually by private tutors (often members of reli-
gious congregations, such as monks). Spoken language was taught to these children
with two main objectives: legal (i.e. to enable them to inherit) and religious. The earli-
est documents report on the teachers’ successes rather than describe the methods used
(Gascón-Ricao/Storch de Gracia y Asensio 2004; Monaghan 2003). As a result, little
is known about the methods used around 1545 by Brother Ponce de León, a Spanish
Benedictine monk, commonly regarded as the first teacher of the deaf. There are,
however, some indications that he used a manual alphabet with his pupils, a practice
that spread to several European countries following the publication of the first book
about deaf education by Juan de Pablo Bonet in 1620. Juan de Pablo Bonet acknowl-
edges signing as the natural means of communication among the deaf, but advises
against its use in the education of deaf children, reflecting his main educational aim:
the teaching of the spoken language. Publications on deaf education which mention
the use of signs to support the teaching of the spoken/written language appeared soon
after in Britain and France (Gascón-Ricao/Storch de Gracia y Asensio 2004; Woll 2003;
Tellings 1995).
Classes for deaf children were established more than a century later, in the 1760s,
at Thomas Braidwood’s private academy in Edinburgh, and at the French National
Institute for Deaf-Mutes in Paris, founded by the Abbé de l’Epée. The latter was the
first public school for deaf children, including not only children of wealthy families but
also charity pupils. Soon after, schools for deaf children were founded in other centres
across Europe (for example, in Leipzig in 1778, in Vienna in 1779, and in Madrid in
1795) (Monaghan 2003). At that time, the state and religious groups (often the Catholic
Church) were the major stakeholders in the education of deaf children. Priests, nuns,
and monks founded schools in other countries throughout the world. In some cases,
deaf and hearing teachers who had worked in schools for the deaf in Europe went on
to establish educational institutions abroad. For example, the first school for the deaf
in Brazil was founded in Rio de Janeiro in 1857 by Huet, a deaf teacher from Paris
(Berenz 2003).
By the end of the 19th century, education had reached many deaf children; however,
in most countries it did not become compulsory until much later, often only in the
second half of the 20th century (see various chapters
in Monaghan et al. 2003).
Developments in deaf education from the late 18th century to the end of the 19th cen-
tury were also crucial in relation to policies about educational objectives and communi-
cation. While the goal of teaching deaf children the spoken/written language was
shared, the means to achieve this goal became a matter of heated debate that contin-
ues to divide the field today (Gascón-Ricao/Storch de Gracia y Asensio 2004; Lane/
Hoffmeister/Bahan 1996; Tellings 1995).
De l’Epée, the founder of the Paris school, believed that sign language was the
natural language of deaf individuals. In his school, deaf pupils were taught written
language by means of a signed system (‘methodical signs’) which he had developed,
comprising the signs used by deaf people in Paris and additional signs invented to
convey the grammatical features of French. The impact of his teaching went well be-
yond Paris, as several other schools that adopted this method were established in
France and a teacher trained in this tradition, Laurent Clerc, established the American
Asylum for the Deaf in Hartford (Connecticut) in 1817 together with Thomas Gallau-
det. Teachers trained in this institution later established other schools for deaf children
using the same approach throughout the US (Lane/Hoffmeister/Bahan 1996).
The spread of this philosophy ⫺ which promoted the use of signs (even though these
included artificial signs), recognised the value of sign language for communication with
deaf children, and acknowledged the role of sign language in teaching written language ⫺
was challenged by the increasing influence of those who argued in favour of the oralist ap-
proach. Oralism regarded spoken language as essential for a child’s cognitive develop-
ment and for full integration into society, restricted communication to speech and
lipreading, and regarded written language learning as secondary to the mastery of the
spoken language. One of the most influential advocates of the oral method in deaf
education, and of its spread in Germany and allied countries, was Samuel Hei-
nicke, a private tutor who founded the first school for the deaf in Germany in 1778.
The year 1880 is identified as a turning point in the history of deaf education.
During the International Congress on the Education of the Deaf held in Milan in that
year, a resolution was adopted in which the use of signs in the education of deaf
children was rejected, and the superiority of the oral method affirmed (Gascón-Ricao/
Storch de Gracia y Asensio 2004; Lane/Hoffmeister/Bahan 1996). The impact of this
congress, attended by hearing professionals from only a few countries, must be under-
stood in relation to more general social and political developments towards the end of
the 19th century (Monaghan 2003; Ladd 2003). Over the following years, most schools
switched to an oralist educational policy, so that by the early 20th century, oralism was
dominant in deaf education.
While the Milan congress was a major setback for the role of sign language in deaf
education, sign languages continued to be used in deaf communities. Indeed, although
oralist in educational orientation, residential schools for the deaf continued to contrib-
ute to the development and maintenance of many sign languages throughout the fol-
lowing decades as sign languages were transmitted from one generation to another
through communication among the children outside the classroom. These institutions
can therefore be regarded as important sites of language contact (Lucas/Valli 1992),
and by extension, of sign bilingualism, even though deaf people were not specifically
aware of their bilinguality at the time.
It is important to emphasise that the Milan resolution did not have an immediate
effect in all countries (Monaghan et al. 2003). In some, the shift towards oralism only
occurred decades later, as was the case in Ireland, where signing continued to be used
in schools until well into the 1940s (LeMaster 2003). In other countries, for example,
the US, sign language retained a role in deaf education in some schools. In China,
oralism was introduced only in the 1950s based on reports about the use of this method
in Russia (Yang 2008).

3. Bimodal bilingualism at the societal level


In the course of the last three decades, administrations in several countries have been
confronted with questions concerning language planning measures targeting sign lan-
guages, such as their legal recognition, their inclusion in deaf children’s education, or
the provision of interpretation. Grassroots pressure by deaf associations and related
interest groups has been instrumental in getting these issues to appear on the agendas
of governments throughout the world. Importantly, much of the impetus for local activ-
ities has been driven by international deaf activism, based on concepts such as Deaf
community and Deaf culture that are linked to the notion of sign language as a symbol
of identity (Ladd 2003). Indeed, hearing loss alone does not determine Deaf community
membership, which crucially depends on the choice of sign language as the
preferred language (Woll/Ladd 2003) and on solidarity, based on the concept of attitudinal
deafness (Ladd 2003; Erting/Kuntze 2008).

3.1. The Deaf community as a linguistic minority group

The development of a socio-cultural (or socio-anthropological) view of deafness and

related demands for the legal recognition of sign languages and their users as members
of linguistic minorities throughout the world are examples of the internationalisation
of political activism (recently referred to in terms of a ‘globalization of Deafhood’,
Erting/Kuntze 2008), which is reflected in similar sociolinguistic changes that have af-
fected deaf communities in several countries (Monaghan et al. 2003).
Historically, the gradual self-assertion of deaf individuals as members of a linguistic
minority from the late 20th century onwards is tied to the insights obtained from linguistic
research on sign languages, on the one hand, and socio-political developments toward
the empowerment of linguistic minorities, on the other hand. Following the recognition
of sign languages as full languages in the 1960s, deaf people themselves felt empowered
to claim their rights as a linguistic minority group, on a par with other linguistic minor-
ity groups that were granted linguistic rights at the time (Morales-López 2008). The
official recognition of sign languages as well as their inclusion in the education of
deaf children are central demands. In Sweden, where the provision of home-language
teaching to minority and immigrant children was stipulated by the 1977 ‘home language
reform’ act (Bagga-Gupta/Domfors 2003), sign language was recognised in 1981 as
the first and natural language of deaf individuals. The work of Swedish sign language
researchers (inspired by Stokoe’s research into ASL), Deaf community members, and
NGOs brought about this change at the level of language policy that would soon be
reflected in the compulsory use of sign language as the language of instruction at
schools with deaf children. In the US, the Deaf President Now movement, organised
by Gallaudet University students in March 1988 and leading to the appointment of
the first deaf president of that university, not only raised awareness of the Deaf
community in the hearing society; it was also “above all a reaffirmation of Deaf culture,
and it brought about the first worldwide celebration of that culture, a congress called
The Deaf Way, held in Washington, DC, the following year”, with thousands of deaf
participants from all over the world (Lane/Hoffmeister/Bahan 1996, 130). These two
events gave impetus to the Deaf movement, which has influenced political activism of
deaf communities in many countries.
Sociolinguistic research into deaf communities around the globe has provided fur-
ther insights into how developments at a broad international level can combine with
local sociolinguistic phenomena in some countries, while they may have little impact
on the situation of deaf people in others. Positive examples include Spain and South
Africa. In Spain, political activism of deaf groups throughout the country began in the
1990s, influenced by the worldwide Deaf movement (Gras 2008) and by the socio-
political changes in that country concerning the linguistic rights granted to regional
language minorities after the restoration of democracy in the late 1970s (Morales-
López 2008). A similar relationship between political reforms and the activities of local
deaf communities is reported by Aarons and Reynolds (2003) for South Africa, where
the recognition of South African Sign Language (SASL) was put on the political
agenda after the end of the apartheid regime, with the effect that the 1996 constitution
protects the rights of deaf people, including the use of SASL. In contrast, in many other
African countries, socio-cultural and economic circumstances (widespread poverty, lack
of universal primary education, negative attitudes towards deafness) work against the
building of deaf communities (Kiyaga/Moores 2003).
In some cases, Deaf communities in developing countries have been influenced by
Deaf communities from other countries. One example is discussed in Senghas (2003)
regarding the assistance provided by the Swedish Deaf community to the Nicaraguan
Deaf community in the process of its formation and organisation, through exchanges
between members of the Nicaraguan and the Swedish Deaf communities; the Swedish
Deaf community also funded the centre for Deaf activities in Managua.
Despite the differences in the timing of the ‘awakening’ of the deaf communities in
different countries, the developments sketched here point to the significance of the
“process by which Deaf individuals come to actualise their Deaf identity” (Ladd 2003,
xviii) and the nature of Deaf people’s relationships to each other ⫺ two dimensions
that are captured by the concept of Deafhood, developed by Paddy Ladd in the 1990s
(Ladd 2003). Padden and Humphries (2005, 157) highlight the sense of pride brought
about by the recognition of sign languages as full languages: “To possess a language
that is not quite like other languages, yet equal to them, is a powerful realization for
a group of people who have long felt their language disrespected and besieged by
others’ attempts to eliminate it”.
Indeed, signers’ reports on their own language socialisation and their lack of aware-
ness that they were bilingual (Kuntze 1998) are an indication of the effect of oralism
on the identities of deaf individuals. It should be noted in this context that hearing
people with deaf parents shared the experiences of their families as “members of two
cultures (Deaf and hearing), yet fully accepted by neither” (Ladd 2003, 157).

3.2. Language planning

Until the end of the 20th century, language planning and language policies negatively
impacted on bilingualism in Deaf communities. The situation has changed as measures
have been taken in some countries that specifically target sign languages and their
users (see chapter 37, Language Planning, for further discussion). There has, however,
been concern about whether the steps taken meet the linguistic and educational needs
of deaf individuals (Cokely 2005; Gras 2008; Morales-López 2008; Reagan 2001). Ab-
stracting away from local problems, studies conducted in various social contexts reveal
similar shortcomings in three major areas: (i) sign language standardisation, (ii) sign
language interpretation, and (iii) education.
Among the most controversial language planning measures are those that affect the
development of languages. Sign languages have been typically used in informal con-
texts, with a high degree of regional variation. The professionalization of interpreting,
the increased provision of interpreting in schools and other contexts, and the teaching
of sign languages to deaf and hearing learners have led to a demand for new
terminology and more formal registers. Standardisation
deriving from expansion of language function is often contentious because it affects
everyday communication in multiple ways. Communication problems may arise (e.g.
between sign language interpreters and their consumers) and educational materials
may either not be used (as is the case with many sign language dictionaries, see John-
ston 2003; Yang 2008) or be used in unforeseen ways (Gras 2008).
Changes at the legal level concerning recognition of sign language do not always
have the expected effects. In France, for example, the 1991 Act granted parents of deaf
children free choice with respect to the language used in the education of their chil-
dren, but did not stipulate that any concrete measures be taken, either with respect
to provisions to meet the needs of those choosing this option or with respect to the
organisation of bilingual teaching where it was being offered (Mugnier 2006). Aarons
and Reynolds (2003) describe a similar situation for South Africa regarding the 1996
South African Schools Act, which stipulated that SASL be used as the language of in-
struction.
In general, scholars agree that many of the shortcomings encountered are related
to the lack of a holistic approach in sign language planning that would be characterised
by coordinated action and involvement (Gras 2008; Morales-López 2008). Indeed, in
many social contexts, measures taken represent political ‘concessions’ to pressure
groups (deaf associations, educational professionals, parents of deaf children), often
made with little understanding of the requisites and effects of the steps taken. The
question of whether and how diverse and often conflicting objectives in the area of
deaf education are reconciled is addressed in the next section.
4. Deaf education in the 21st century

4.1. Diversification of educational methods

Moores and Martin (2006) identify three traditional concerns of deaf educators:
(i) Where should deaf children be taught? (ii) How should they be taught? (iii) What
should they be taught? From the establishment of the first schools for the deaf in the
18th century until today, the different answers to these questions are reflected in the
diversity of educational methods, including the bilingual approach to deaf education.
Developments leading to a diversification of educational options in many countries are
similar, reflecting, on the one hand, the impact of the international Deaf movement
and related demands for sign bilingual education, and, on the other hand, the more
general trend toward inclusive education. However, variation among educational systems
reflects socio-political and cultural characteristics unique to different countries.
It is important to note in this context that throughout the world, many deaf children
continue to have no access to education (Kiyaga/Moores 2003). Indeed, it has been
estimated that only 20 % of all deaf children worldwide have the opportunity to go to
school (Ladd 2003). In many developing countries, universal primary education is not
yet available; because resources are limited, efforts are concentrated on the provision
of general education. Deaf education, where it is available, is often provided by non-
governmental organisations (Kiyaga/Moores 2003).
Comparison of the different educational options available shows that provision of
deaf education varies along the same dimensions as those identified for other types of
bilingual education (Baker 2007): (a) status of the languages (minority vs. majority
language), (b) language competence(s) envisaged (full bilingualism or proficiency in
the majority language), (c) placement (segregation vs. mainstreaming), (d) language
backgrounds of children enrolled, and (e) allocation of the languages in the curriculum.
From a linguistic perspective, the spectrum of communication approaches used with
deaf children exists on a continuum that ranges from a strictly monolingual (oralist)
to a spoken/written language and sign language bilingual model of deaf education, with
intermediate options characterised either by the use of signs as a supportive means of
communication or by teaching of sign language as a second language (Plaza-Pust 2004).
Variation in the status assigned to sign language in deaf education bears many similar-
ities to the situation of other minorities, but there are also marked differences relating
to the accessibility of the minority vs. the majority language to deaf children and to
the types of intervention provided using each language. This is also reflected in the
terminological confusion that remains widespread in deaf education, where the term
‘signs’ or ‘signing’ is used to refer to any type of manual communication, without a
clear distinction between the use of individual signs,
artificially created signed systems, or natural sign languages. Only the latter are fully
developed, independent language systems, acquired naturally by deaf children of
deaf parents.
The first alternative approaches to the strictly oralist method were adopted in the
US in the 1970s as a response to the low linguistic and academic achievements of deaf
children educated orally (Chamberlain/Mayberry 2000). Against the backdrop of strict
oralism, the inclusion of signs to improve communication in the classroom marked an
important step. However, the objective of what is commonly referred to as the Total
Communication or Simultaneous Communication approaches in deaf education was
still mastery of the spoken language. For this purpose, artificial systems were devel-
oped, consisting of sign language elements and newly created signs (for example, Seeing
Essential English (SEE-1; Anthony 1971) or Signing Exact English (SEE-2; Gustason/
Zawolkow 1980) in the US). The use of these artificial systems to teach spoken lan-
guage combined with the relative ease of hearing teachers in their ‘mastery’ (since only
the lexicon had to be learned, rather than a different grammar) contributed to their
rapid spread in many countries including the US, Australia, New Zealand, Switzerland,
Germany, Thailand, and Taiwan (Monaghan 2003). It is important to note that the
creation and use of these systems for didactic purposes represents a case of language
planning with an assimilatory orientation (Reagan 2001). From a psycholinguistic per-
spective, these systems do not constitute a proper basis for the development of the
deaf child’s language faculty as they do not represent independent linguistic systems
(Johnson/Liddell/Erting 1989; Lane/Hoffmeister/Bahan 1996; Fischer 1998; Bavelier/
Newport/Supalla 2003). Moreover, adult models (hearing teachers and parents) com-
monly do not use them in a consistent manner, for example, frequently dropping func-
tional elements (Kuntze 2008). It is clear, therefore, that what is often described in the
educational area as the use of ‘sign language’ as a supportive means of communication
needs to be distinguished from the use of sign language as a language of instruction in
sign bilingual education programmes. Only in the latter case is the sign language of the
surrounding Deaf community, a natural language, used as the language of instruction.
In the second half of the 1980s, there was increasing awareness that Total Communi-
cation programmes were not delivering the results expected, particularly in relation to
literacy. Against the backdrop of the cultural movement of the Deaf community, the
Total Communication philosophy also clashed with the view of a Deaf community as
a linguistic minority group in that it was based on a medical view of deafness. That
signed systems were artificial modes of communication and not natural languages was
also reflected in studies documenting children’s adaptations of their signing to better
conform to the constraints of natural sign languages, for example, with respect to the
use of spatial grammar (Kuntze 2008). In addition, there was consistent evidence of
better academic results for deaf children of deaf parents (DCDP), which contributed
to an understanding of the linguistic and educational needs of the children, in particu-
lar, the relevance of access to a natural language early in childhood (Tellings 1995;
Hoffmeister 2000; Johnson/Liddell/Erting 1989).

4.2. Bilingual education

In recognising the relevance of sign language for the linguistic and cognitive develop-
ment of deaf children, the bilingual/bicultural approach to deaf education marked a
new phase in the history of deaf pedagogy (Johnson/Liddell/Erting 1989). Sweden pio-
neered this phase by recognising sign language as the first language of deaf people in
1981, and implementing in 1983 the Special School curriculum, with the aim of promot-
ing bilingualism (Bagga-Gupta/Domfors 2003; Svartholm 2007). The policy has re-
sulted in the implementation of a uniform bilingual education option (Bagga-Gupta/
Domfors 2003), which contrasts markedly with the range of options offered in other
countries as of the 1990s.
There is no comprehensive systematic comparison of bilingual education pro-
grammes internationally. However, some scholars have addressed the issue of variation
in the bilingual conceptions applied.

4.2.1. Sign language promotion

There are two main tenets of bilingual deaf education: (i) sign language is the primary
language of deaf children in terms of accessibility and development; (ii) sign language
is to be promoted as a ‘first’ language. How these tenets are operationalised varies in
relation to the choice of the languages of instruction, the educational setting, and the
provision of early intervention measures focussing on the development of a firm com-
petence in sign language as a prerequisite for subsequent education. The majority of
deaf children are born to non-signing hearing parents, so support for early intervention
is particularly important given the relevance of natural language input during the sensi-
tive period for language acquisition (Bavelier/Newport/Supalla 2003; Fischer 1998;
Grosjean 2008; Leuninger 2000; Mahshie 1997). However, this requirement is often not
met and the children enter bilingual education programmes with little or no language
competence.
Specific difficulties arise in interpreted education, where children attend regular
classes in a mainstream school, supported by sign language interpreting. In this type
of education, it is common to take the children’s sign language competence for granted,
with little effort put into the teaching of this language. In practice, many children are
required to learn the language whilst using the language to learn, receiving language
input from adult models who are mostly not native users of the language (Cokely
2005). In addition, these children often lack the opportunity to develop one important
component of bilingualism, namely the awareness of their own bilinguality and knowl-
edge about contrasts between their two languages (i.e. metalinguistic awareness), as
sign language is hardly ever included as a subject in its own right in the curriculum
(Morales-López 2008).

4.2.2. Spoken language promotion

In sign bilingual education programmes, the spoken/written language is commonly con-
ceived of as a second language (L2) (Bagga-Gupta 2004; Günther et al. 1999, 2004;
Knight/Swanwick 2002; Vercaingne-Ménard/Parisot/Dubuisson 2005; Yang 2008; Plaza-Pust/Morales-López 2008). In these programmes, the teaching of L2 literacy is ap-
proached via L1 sign language. In general, the written language is given prominence
over the spoken language; however, socio-political pressure to promote the spoken
language skills of deaf children varies from one country to another, affecting the status
assigned to this modality in bilingual teaching (Plaza-Pust 2004). Hence, many educa-
tional professionals in bilingual education programmes are confronted with the ethical
dilemma of how to deal with the political pressure to deliver good results in the spoken
language, on the one hand, and their knowledge about the impact of hearing impair-
ment on the effort required to learn the spoken language, on the other hand (Tellings
1995, 121).
Another development affecting the status of the spoken language pertains to the
increasing sophistication of technology ⫺ in particular, in the form of cochlear implants
(CIs) (Knoors 2006). As more and more cochlear-implanted children attend bilingual
programmes ⫺ a trend that reflects the overall increase in provision of these devices ⫺
the aims of bilingual education in relation to use of the spoken language need to be
redefined to match the linguistic capabilities and needs of these children.

4.2.3. Languages of instruction

One crucial variable in bilingual education pertains to the choice of the main lan-
guage(s) of instruction (Baker 2001). In some bilingual education programmes for deaf
children, all curriculum subjects are taught in sign language. In this case, the spoken/
written language is clearly defined as a subject in itself and taught as a second or
foreign language. Other programmes, for example, those in Hamburg and Berlin (Gün-
ther et al. 1999), opt for a so-called ‘continuous bilinguality’ in the classroom, put into
practice through team-teaching, with classes taught jointly by deaf and hearing
teachers.
In addition to spoken/written language and sign language, other codes and suppor-
tive systems of communication may be used, such as fingerspelling or Cued Speech
(LaSasso/Lamar Crain/Leybaert 2010). In programmes where instruction through sign language is provided for only part of the curriculum, the continuing use of spoken language-based signed systems, not only for teaching properties of the L2 but also for content matter, is the subject of ongoing heated debate. Opinions diverge on
whether these systems are more useful than sign language in the promotion of spoken
language acquisition in deaf children, given the ‘visualisation’ of certain grammatical
properties of that language, as is argued by their advocates (Mayer/Akamatsu 2003).
Critics maintain that the children are confronted with an artificial mixed system that
presupposes knowledge of the language which is actually to be learned (Kuntze 1998;
Wilbur 2000). Between these two positions, some concede the benefit of utilising signed systems in the teaching of the spoken language, whilst clearly disapproving of their use in place of sign language for the teaching of content matter.

4.2.4. Learners’ profiles

Teachers are confronted with a heterogeneous population of learners, with marked
individual differences not only in terms of degree of hearing loss, but also with respect
to prior educational experiences, linguistic profiles, and additional learning needs. In
some types of bilingual education (co-enrolment, interpreted education), deaf and
hearing children are taught in the same classroom. Indeed, particularly in the US,
a widespread alternative to bilingual education in special schools or self-contained
classrooms in mainstream schools is the provision of sign language interpreters in main-
stream classrooms. Interpreted education is also provided in Spain, particularly in sec-
ondary education. In Spain and other countries, the transition from primary to second-
ary education involves a change of institution, and often also a change of bilingual
methods used, as team-teaching found throughout primary education in some bilingual
programmes is not available in secondary education (Morales-López 2008). Variation
in learners’ profiles is often overlooked in these educational settings, even though
adaptations to meet the linguistic abilities and learning needs of deaf children are
necessary (Marschark et al. 2005).
Co-enrolment of deaf and hearing children has been offered in Australia, the US,
and in several countries in Europe (Ardito et al. 2008; de Courcy 2005; Krausneker
2008). While studies on this type of bilingual education converge in reporting positive results, mirroring the findings for Dual Language programmes with minority and majority language children in the US (Baker 2007), these programmes
are often offered for a limited time only. A bilingual programme in Vienna, for exam-
ple, was discontinued after four years.
For multiple reasons, including the temporary character of some bilingual pro-
grammes, or changes in orientation from primary to secondary education, many deaf
children are exposed to diverse methods in the course of their development, often
without preparation for changes affecting communication and teaching in their new
classrooms (Gras 2008; Plaza-Pust 2004).
Demographic changes relating to migration are also reflected in the changing deaf
population (Andrews/Covell 2006). There is general awareness of the challenges this
imposes on teaching and learning, in particular among professionals working in special
education. However, both in terms of research and teaching, the lack of alignment of
the spoken languages (and, at times, also sign languages) used at home and in school
remains largely unaccounted for. It is clear that the concept of bilingual education, if taken literally (that is, involving two languages only), does not recognise the diversity that characterises deaf populations in many countries. Moreover, owing to differences in educational systems, some deaf children from migrant families enrol in deaf schools without prior knowledge of any language, as deaf education was not available in their country of origin.
The increasing number of deaf children with cochlear implants (CI) ⫺ more than
half of the population in the UK, for example (Swanwick/Gregory 2007) ⫺ adds a new
dimension to the heterogeneity of linguistic profiles in deaf individuals. While most
children are educated in mainstream settings, there are many CI children attending
bilingual programmes, either because of low academic achievements in the mainstream
or because of late provision of a CI. The generalised rejection of sign language in the
education of these children in many countries contrasts with the continuing bilingual
orientation of education policy in Sweden, where the views of professionals and parents of deaf children with CIs in favour of a bilingual approach follow a pragmatic reasoning that acknowledges not only the benefits of bilingual education but also the fact that the CI is not a remedy for deafness and that its long-term use remains uncertain (Svartholm 2007).
In a similar vein, though based on the observation of remaining uncertainties concern-
ing children’s eventual success in using CIs, Bavelier, Newport, and Supalla (2003)
argue in favour of the use of sign language as a ‘safety net’.

4.2.5. Biculturalism

An aspect that continues to be controversial, and is also of relevance in discussions
about appropriate educational placements, concerns the notion of biculturalism in the
education of deaf children (Massone 2008; Mugnier 2006). Whilst sign bilingual educa-
tion is also bicultural for some educational professionals, the idea of deaf culture and
related bicultural components of deaf education are rejected by others. There are di-
verging views about whether sign bilingualism is the intended outcome (following the
model of maintenance bilingual education) or is intended to be a transitional phenom-
enon as an ‘educational tool’, as in other types of linguistic minority education (Baker
2001). The latter view, widespread among teaching professionals (see Mugnier (2006)
for a discussion of the situation in France; Massone (2008) for Argentina), commonly
attributes the status of a teaching tool to sign language, with no acknowledgment of
culture.
Apart from questions about the inclusion of deaf culture as an independent subject,
the discussion also affects the role assigned to deaf teachers as adult role models,
linguistically and culturally. As Humphries and MacDougall state (2000, 94): “The cul-
tural in a ‘bilingual, bicultural’ approach to educating deaf children rests in the details
of language interaction of teacher and student, not just in the enrichment of curriculum
with deaf history, deaf literature, and ASL storytelling.”

4.2.6. Educational conceptions and policy

Because sign bilingual education is not institutionalised in the majority of countries,
its provision often depends on the interest and support of parents, on the one hand,
and the expertise of specialists offering such programmes, on the other hand. Not only
do these programmes often struggle for survival, but professionals working in these
settings also face the task of developing their own curricula, teaching materials, and
assessment tools (Komesaroff 2001; Morales-López 2008; Plaza-Pust 2004). In many
cases, teachers in bilingual education programmes have little or no training in bilingual-
ism in general, or sign bilingualism in particular. In sign bilingual education, written
language is taught as an L2, but teachers are rarely taught the theoretical underpin-
nings of this type of language acquisition (Bagga-Gupta/Domfors 2003; Morales-López
2008). Contrastive teaching is assigned an important role, but there is a general lack
of knowledge about research in sign language linguistics and the impact of critical
language awareness on the developmental process, an issue that is the focus of educa-
tion of other linguistic minorities (Siebert-Ott 2001). Whatever the role assigned to the
different professionals involved in the teaching of deaf students, a (near-)native level
of proficiency in sign language should be a requirement. However, for multiple reasons,
this requirement is often not met, and in-service training is often insufficient to fill the
skill and knowledge gaps (Bagga-Gupta/Domfors 2003). In general, there is agreement
that many of the shortcomings must be addressed in the context of a coherent holistic
planning of bilingual education involving all stakeholders (administration, teachers,
parents, deaf association) with the aim of aligning the different measures that need to
be taken, such as the provision of appropriate teacher training, the development of
teaching materials specifically devised for sign bilingualism, and focus on the aspects
that distinguish sign bilingualism from other forms of bilingualism (use of two different
modalities, lack of access to the spoken modality, etc.) (Morales-López 2008). Clearly,
the absence of co-ordinated action results in ineffective use of the human and financial
resources available.
It is also apparent that the aim of guaranteeing equity of access to all children
often takes precedence over educational excellence. The objectives of the general trend
toward educating deaf children in mainstream schools ⫺ namely, choice of the least
restrictive environment, integration, and inclusion (Moores/Martin 2006) ⫺ have, since
the 1970s, been increasingly regarded as preferable to segregation (Lane/Hoffmeister/
Bahan 1996), not only in many Western countries (Monaghan 2003) but also in coun-
tries such as Japan (Nakamura 2003, 211), with the effect that many special schools
have been closed in recent years.
In the US, the trend initiated through Public Law 94⫺142 (1975), which requires
that education should take place in the least restrictive environment for all handicapped
children, has resulted in more than 75 % of deaf children being educated in the main-
stream (Marschark et al. 2005, 57), compared with 80 % of deaf children educated in
residential schools before the 1960s in that country (Monaghan 2003). The pattern is
similar in many other countries, for instance, in the UK, where only 8 % of deaf chil-
dren are currently educated in special schools (Swanwick/Gregory 2007). Moores and
Martin (2006) note, though, that this has been a long-term trend in the US, where
education in mainstream settings began to be increasingly offered after the end of
World War II, due to the increasing child population, including deaf children, and the
favouring of classes for deaf children in mainstream schools rather than the building
of additional residential schools.
These observations point to the additional issue of the economics of education,
which is often overlooked. To date, few cost-effectiveness studies have been under-
taken (but see Odom et al. 2001). Limited attention is paid to the economics of bilingual education in general (Baker 2007), but the scarcity of discussion of the cost benefits of sign bilingual education can also be taken as an indication of the ideological bias of deaf educational discourse. A few scholars have speculated about whether the move
toward mainstreaming was motivated by efforts to improve the quality of deaf educa-
tion and promote deaf children’s integration, or was related to cost saving (Ladd 2003;
Marschark et al. 2005).
The provision of educational interpreting in mainstream or special educational set-
tings is based on the assumption that through interpreting, deaf children are provided
with the same learning opportunities as hearing children (Marschark et al. 2005). How-
ever, there is limited evidence concerning the effectiveness of educational interpreting
and little is known about the impact of the setting, the children’s language skills, the
interpreters’ language skills, and the pedagogical approach on information transmis-
sion.
In their study of interactions between deaf and hearing peers outside the main-
stream classroom, Keating and Mirus (2003) observed that deaf children were more
skilful in accommodating to their hearing peers than vice versa and concluded that
mainstreaming relies on an unexamined model of cross-modal communication.
A major challenge to traditional concepts of bilingual education and the develop-
ment of appropriate language planning measures concerns the increasing number of
deaf children with CI. While this general trend and the revival of radical oralist views
of deaf education are acknowledged, the long-term impact of CIs on educational pro-
grammes for deaf students is not yet clear. Although there is little research to provide
empirical support for the appropriateness of this generalised choice, mainstreaming is
the usual type of educational setting provided for children with a CI. Studies indicate
substantial variation in the developmental progress of cochlear-implanted orally edu-
cated children. The individual differences observed suggest that some of them would
certainly profit from the use of sign language in their education (Szagun 2001).
As for children themselves, remarkably little is known about their educational pref-
erences. Interview data are available from adult deaf people revealing their views of
oralist education and special schools, but at the time of their education, bilingual ap-
proaches were not an option (Krausneker 2008; Lane/Pillard/French 2000; Angelides/Aravi 2006). Mainstreamed children in Spain, when asked whether they would prefer
interpreters or support from tutors (most of whom were competent in sign language), expressed their preference for the latter option (Morales-López 2008), highlighting the
importance of face-to-face communication in the teaching and learning of content mat-
ter. As for the increasing number of deaf children with CIs, little is known about their
views concerning the impact of a CI on their quality of life (Preisler 2007).
Finally, it should be noted that most research has been orientated towards demon-
strating the effectiveness of one educational option over another, reflecting the interde-
pendence of research, policy, and practice (Bagga-Gupta 2004; Plaza-Pust/Morales-
López 2008). Demonstrating the success of a bilingual approach may be of critical
importance in ensuring its maintenance, even when it has not been appropriately im-
plemented. A few scholars (Bagga-Gupta 2004; Moores 2007) have drawn attention to
the dangers of ideological biases in research, whose task is to analyse and reflect critically.

5. Bilinguals

Because of the diversity of factors determining the acquisition and use of sign language
and spoken language in deaf individuals, bimodal bilingualism offers a rich field of
research into the complex inter-relation of external sociolinguistic and internal psycho-
linguistic factors that shape the outcomes of language contact. Indeed, recent years
have witnessed an increased interest in the study of this type of bilingualism. The next
sections summarise the major findings obtained in the psycholinguistic and sociolin-
guistic studies conducted, in particular, concerning (i) developmental issues in the ac-
quisition of the two languages, (ii) sociolinguistic aspects determining patterns of lan-
guage use in the deaf communities, and (iii) (psycho-)linguistic characteristics of cross-
modal language contact phenomena.

5.1. Bilingual learners: acquisition of sign language as an L1 and spoken/written language as an L2

Studies of hearing children’s bilingual development commonly concern bilingual fami-
lies (Lanza 1997). The situation is markedly different regarding research into sign bilin-
gualism, in which longitudinal studies of family bilingualism are rare, although there
are exceptions, such as the studies conducted by Petitto and colleagues (2001) and
Baker and van den Bogaerde (2008). Over the past decade, such factors as scarcity of
bilingual educational settings and lack of appropriate measures of sign language knowl-
edge (Chamberlain/Mayberry 2000; Singleton/Supalla 2003) have changed and an in-
creasing number of studies have been conducted.
The specific circumstances that determine exposure to and access to spoken/written
language and sign languages in deaf children raise a number of issues concerning the
status attributed to the two languages as L1 (sign language) and L2 (spoken/written
language). The notions of mother tongue or first language(s) are commonly bound to
criteria of age (first language(s) acquired) and environment (language(s) used at
home), while full access to the language(s) learned is assumed. In the case of deaf
children, however, accessibility should be considered the defining criterion of which
language is their primary language, given that they can only fully access and naturally
acquire sign languages (Berent 2004; Grosjean 2008; Leuninger 2000; Mahshie 1997).
Age of exposure to sign language is a critical issue for the large majority of deaf
children born to non-signing hearing or deaf parents. Whether they acquire sign lan-
guage successfully through contact with signing peers or adult signers depends on such
factors as parents’ choices about language, medical advice, and early intervention, in a
context where the medical view of deafness prevails (Morales-López 2008; Yang 2008;
Plaza-Pust 2004). Even in countries where bilingual education is available, many par-
ents only learn about it later as an option for their deaf child, so that many children
only enter such programmes when they are older.
There are many reasons why deaf children are not exposed to sign language during
the sensitive period for language acquisition. It has been argued that the full accessibil-
ity of the language may compensate for delay in exposure; however, there is evidence
that late learners (5⫺10 years old) of a sign language as L1 may not become fully
fluent (Mayberry 2007). Another issue concerns the potential impact of the early use
of an artificial signed system at home and/or in pre-school on the later development
of sign language. Despite these caveats, sign language is generally referred to as the
L1 of bilingually educated deaf children.
With respect to the status of the spoken/written language, there is some consensus
that written language can be directly acquired by bilingually educated deaf children as
an L2 (Leuninger 2000; Plaza-Pust 2008; Vercaingne-Ménard/Parisot/Dubuisson 2005).
However, there is little agreement on whether deaf children can compensate for the
lack of access to spoken language by taking other pathways in the learning of written
language to successfully acquire L2 grammar. Many researchers discuss deaf children’s
written language development in the context of the impact of hearing loss, specifically
in relation to the role of phonological awareness in the development of literacy (Mus-
selman 2000; Mayer 2007). A different view is adopted by others who emphasise the
need to look at written language in its own right. Günther (2003), for example, main-
tains that although written language is related to spoken language, it is an autonomous
semiotic system. Learners must ‘crack the code’ along the lines proposed for other
acquisition situations, that is, they have to identify the relevant units of each linguistic
level, the rules that govern their combination, as well as the inter-relation of the differ-
ent linguistic levels of analysis. Both innate knowledge and linguistic environment are
assumed to contribute to this process (Plaza-Pust 2008).
While numerous studies provide detailed accounts of error types in deaf children’s
written productions, few scholars discuss their findings about deaf children’s acquisition
of the written language in relation to linguistic theory. Wilbur (2000) distinguishes
internal and external sources of errors in the writing. As the errors found resemble
the rule-based errors found in learner grammars of hearing L2 learners (i.e. omissions
or overgeneralisations), it is assumed that they reflect developmental stages. However,
the characteristic long-term persistence of these errors is reminiscent of plateau or
fossilisation effects in second language learner grammars and suggests that the devel-
opment of the written language by deaf children might be delayed or truncated as a result of restrictions in both the quantity and the quality of the input. Following this line of reasoning, it has been argued that the traditional teaching of written language structures in isolation, with a focus on formal correctness, comes at the expense of creative uses of written language which would allow deaf children to acquire subtle grammatical and pragmatic properties (Günther et al. 2004; Wilbur 2000).
Studies that specifically address similarities and differences between deaf learners
and other learners continue to be rare. In longitudinal studies, both Krausneker (2008)
and Plaza-Pust (2008) compare deaf learners’ L2 written language development with
that of other L2 learners of the same language in order to ascertain whether the under-
lying language mechanisms are the same. Krausneker (2008) directly compared hearing
and deaf children’s development of L2 German. She explains differences between the
learner groups in developmental progress as resulting from differences in the amount
of input available: while hearing children are continuously exposed to the L2, deaf
children’s input and output in this language are much more restricted. Plaza-Pust
(2008) found that the bilingually educated deaf children in her study acquired the
target German sentence structure like other learners, but with marked differences in
individual progress. She argues that variation in the learners’ productions is an indica-
tor of the dynamic learning processes that shape the organisation of language, as has
been shown to be the case in other contexts of language acquisition.
Additionally, where written language serves as the L2, the question of the potential
role of sign language (L1) in its development is fundamental to an appropriate under-
standing of how deaf children may profit from their linguistic resources in the course
of bilingual development.

5.2. Bilingual development: pooling of linguistic resources

Over recent years, several hypotheses have been put forward with respect to positive
and negative effects of cross-modal language interaction in sign bilingual development
(Plaza-Pust 2008). In research on bilingualism in two spoken languages, this is usually
expressed as a facilitating or accelerating vs. a delaying effect in the learning of target
language properties (Odlin 2003). A variety of terminology is found in the literature,
including that concerned with sign bilingualism, to refer to different types of interac-
tion between two or more languages in the course of bilingual development, such as
‘language transfer’, ‘linguistic interference’, ‘cross-linguistic influence’, ‘code-mixing’,
and ‘linguistic interdependence’. Many of these terms have negative connotations
which indicate attitudes toward bilingualism and bilinguals’ language use and reflect a
common view that the ‘ideal’ bilingual is two monolinguals in one person who should
keep the two languages separate at all times.
Studies on language contact phenomena in interactions among adult bilinguals, in-
cluding bilingual signers, and in the productions of bilingual learners have shown that
language mixing is closely tied to the organisation of language on the one hand, and
to the functional and sociolinguistic dimensions of language use on the other hand
(Grosjean 1982, 2008; Tracy/Gawlitzek-Maiwald 2000), with a general consensus that
bilingual users, including bilingual learners, exploit their linguistic resources in both
languages.

5.2.1. Cross-modal language borrowing

Following a long debate about separation or fusion of languages in early bilingual
acquisition (Tracy/Gawlitzek-Maiwald 2000; Meisel 2004), there is a consensus that
both languages develop separately from early on. This is supported by longitudinal
studies on the acquisition of diverse language pairs, including the acquisition of sign
language and spoken language in hearing children (Petitto et al. 2001). More recently,
some researchers have studied language mixing in young bilinguals and concluded
that their languages may temporarily interact in the course of bilingual development
(Genesee 2002). Particularly in cases where there is asymmetry in the development of
the two languages, learners may use a ‘relief strategy’ of temporarily borrowing lexical
items or structural properties from the more advanced language. It has also been ar-
gued that the sophisticated combination of two distinct grammars in learners’ mixed
utterances not only reveals the structures available in each language, but also that
learners know, by virtue of their innate language endowment (i.e. Universal Grammar),
that grammars are alike in fundamental ways.
In relation to bilingual acquisition of sign language and spoken/written language
by deaf children, some scholars acknowledge in general terms that L1 sign language
knowledge, drawing on Universal Grammar, might reduce the complexity of acquiring
a written language as an L2 (Wilbur 2000), but do not consider cross-linguistic influ-
ence or borrowing as outlined above.
Günther and colleagues (1999) maintain that sign language serves a critical role in
the bilingual development of deaf children. Based on a study of the writing of deaf
children attending the Hamburg bilingual programme, they claim that these children
profit from their knowledge of German Sign Language (DGS) in two respects. Firstly,
they benefit from general knowledge attained through this language (general world
knowledge and also knowledge about story grammar) in their production of written
narratives (Günther et al. 2004). Secondly, they compensate for gaps in their written
language grammar by borrowing sign language structures. Crucially, the authors show
that DGS influence was a temporary phenomenon; as the learners’ knowledge of writ-
ten German increased, the use of DGS borrowings decreased.
Plaza-Pust and Weinmeister (2008) specifically address the issue of cross-modal bor-
rowing in relation to learners’ grammatical development in both languages. Their
analysis of signed and written narratives (collected in the context of a longitudinal
investigation into bilingual acquisition of DGS and written German by deaf children
attending the bilingual programme in Berlin) shows that lexical and structural borrow-
ings occur at specific developmental phases in both languages, with structural borrow-
ings decreasing as learners progress. Once the target structural properties are estab-
lished, language mixing serves other, pragmatic functions (code-switching). The use of
language mixing both in signed and written productions was relatively infrequent. For
one participant, no language mixing was observed in either DGS or German. Individ-
ual variation shows patterns similar to those described in research on the bilingual
acquisition of two spoken languages (Genesee 2002). Finally, the data reveal a gradual
development of the grammars of both languages, with differences among learners in
the extent of development in the time covered by the study.

5.2.2. Inter-dependence of sign language and literacy skills

Academic disadvantages resulting from a mismatch between L1 and L2 skills are most
pronounced in linguistic minority members, in particular, in the case of socially stigma-
tised minorities (Saiegh-Haddad 2005). Cummins’ Interdependence Hypothesis (1991),
which sees a strong foundation in the L1 as a prerequisite for bilingual children’s
academic success, targets functional distinctions in language use and their impact on
academic achievements in acquisition situations in which the home language (L1) dif-
fers from the language (L2) used in school.
Because the theoretical justification for a bilingual approach to the education of
linguistic minority and deaf children bears important similarities (Strong/Prinz 2000,
131; Kuntze 1998), the Interdependence Hypothesis has been widely used in the field
of deaf education. As the role of spoken language in deaf children’s linguistic and academic development, including reading development, is limited, the promotion of sign language as a base or primary language, although it is not the language used at home in the majority of cases, is fundamental to deaf children’s cognitive and communicative development (Hoffmeister 2000; Niederberger 2008).
As for Cummins’ Common Underlying Proficiency Hypothesis, which assumes cog-
nitive academic proficiency to be interdependent across languages, it is important to
note that this proficiency is not a monolithic ability but rather involves a number of components,
making it necessary to carefully examine the skills that might be involved in the ‘trans-
fer process’. In research on sign bilingual development, the identification of the skills
that might belong to common underlying proficiency is further complicated by the fact
that sign language, the L1 or base language, has no written form that might be used in
literacy-related activities. Thus the notion of transfer or interaction of academic lan-
guage skills needs to be conceived of independently of print. This in turn has led to a
continuing debate about whether or not sign language can facilitate the acquisition of
L2 literacy (Chamberlain/Mayberry 2000; Mayer/Akamatsu 2003; Niederberger 2008).
The positive correlations between written language and sign language found in stud-
ies of ASL and English (Hoffmeister 2000; Strong/Prinz 2000) and other language pairs
(Dubuisson/Parisot/Vercaingne-Ménard (2008) for Quebec Sign Language (LSQ) and
French; Niederberger (2008) for French Sign Language (LSF) and French) have pro-
vided support for the assumption that good performances in both languages are indeed
linked. As for the specific sign language skills associated with specific literacy skills,
several proposals have been put forward. Given the differences between the languages
at the level of the modality of expression and organisation, some authors assume that
interaction or transfer mainly operates at the level of story grammar and other narrative
skills (Wilbur 2000). Other researchers believe that the interaction relates to more
specifically linguistic skills manifested in the comprehension and production of sign
language and written language (Chamberlain/Mayberry 2000; Hoffmeister 2000;
Strong/Prinz 2000). Higher correlations were obtained between narrative comprehen-
sion and production levels in ASL and English reading and writing levels than between
ASL morphosyntactic measures and English reading and writing. Niederberger (2008)
reported a significant correlation of global scores in LSF and French and observed that
correlations between narrative skills in both languages were higher than those relating
to morphosyntactic skills. Additionally, sign language comprehension skills were more
highly correlated with French reading and writing skills than with sign language pro-
duction skills. Given that LSF narrative skills also correlated with French morphosyn-
tactic skills, the interaction of both languages was assumed to involve more than global
narrative skills.
The study by Dubuisson, Parisot, and Vercaingne-Ménard (2008) on the use of
spatial markers in LSQ (taken as an indicator of global proficiency in LSQ) and higher
level skills in reading comprehension showed a relationship between improvement in
children’s use of spatial markers in LSQ and their ability to infer information when
reading French. With respect to global ability in the use of space in LSQ and global
reading comprehension, the authors reported a highly significant correlation in the first
year of the study. More specifically, a correlation was found between the ability to
assign loci in LSQ and the ability to infer information in reading. In a two-year follow-
up, they observed a correlation between locus assignment in LSQ and locating informa-
tion in reading, and between global LSQ scores and locating information in reading.
In summary, the results obtained from these studies show a relation between sign
language development and literacy skills. However, they do not provide any direct
information about the direction of the relationship, and some of the relations in the
data remain unaccounted for at a theoretical level, in particular where the links con-
cern grammatical properties and higher level processes.

5.2.3. Language contacts in the classroom and the promotion of metalinguistic skills

In the course of bilingual development, children learn the functional and pragmatic
dimensions of language use and develop the capacity to reflect upon and think about
language, commonly referred to as metalinguistic awareness. From a developmental
perspective, the ability to monitor speech (language choice, style), which appears quite
early in development, can be distinguished from the capacity to express and reflect on
that knowledge (Lanza 1997, 65). It is important to note that metalinguistic awareness
is not attained spontaneously but is acquired through reflection on structural and com-
municative characteristics of the target language(s) in academic settings (Ravid/Tolch-
insky 2002). Thus with respect to the education of deaf children, the question arises as
to whether and how these skills are promoted in the classroom.
One salient characteristic of communication in the sign bilingual classroom is that
it involves several languages and codes (see section 4.2). This diversity raises two fun-
damental issues about communication practices in the classroom and the children’s
language development. On the one hand, because modality differences alone cannot
serve as a clear indicator of language, scholars have remarked on the importance of
structural and pragmatic cues in providing information about differences between the
languages. In particular, distinctive didactic roles for the different languages and codes
used in the classroom seem to be fundamental for successful bilingual development.
On the other hand, Padden and Ramsey (1998) state that associations between sign
language and written language must be cultivated. Whether these associations pertain
to the link between fingerspelling and the alphabetic writing system, or to special regis-
ters and story grammar in both languages, an enhanced awareness of the commonali-
ties and differences will help learners to skilfully exploit their linguistic resources in
the mastery of academic content.
While language use in classrooms for deaf children is complex (Ramsey/Padden
1998), studies of communication practices in the classroom show that language contact
is used as a pedagogical tool: teachers (deaf and hearing) and learners creatively use
their linguistic resources in dynamic communication situations, and children learn to
reflect about language, its structure and use.
Typically, activities aimed at enhancing the associations between the languages in-
volve their use in combination with elements of other codes, as in the use of teaching
techniques commonly referred to as chaining (Padden/Ramsey 1998; Humphries/Mac-
Dougall 2000; Bagga-Gupta 2004) or sandwiching, where written, fingerspelled, and
spoken/mouthed items with the same referent follow each other. An illustration of this
technique is provided in (1) (adapted from Humphries/MacDougall 2000, 89).

(1) volcano ⫺ v-o-l-c-a-n-o ⫺ ‘volcano’ point ⫺ v-o-l-c-a-n-o
    initialized sign + fingerspelling + point to printed word + fingerspelling

Particularly in the early stages of written language acquisition, knowledge of and atten-
tion to the relationships between the different languages and codes become apparent
in the communication between the teachers and the children: children use sign lan-
guage in their enquiries about translation equivalents; once the equivalence in meaning
is agreed upon, fingerspelling is used to confirm the correct spelling of the word. At
times, children and teachers may also use spoken words or mouthings in their enquiries.
The following example describes code-switching occurring upon the request of a ‘new-
comer’ deaf student:

Roy […] wanted to spell ‘rubber’. He invoked the conventional requesting procedure,
waving at Connie [the deaf teacher] and repeatedly signing ‘rubber’. […] As soon as she
turned her gaze to him, Roy began to sign again. She asked for clarification, mouthing
‘rubber? rubber?’, then spelled it for him. He spelled it back, leaving off the final ‘r’. She
assented to the spelling, and he began to write. John, also at the table and also experienced
with signed classroom discourse, had been watching the sequence as well, and saw that
Roy had missed the last ‘r’ just before Connie realized it. John signalled Connie and in-
formed Roy of the correction. (Ramsey/Padden 1998, 18)

During text comprehension and production activities, teachers and children move be-
tween the languages. For example, teachers provide scaffolding through sign language
during reading activities, including explanations about points of contrast between spo-
ken language and sign language. Bagga-Gupta (2004) describes chaining of the two
languages in a simultaneous or synchronised way, for example, by periodically switch-
ing between the two languages or ‘visually reading’ (signing) a text.
Mugnier (2006) analyses the dynamics of bilingual communication in LSF and
French during text comprehension activities in classes taught by a hearing or a deaf
teacher. LSF and French were used by the children in both situations. However, while
the deaf teacher validated the children’s responses in either language, the hearing
teacher only confirmed the correctness of spoken responses. Teacher-student ex-
changes including metalinguistic reflection about the differences between the lan-
guages only occurred in interaction with the deaf teacher. It was occasionally observed
that in communication with the hearing teacher, the children engaged with each other
in a parallel conversation, with no participation by the teacher. Millet and Mugnier
(2004) conclude that children do not profit from their incipient bilingualism by the
simple juxtaposed presence of the languages in the classroom, but benefit where lan-
guage alternation is a component of the didactic approach.
The dynamics of bilingual communication in the classroom also has a cultural com-
ponent. As pointed out by Ramsey and Padden (1998, 7) “a great deal of information
about the cultural task of knowing both ASL and English and using each language in
juxtaposition to the other is embedded in classroom discourse, in routine ‘teacher talk’
and in discussions”.

5.3. Sociolinguistic aspects of cross-modal language contact

Language choice in bilingual interactions is a complex phenomenon. For bilinguals,
variation in situation induces different language modes. Whether bilinguals choose one
language or another as a base language is related to a number of factors such as their
fluency in the two languages, the conversational partners, the situation, the topic, and
the function of the interaction (Fontana 1999; Grosjean 1982, 2008; Romaine 1996).
For deaf bilinguals, limitations on perception and production of the spoken language
condition its choice as a base language. Thus, in interactions with other sign bilinguals,
code-switching for stylistic purposes or communicative efficiency can involve mouthing
or fingerspelling. Code-switching provides an additional communication resource when
clarification is required (Grosjean 1982). For this purpose, bilingual signers may com-
bine elements of different codes, that is, sign language, mouthing, and fingerspelling
(see chapter 35, Language Contact and Borrowing, for further discussion).
An interesting example, indicating that cross-cultural differences are reflected in
preferences for specific combinations over others, is described by Yang (2008) with
respect to contact between Chinese Sign Language (CSL) and Chinese. Like other sign
bilinguals, Chinese/CSL bilinguals combine CSL elements and mouthings. However,
they also code-switch to written Chinese by tracing the strokes of Chinese characters
in the air or on the palm of the hand, a strategy that is also common among hearing
Chinese to disambiguate homophonic words.
Studies of the use of mixed varieties, initially referred to as ‘pidgin sign language’
and later as ‘contact signing’ (also see chapter 35) have shown that the hearing status
of the conversational partner is not the sole criterion determining language choice in
situations of sign language and spoken/written language contact (Fontana 1999). Lucas
and Valli (1989) report on the use of ASL by some deaf participants in their communi-
cation with a hearing interviewer, and on the use of contact signing with a deaf inter-
viewer even where both the interviewers and the participants were fluent in ASL and
the participants were addressed by the deaf interviewer in ASL. According to Lucas
and Valli, language choice in any of these cases is determined by sociolinguistic factors,
such as the formality of an interview situation or the use of a particular language as a
marker of identity. Similar patterns of language use have been reported with respect
to other linguistic minorities (see Grosjean (1982) for a discussion of the factors that
determine language choice in bilingual settings).
There has been limited research outside the US into the sociolinguistic factors de-
termining language choice. However, studies of loan vocabulary indicate the influence
of different teaching practices in deaf education. For example, the widespread use of
fingerspelling as a teaching tool in the US is reflected in the frequency of specific types
of cross-modal language contact phenomena, such as initialisation (Lucas/Valli 1992),
which contrasts with the widespread use of mouthing in the sign languages of German-
speaking countries, reflecting the spoken language orientation in deaf education in
these countries.
LeMaster’s (2003) study of differences between men’s and women’s signing indicates
the impact of educational philosophy and segregation by gender. In Dublin, girls and
boys attended different schools, which is reflected in lexical differences (see chapter
33, Sociolinguistic Aspects of Variation and Change, for details).
From a sociolinguistic perspective, where two linguistic communities do not live in
regionally separate areas, language contact phenomena can be considered to be a natu-
ral outcome. This holds equally for the intricate relationship between the hearing and
Deaf communities. However, where languages in a given social space do not have the
same status, language mixing may not only be associated with a lack of competence at
the individual level, but it may also be perceived as a cause of language loss (Baker
2001; Grosjean 1982; Romaine 1996).
The situation is markedly different in communities where bimodal bilingualism de-
velops primarily as a result of a high incidence of deafness (Branson/Miller/Marsaja
1999; Woodward 2003; see chapter 24, Shared Sign Languages, for further discussion).
Kisch (2008) describes the complexity of language profiles she encountered in her
anthropological study of the Al-Sayyid community (Israel). Her observations of the
interactions between hearing and deaf, competent and less competent signers point to
a situation of intensive language contact, with dynamic movement between the lan-
guages. With respect to linguistic profiles, Kisch observes an asymmetry between deaf
individuals, who are usually monolingual in sign language, and hearing individuals, who
are bilingual in spoken language and sign language. This type of language contact
situation results in a ‘reverse’ pattern to that found in the language profiles of linguistic
minority members in other social contexts where the minority language community
members are bilingual, while the majority members are usually monolingual. This dem-
onstrates how sociolinguistic and educational factors affect the patterns of language
use in a given social space. Indeed, cases of such ‘village sign languages’ offer the
opportunity to study the outcomes of language contact in situations without language
planning targeting sign language. Also, in such communities, language behaviour is
neither determined by the stigmatisation of deafness nor by the concept of a Deaf
community and related notions of attitudinal deafness. As more and more deaf chil-
dren in such communities are exposed to deaf education, it will be interesting to see
how this affects their language development and use, and, eventually, also the commu-
nication patterns in the social context of the village in light of the discussion of how
the establishment of the American School for the Deaf was an important factor in the
disappearance of the Martha’s Vineyard deaf community (Lane/Pillard/French 2000).

6. Conclusion

Human beings, deaf or hearing, have an innate predisposition to acquire one or more
languages. Variation in linguistic profiles of deaf individuals, ranging from competence
in two or more languages to rudimentary skills in only one language, indicates how
innate and environmental factors conspire in the development and maintenance of a
specific type of bilingualism that is characterised by the fragile pattern of transmission
of sign languages and the unequal status of sign language and spoken/written language
in terms of their accessibility.
Today, the diversity of approaches to communication in the education of deaf stu-
dents ranges from a strictly monolingual (oralist) to a (sign) bilingual model of deaf
education. Variation in the choice of the languages of instruction and educational
placement reveals that diverse, and often conflicting, objectives need to be reconciled
with the aim of guaranteeing equity of access and educational excellence to a heteroge-
neous group of learners, with marked differences in terms of their degree of hearing
loss, prior educational experiences, linguistic profiles, and additional learning needs.
Demographic changes relating to migration and the increasing number of children with
cochlear implants add two new dimensions to the heterogeneity of the student popula-
tion that need to be addressed in the educational domain.
While bilingualism continues to be regarded as a problem by advocates of a mono-
lingual (oral only) education of deaf students, studies into the bimodal bilingual devel-
opment of deaf learners have shown that sign language does not negatively affect
spoken/written language development. Statistical studies documenting links between
skills in the two languages and psycholinguistic studies showing that learners tempora-
rily fill gaps in their weaker language by borrowing from their more advanced language
further indicate that deaf learners, like their hearing bilingual peers, creatively pool
their linguistic resources. Later in their lives, bilingual deaf individuals have been found
to benefit from their bilingualism as they constantly move between the deaf and the
hearing worlds, code-switching between the languages for stylistic purposes or commu-
nicative efficiency.

7. Literature
Aarons, Debra/Reynolds, Louise
2003 South African Sign Language: Changing Policy and Practice. In: Monaghan, Leila/
Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be
Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet Univer-
sity Press, 194⫺210.
Andrews, Jean F./Covell, John A.
2006 Preparing Future Teachers and Doctoral Level Leaders in Deaf Education: Meeting
the Challenge. In: American Annals of the Deaf 151(5), 464⫺475.
Ann, Jean
2001 Bilingualism and Language Contact. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign
Languages. Cambridge: Cambridge University Press, 33⫺60.
Anthony, David
1971 Seeing Essential English. Anaheim, CA: Anaheim Union High School District.
Ardito, Barbara/Caselli, M. Cristina/Vecchietti, Angela/Volterra, Virginia
2008 Deaf and Hearing Children: Reading Together in Preschool. In: Plaza-Pust, Carolina/
Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development, Interac-
tion, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins,
137⫺164.
Bagga-Gupta, Sangeeta
2004 Literacies and Deaf Education: A Theoretical Analysis of the International and Swed-
ish Literature. In: Forskning I Fokus 23. The Swedish National Agency for School
Improvement.
Bagga-Gupta, Sangeeta/Domfors, Lars-Ake
2003 Pedagogical Issues in Swedish Deaf Education. In: Monaghan, Leila/Schmaling, Con-
stanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: International
Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 67⫺88.
Baker, Anne/Bogaerde, Beppie van den
2008 Code-mixing in Signs and Words in Input to and Output from Children. In: Plaza-Pust,
Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development,
Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benja-
mins, 1⫺27.
Baker, Colin
2001 Foundations of Bilingual Education and Bilingualism. Clevedon: Multilingual Matters.
Baker, Colin
2007 Becoming Bilingual through Bilingual Education. In: Auer, Peter/Wei, Li (eds.), Hand-
book of Multilingualism and Multilingual Communication. Berlin: Mouton de Gruyter,
131⫺152.
Bavelier, Daphne/Newport, Elissa L./Supalla, Ted
2003 Signed or Spoken, Children Need Natural Languages. In: Cerebrum 5, 15⫺32.
Berent, Gerald P.
2004 Sign Language ⫺ Spoken Language Bilingualism: Code Mixing and Mode Mixing by
ASL-English Bilinguals. In: Bhatia, Tej K./Ritchie, William C. (eds.), The Handbook of
Bilingualism. Oxford: Blackwell, 312⫺335.
Berenz, Norine
2003 Surdos Venceremos: The Rise of the Brazilian Deaf Community. In: Monaghan, Leila/
Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be
Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet Univer-
sity Press, 173⫺192.
Bogaerde, Beppie van den/Baker, Anne E.
2002 Are Young Deaf Children Bilingual? In: Morgan, Gary/Woll, Bencie (eds.), Directions
in Sign Language Acquisition. Amsterdam: Benjamins, 183⫺206.
Branson, Jan/Miller, Don/Marsaja, I Gede
1999 Sign Language as a Natural Part of the Mosaic: The Impact of Deaf People on Dis-
course Forms in North Bali, Indonesia. In: Winston, Elizabeth (ed.), Storytelling and
Conversation: Discourse in Deaf Communities. Washington, DC: Gallaudet University
Press, 109⫺148.
Chamberlain, Charlene/Mayberry, Rachel I.
2000 Theorizing About the Relation Between American Sign Language and Reading. In:
Chamberlain, Charlene/Morford, Jill P./Mayberry, Rachel I. (eds.), Language Acquisi-
tion by Eye. Mahwah, NJ: Lawrence Erlbaum, 221⫺260.
Cokely, Dennis
2005 Shifting Positionality: A Critical Examination of the Turning Point in the Relationship
of Interpreters and the Deaf Community. In: Marschark, Marc/Peterson, Rico/Winston,
Elizabeth (eds.), Sign Language Interpreting and Interpreter Education: Directions for
Research and Practice. Oxford: Oxford University Press, 3⫺28.
Courcy, Michèle de
2005 Policy Challenges for Bilingual and Immersion Education in Australia: Literacy and
Language Choices for Users of Aboriginal Languages, Auslan and Italian. In: The Inter-
national Journal of Bilingual Education and Bilingualism 8(2⫺3), 178⫺187.
Cummins, Jim
1991 Interdependence of First- and Second-Language Proficiency in Bilingual Children. In:
Bialystok, Ellen (ed.), Language Processing in Bilingual Children. Cambridge: Cam-
bridge University Press, 70⫺89.
Dubuisson, Colette/Parisot, Anne-Marie/Vercaingne-Ménard, Astrid
2008 Bilingualism and Deafness: Correlations Between Deaf Students’ Ability to Use Space
in Quebec Sign Language and their Reading Comprehension in French. In: Plaza-Pust,
Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development,
Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benja-
mins, 51⫺71.
Emmorey, Karen
2002 Language, Cognition, and the Brain. Mahwah, NJ: Lawrence Erlbaum.
Erting, Carol J./Kuntze, Marlon
2008 Language Socialization in the Deaf Communities. In: Duff, Patricia A./Hornberger,
Nancy H. (eds.), Encyclopedia of Language and Education (2nd Edition), Volume 8:
Language Socialization. Berlin: Springer, 287⫺300.
Fischer, Susan D.
1998 Critical Periods for Language Acquisition: Consequences for Deaf Education. In: Wei-
sel, Amatzia (ed.), Issues Unresolved: New Perspectives on Language and Deaf Educa-
tion. Washington, DC: Gallaudet University Press, 9⫺26.
Fontana, Sabina
1999 Italian Sign Language and Spoken Italian in Contact: An Analysis of Interactions Be-
tween Deaf Parents and Hearing Children. In: Winston, Elizabeth (ed.), Storytelling
and Conversation: Discourse in Deaf Communities. Washington, DC: Gallaudet Univer-
sity Press, 149⫺161.
Gascón-Ricao, Antonio/Storch de Gracia y Asensio, José Gabriel
2004 Historia de la Educación de los Sordos en España y su Influencia en Europa y América.
Madrid: Editorial Universitaria Ramón Areces.
Genesee, Fred
2002 Portrait of the Bilingual Child. In: Cook, Vivian (ed.), Portraits of the L2 User. Cleve-
don: Multilingual Matters, 167⫺196.
Gras, Victòria
2008 Can Signed Language Be Planned? Implications for Interpretation in Spain. In: Plaza-
Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Develop-
ment, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam:
Benjamins, 165⫺193.
Grosjean, François
1982 Life with Two Languages. Cambridge, MA: Harvard University Press.
Grosjean, François
2008 Studying Bilinguals. Oxford: Oxford University Press.
Günther, Klaus-B.
2003 Entwicklung des Wortschreibens bei Gehörlosen und Schwerhörigen Kindern. In: Fo-
rum 11, 35⫺70.
Günther, Klaus-B./Staab, Angela/Thiel-Holtz, Verena/Tollgreef, Susanne/Wudtke, Hubert (eds.)
1999 Bilingualer Unterricht mit Gehörlosen Grundschülern: Zwischenbericht zum Hamburger
Bilingualen Schulversuch. Hamburg: Hörgeschädigte Kinder.
Günther, Klaus-B./Schäfke, Ilka/Koppitz, Katharina/Matthaei, Michaela
2004 Vergleichende Untersuchungen zur Entwicklung der Textproduktionskompetenz und
Erzählkompetenz. In: Günther, Klaus-B./Schäfke, Ilka (eds.), Bilinguale Erziehung als
Förderkonzept für Gehörlose SchülerInnen: Abschlussbericht zum Hamburger Bilin-
gualen Schulversuch. Hamburg: Signum, 189⫺267.
Gustason, Gerrilee/Zawolkow, Esther (eds.)
1980 Using Signing Exact English in Total Communication. Los Alamitos, CA: Modern
Signs Press.
Hoffmeister, Robert J.
2000 A Piece of the Puzzle: ASL and Reading Comprehension in Deaf Children. In: Cham-
berlain, Charlene/Morford, Jill P./Mayberry, Rachel I. (eds.), Language Acquisition by
Eye. Mahwah, NJ: Lawrence Erlbaum, 143⫺163.
Humphries, Tom/MacDougall, Francine
2000 “Chaining” and Other Links: Making Connections Between American Sign Language
and English in Two Types of School. In: Visual Anthropology Review 15(2), 84⫺94.
Johnson, Robert E./Liddell, Scott K./Erting, Carol J.
1989 Unlocking the Curriculum: Principles for Achieving Access in Deaf Education. Washing-
ton, DC: Gallaudet University Press.
Johnston, Trevor
2003 Language Standardization and Signed Language Dictionaries. In: Sign Language Stud-
ies 3(4), 431⫺468.
Keating, Elizabeth/Mirus, Gene
2003 Examining Interactions Across Language Modalities: Deaf Children and Hearing Peers
at School. In: Anthropology & Education Quarterly 34(2), 115⫺135.
Kisch, Shifra
2008 ‘Deaf Discourse’: The Social Construction of Deafness in a Bedouin Community. In:
Medical Anthropology 27(3), 283⫺313.
Kiyaga, Nassozi B./Moores, Donald F.
2003 Deafness in Sub-Saharan Africa. In: American Annals of the Deaf 148(1), 18⫺24.
Knight, Pamela/Swanwick, Ruth
2002 Working with Deaf Pupils: Sign Bilingual Policy into Practice. London: David Fulton.
Knoors, Harry
2006 Educational Responses to Varying Objectives of Parents of Deaf Children: A Dutch
Perspective. In: Journal of Deaf Studies and Deaf Education 12, 243⫺253.
Komesaroff, Linda
2001 Adopting Bilingual Education: An Australian School Community’s Journey. In: Journal
of Deaf Studies and Deaf Education 6(4), 299⫺314.
Krausneker, Verena
2008 Language Use and Awareness of Deaf and Hearing Children in a Bilingual Setting.
In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language
Development, Interaction, and Maintenance in Sign Language Contact Situations. Am-
sterdam: Benjamins, 195⫺221.
Kuntze, Marlon
1998 Codeswitching in ASL and Written English Language Contact. In: Emmorey, Karen/
Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula
Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 287⫺302.
Ladd, Paddy
2003 Understanding Deaf Culture: In Search of Deafhood. Clevedon: Multilingual Matters.
Lane, Harlan/Hoffmeister, Robert/Bahan, Ben
1996 A Journey Into the Deaf-World. San Diego, CA: DawnSignPress.
Lane, Harlan/Pillard, Richard/French, Mary
2000 Origins of the American Deaf-World: Assimilating and Differentiating Societies and
Their Relation to Genetic Patterning. In: Sign Language Studies 1(1), 17⫺44.
Lang, Harry G.
2003 Perspectives on the History of Deaf Education. In: Marschark, Marc/Spencer, Patricia
(eds.), Oxford Handbook of Deaf Studies, Language, and Education. Oxford: Oxford
University Press, 9⫺20.
Lanza, Elizabeth
1997 Language Mixing in Infant Bilingualism: A Sociolinguistic Perspective. Oxford: Clar-
endon.
LaSasso, Carol/Lamar Crain, Kelly/Leybaert, Jacqueline
2010 Cued Speech and Cued Language for Deaf and Hard of Hearing Children. San Diego,
CA: Plural Publishing.
LeMaster, Barbara
2003 School Language and Shifts in Irish Deaf Identity. In: Monaghan, Leila/Schmaling,
Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: Interna-
tional Variation in Deaf Communities. Washington, DC: Gallaudet University Press,
153⫺172.
Leuninger, Helen
2000 Mit den Augen Lernen: Gebärdenspracherwerb. In: Grimm, Hannelore (ed.), Enzyklo-
pädie der Psychologie. Bd. IV: Sprachentwicklung. Göttingen: Hogrefe, 229⫺270.
Lucas, Ceil/Valli, Clayton
1989 Language Contact in the American Deaf Community. In: Lucas, Ceil (ed.), The Socio-
linguistics of the Deaf Community. San Diego, CA: Academic Press, 11⫺40.
Lucas, Ceil/Valli, Clayton
1992 Language Contact in the American Deaf Community. New York, NY: Academic Press.
Mahshie, Shawn Neal
1997 A First Language: Whose Choice Is It? (A Sharing Ideas Series Paper, Gallaudet Uni-
versity, Laurent Clerc National Deaf Education Center). [Retrieved 19 February 2003
from: http://clerccenter.gallaudet.edu/Products/Sharing-Ideas/index.html]
Marschark, Marc/Sapere, Patricia/Convertino, Carol/Seewagen, Rosemarie
2005 Educational Interpreting: Access and Outcomes. In: Marschark, Marc/Peterson, Rico/
Winston, Elizabeth (eds.), Sign Language Interpreting and Interpreter Education: Direc-
tions for Research and Practice. Oxford: Oxford University Press, 57⫺83.
Mayberry, Rachel
2007 When Timing Is Everything: Age of First-language Acquisition Effects on Second-lan-
guage Learning. In: Applied Psycholinguistics 28, 537⫺549.
Mayer, Connie
2007 What Really Matters in the Early Literacy Development of Deaf Children. In: Journal
of Deaf Studies and Deaf Education 12(4), 411⫺431.
Mayer, Connie/Akamatsu, Tane
2003 Bilingualism and Literacy. In: Marschark, Marc/Spencer, Patricia (eds.), Oxford Hand-
book of Deaf Studies, Language, and Education. Oxford: Oxford University Press,
136⫺147.
Meisel, Jürgen M.
2004 The Bilingual Child. In: Bhatia, Tej K./Ritchie, William C. (eds.), The Handbook of
Bilingualism. Oxford: Blackwell, 91⫺113.
Millet, Agnès/Mugnier, Saskia
2004 Français et Langue des Signes Française (LSF): Quelles Interactions au Service des
Compétences Langagières? Etude de Cas d’une Classe d’Enfants Sourds de CE2. In:
Repères 29, 1⫺20.
Mohanty, Ajit K.
2006 Multilingualism of the Unequals and Predicaments of Education in India: Mother
Tongue or Other Tongue? In: García, Ofelia/Skutnabb-Kangas, Tove/Torres-Guzmán,
María (eds.), Imagining Multilingual Schools: Languages in Education and Globaliza-
tion. Clevedon: Multilingual Matters, 262⫺283.
Monaghan, Leila
2003 A World’s Eye View: Deaf Cultures in Global Perspective. In: Monaghan, Leila/Schma-
ling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: Inter-
national Variation in Deaf Communities. Washington, DC: Gallaudet University Press,
1⫺24.
Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.)
2003 Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC:
Gallaudet University Press.
Moores, Donald F.
2007 Educational Practices and Assessment. In: American Annals of the Deaf 151(5), 461⫺
463.
Moores, Donald F./Martin, David S.
2006 Overview: Curriculum and Instruction in General Education and in Education of Deaf
Learners. In: Moores, Donald F./Martin, David S. (eds.), Deaf Learners: Developments
in Curriculum and Instruction. Washington, DC: Gallaudet University Press, 3⫺13.
Morales-López, Esperanza
2008 Sign Bilingualism in Spanish Deaf Education. In: Plaza-Pust, Carolina/Morales-López,
Esperanza (eds.), Sign Bilingualism: Language Development, Interaction, and Mainte-
nance in Sign Language Contact Situations. Amsterdam: Benjamins, 223⫺276.
Mugnier, Saskia
2006 Le Bilinguisme des Enfants Sourds: de Quelques Freins aux Possibles Moteurs. In:
GLOTTOPOL Revue de Sociolinguistique en Ligne. [Retrieved 8 March 2006 from:
http://www.univ-rouen.fr/dyalang/glottopol]
Musselman, Carol
2000 How Do Children Who Can’t Hear Learn to Read an Alphabetic Script? A Review of
the Literature on Reading and Deafness. In: Journal of Deaf Studies and Deaf Educa-
tion 5(1), 9⫺31.
Nakamura, Karen
2003 U-turns, Deaf Shock, and the Hard-of-hearing: Japanese Deaf Identities at the Border-
lands. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham
(eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washing-
ton, DC: Gallaudet University Press, 211⫺229.
Odlin, Terence
2003 Cross-linguistic Influence. In: Doughty, Catherine J./Long, Michael H. (eds.), The Hand-
book of Second Language Acquisition. Oxford: Blackwell, 436⫺486.
Odom, Samuel L./Hanson, Marci J./Lieber, Joan/Marquart, Jules/Sandall, Susan/Wolery, Ruth/
Horn, Eva/Schwartz, Ilene/Beckman, Paula/Hikido, Christine/Chambers, Jay
2001 The Costs of Pre-School Inclusion. In: Topics in Early Childhood Special Education 21,
46⫺55.
Padden, Carol
1998 From the Cultural to the Bicultural: The Modern Deaf Community. In: Parasnis, Ila
(ed.), Cultural and Language Diversity: Reflections on the Deaf Experience. Cambridge:
Cambridge University Press, 79⫺98.
Padden, Carol/Humphries, Tom
2005 Inside Deaf Culture. Cambridge, MA: Harvard University Press.
Padden, Carol/Ramsey, Claire
1998 Reading Ability in Signing Deaf Children. In: Topics in Language Disorders 18, 30⫺46.
Angelides, Panayiotis/Aravi, Christiana
2006 A Comparative Perspective of Deaf and Hard-of-hearing Individuals as Students at
Mainstream and Special Schools. In: American Annals of the Deaf 151(5), 476⫺487.
Petitto, Laura Ann/Katerelos, Marina/Levy, Bronna G./Gauna, Kristine/Tetreault, Karina/Ferraro, Vittoria
2001 Bilingual Signed and Spoken Language Acquisition from Birth: Implications for the
Mechanisms Underlying Early Bilingual Language Acquisition. In: Journal of Child
Language 28, 453⫺496.
Plaza-Pust, Carolina
2004 The Path Toward Bilingualism: Problems and Perspectives with Regard to the Inclusion
of Sign Language in Deaf Education. In: Van Herreweghe, Mieke/Vermeerbergen, Myr-
iam (eds.), To the Lexicon and Beyond: Sociolinguistics in European Deaf Communities.
Washington, DC: Gallaudet University Press, 141⫺170.
Plaza-Pust, Carolina
2008 Why Variation Matters: On Language Contact in the Development of L2 Written Ger-
man. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Lan-
guage Development, Interaction, and Maintenance in Sign Language Contact Situations.
Amsterdam: Benjamins, 73⫺135.
Plaza-Pust, Carolina/Morales-López, Esperanza (eds.)
2008 Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Lan-
guage Contact Situations. Amsterdam: Benjamins.
Plaza-Pust, Carolina/Weinmeister, Knut
2008 Bilingual Acquisition of German Sign Language and Written Language: Developmen-
tal Asynchronies and Language Contact. In: Quadros, Ronice M. de (ed.), Sign Lan-
guages: Spinning and Unraveling the Past, Present, and Future. Forty-five Papers and
Three Posters from the 9th Theoretical Issues in Sign Language Research Conference,
Florianopolis, Brazil, December 2006. Petrópolis (Brazil): Editora Arara Azul, 497⫺
529. [Available from: www.editora-arara-azul.com.br/EstudosSurdos.php]
Preisler, Gunilla
2007 The Psychosocial Development of Deaf Children with Cochlear Implants. In: Komesa-
roff, Linda (ed.), Surgical Consent: Bioethics and Cochlear Implantation. Washington,
DC: Gallaudet University Press, 120⫺136.
Ramsey, Claire/Padden, Carol
1998 Natives and Newcomers: Gaining Access to Literacy in a Classroom for Deaf Children.
In: Anthropology & Education Quarterly 29(1), 5⫺24.
Ravid, Dorit/Tolchinsky, Liliana
2002 Developing Linguistic Literacy: A Comprehensive Model. In: Journal of Child Language
29, 417⫺447.
Reagan, Timothy
2001 Language Planning and Policy. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Lan-
guages. Cambridge: Cambridge University Press, 145⫺180.
Romaine, Suzanne
1996 Bilingualism. In: Ritchie, William C./Bhatia, Tej K. (eds.), Handbook of Second Lan-
guage Acquisition. San Diego, CA: Academic Press, 571⫺601.
Saiegh-Haddad, Elinor
2005 Correlates of Reading Fluency in Arabic: Diglossic and Orthographic Factors. In: Read-
ing and Writing 18(6), 559⫺582.
Senghas, Richard J.
2003 New Ways to Be Deaf in Nicaragua: Changes in Language, Personhood, and Commu-
nity. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham
(eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washing-
ton, DC: Gallaudet University Press, 260⫺282.
Singleton, Jenny L./Supalla, Samuel J.
2003 Assessing Children’s Proficiency in Natural Signed Languages. In: Marschark, Marc/
Spencer, Patricia (eds.), Oxford Handbook of Deaf Studies, Language, and Education.
Oxford: Oxford University Press, 289⫺302.
Svartholm, Kristina
2007 Cochlear Implanted Children in Sweden’s Bilingual Schools. In: Komesaroff, Linda
(ed.), Surgical Consent: Bioethics and Cochlear Implantation. Washington, DC: Gallau-
det University Press, 137⫺150.
Swanwick, Ruth/Gregory, Susan
2007 Sign Bilingual Education: Policy and Practice. Coleford: Douglas McLean.
Szagun, Gisela
2001 Language Acquisition in Young German-speaking Children with Cochlear Implants:
Individual Differences and Implications for Conceptions of a ‘Sensitive Phase’. In: Au-
diology and Neurotology 6, 288⫺297.
Tellings, Agnes
1995 The Two Hundred Years’ War in Deaf Education: A Reconstruction of the Methods
Controversy. PhD Dissertation, University of Nijmegen.
Tracy, Rosemarie/Gawlitzek-Maiwald, Ira
2000 Bilingualismus in der frühen Kindheit. In: Grimm, Hannelore (ed.), Enzyklopädie der
Psychologie. Bd. IV: Sprachentwicklung. Göttingen: Hogrefe, 495⫺514.
Vercaingne-Ménard, Astrid/Parisot, Anne-Marie/Dubuisson, Colette
2005 L’approche Bilingue à l’École Gadbois. Six Années d’Expérimentation. Bilan et Recom-
mandations. Rapport Déposé au Ministère de l’Éducation du Québec. Université du
Québec à Montréal.
Wilbur, Ronnie B.
2000 The Use of ASL to Support the Development of English and Literacy. In: Journal of
Deaf Studies and Deaf Education 5(1), 81⫺103.
Woll, Bencie
2003 Modality, Universality and the Similarities Among Sign Languages: A Historical Per-
spective. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-
linguistic Perspectives in Sign Language Research. Selected Papers from TISLR 2000.
Hamburg: Signum, 17⫺27.
Woll, Bencie/Ladd, Paddy
2003 Deaf Communities. In: Marschark, Marc/Spencer, Patricia E. (eds.), Oxford Handbook
of Deaf Studies, Language, and Education. Oxford: Oxford University Press, 151⫺163.
Woodward, James
2003 Sign Languages and Deaf Identities in Thailand and Viet Nam. In: Monaghan, Leila/
Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be
Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet Univer-
sity Press, 283⫺301.
Yang, Jun Hui
2008 Sign Language and Oral/Written Language in Deaf Education in China. In: Plaza-Pust,
Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development,
Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benja-
mins, 297⫺331.

Carolina Plaza-Pust, Frankfurt am Main (Germany)

40. Interpreting
1. Introduction
2. Signing communities and language brokering
3. A history of sign language interpreting
4. Research into sign language interpreting
5. International Sign interpreting
6. Conclusions
7. Literature

Abstract
This chapter explores the emerging evidence of the history of interpreting and sign lan-
guage interpreting across the world. Topics to be addressed include signing communities
known to have existed in the last 400 years and the roles adopted by bilingual members
of those communities. The emergence of the profession of sign language interpreters
(Deaf and non-Deaf, deaf and hearing) around the world will be discussed, with a more
detailed analysis of the evolution of the profession within the UK. The chapter then
addresses interpreter bilingualism and the growth of evidence-based research into sign
language interpreting. The chapter concludes with a discussion of interpreting into Inter-
national Sign.

1. Introduction
This chapter gives an overview of the history and evolution of the sign language inter-
preting profession in parallel with the evolution of sign languages within Deaf commu-
nities. Trends in interpreting research will be reviewed and ‘International Sign’ inter-
preting will be discussed.
Often when two communities are in contact, members of each community make an
effort to learn the other community’s language, both for direct communication and
to help other parties communicate. The relationship between the communities, their
economic value, and their status influence these interactions. This includes interpreter-
mediated interaction, when a bilingual individual facilitates communication between
two parties.
Historically, interpreters and translators have been used to facilitate communication
and trade between groups who do not speak or write the same language. They have
also been used to oppress, manipulate, and control minority cultures and languages.
The oldest recorded use of an interpreter (although often called a translator) was in
2500 BC in ancient Egypt under King Neferirka-Re. Here the interpreters were used
in trade and to ensure that the ‘barbarians’, that is, those who did not speak Egyptian,
obeyed the king (Hermann 1956). Similar power dynamics can be seen within societies
today for both spoken language (Bassnett/Trivedi 1999) and sign language (Ladd 2003)
interpreting; those who speak (or sign) a world language, such as English or American
Sign Language (ASL), or the dominant language of a country, can and do exercise
power through interpreters whether consciously or not.
Throughout history and across the world, where sufficient numbers of Deaf people
form a community, sign languages have come into existence. Hearing and deaf mem-
bers of those societies who are able to sign have been called upon to act as translators
and interpreters for Deaf people in order to allow them to interact with the non-signing
mainstream and those who come from communities using a different sign language (for
an example, see Carty/Macready/Sayers (2009, 308⫺313)). Whilst the records for sign
language interpreters do not extend as far back as those for spoken languages, there
is documentary evidence recording the development of the sign language interpreting
profession and the involvement of culturally Deaf people (i.e. members of Deaf com-
munities, whether deaf or hearing, who use sign language) and non-Deaf people (i.e.
those people who are not members of Deaf communities, whether deaf or hearing),
thus providing access to information and civil society for Deaf communities.

2. Signing communities and language brokering

There is evidence that in some communities, both past and present, with a relatively
high incidence of audiological deafness (from genetic or other causes), many of the
non-Deaf population know some signs, even if they do not use a full sign language.
Examples of such communities include Martha’s Vineyard in the US from the 17th to
the early 20th centuries (Groce 1985) and, in the 21st century, Desa Kolok in Bali
(Marsaja 2008), Adamorobe in Ghana (Nyst 2007), Mardin in Turkey (Dikyuva 2008),
and a Yucatec Maya village in Mexico (Johnson 1991; Fox Tree 2009) (for an overview,
see Ragir (2002); also see chapter 24, Shared Sign Languages, for further discussion).
The Ottoman Court (1500⫺1700) provides an example of an institutionalized con-
text for the creation of an (uncommonly) high status signing community. According to
Miles (2000), a significant number of mutes (sic) were brought together at the court
and used signs and ‘head actions’ (presumably a manifestation of prosody, see Sandler
(1999)). This community differed from other signing communities in that it had institu-
tional status, created by the ruling Sultan, and hence had associated high status, to the
extent that deaf-mutes were sought throughout the Ottoman Empire to join the court
(Miles 2004). In this context, those fluent in the sign language of the court were en-
gaged as ‘translators’. These often functioned in high status contexts, for example with
“[t]he Dutch ambassador Cornelis Haga, who reached Constantinople around 1612,
[who] went so far as to invite the court mutes to a banquet and, with a sign translator’s
help, was impressed by their eloquence on many topics” (Deusing 1660; transl. Sib-
scota 1670, 42f; cited in Miles 2000, 123). Those deaf-mutes had some form of access
to education and had specific roles to fulfil within the court.
Latterly, deaf education has been a similar institutional driving force for sign lan-
guage transmission (see chapter 39). For the purpose of education, deaf children are
brought together, often in residential schools. These institutions form the beginnings
of large Deaf communities, regional and national sign languages (Quinn 2010), as well
as language brokering, translation, and interpreting provided within these institutions
by their members (Adam/Carty/Stone 2011; Stone/Woll 2008).
The non-institutionalized examples of sign languages have been called, in more
recent years, ‘rural’ or ‘village’ sign languages, with their counterparts of institutional
origin being referred to as ‘urban’ sign languages (Jepson 1991). Village sign language
communities have rarely had their languages documented (for some of those that have,
see the references provided above); also, the interaction between deaf people and
hearing people who do not sign is not well documented (cf. Miles 2004), nor is the
extent to which interpreting occurs within the community, and between the community
and outsiders. Village sign language contexts would provide an interesting insight into
the power dynamics and language brokering (Morales/Hanson 2005) that occurs in
situations where a spoken language and a sign language have more or less equal status
(within the community) and differing status (outside of the community).
Groce (1985) describes her interviewees from Martha’s Vineyard as being unable
to remember whether certain members of the community were deaf or hearing, and
yet, if conversation was taking place in spoken English and a deaf person arrived,
someone would interpret (see also Woll/Ladd (2010, 166 f.) for a useful description of
several village sign communities). There clearly was an expectation for hearing mem-
bers of the society to interpret, although it is not clear whether a small set of individuals
were relied upon to interpret, and whether community members expressed any prefer-
ence for specific individuals to interpret. Often preferences expressed by community
members make manifest the community’s expectation of interpreter and translation
behaviour, which can be at odds with mainstream expectations (see Stone (2009) for
more extensive treatment of a Deaf translation norm).
Urban sign languages tend to originate in institutions, often educational or religious,
where Deaf people are brought together, many of whom will enter such institutions
knowing only a home sign system (Goldin-Meadow 2003; see chapter 26 for discus-
sion). More rarely, those with parents who are sign language users may already have
acquired a sign language. Within these contexts, the workers in the institutions (mostly
clerics and teachers) have some degree of fluency in the sign language of that commu-
nity. Language brokering often occurs between the workers and the deaf people. Fre-
quently, interpreting is undertaken initially by members of the Deaf community who
can hear (Corfmat 1990) and then by members of the mainstream community who
learned to sign (Simpson 2007). This route to professionalization (that is, the training
of language brokers and then of naïve bilinguals) is a common one for many languages,
both spoken and signed (Mikkelson 1996). Initially, bilingual friends and family inter-
pret, then people within wider community networks including qualified professionals
(religious workers, teachers, welfare professionals). Subsequently, training is formally
established and the role of a professional interpreter is separated from other roles
(Simpson 2007).

3. A history of sign language interpreting

As mentioned above, the earliest record of a ‘sign translator’ is from 1612, in an ac-
count written in 1660, and describes an interpreter working at an international political
level (for a Dutch visitor to the Ottoman court). It is not known who this translator
was, but it is likely that he was a hearing member of the court who had become fluent
in the sign language of the deaf-mutes brought to the Ottoman court (Miles 2000).
The first record of a Deaf person undertaking institutional language brokering involves
Matthew Pratt, husband to Sarah Pratt, both of whom were deaf and who used sign
language as their principal means of communication (Carty/Macready/Sayers 2009).
Sarah Pratt (1640⫺1729) underwent an interview to be accepted as a member of the
Christian fellowship in the Puritan church of Weymouth, Massachusetts, in colonial
New England. During this interview, both of Sarah’s hearing sisters interpreted for her,
and Matthew Pratt wrote a transcript from sign language to written English. It thus
appears that one of the earliest records documents Deaf and hearing interpreters/
translators working alongside each other.
Samuel Pepys’ diary entry for 9 November 1666 describes his colleague, Sir George
Downing, acting as an interpreter; Pepys asks Downing to tell a deaf boy,

that I was afeard that my coach would be gone, and that he should go down and steal one
of the seats out of the coach and keep it, and that would make the coachman to stay. He
did this, so that the dumb boy did go down, and, like a cunning rogue, went into the coach,
pretending to sleep; and, by and by, fell to his work, but finds the seats nailed to the coach.
So he did all he could, but could not do it; however, stayed there, and stayed the coach till
the coachman’s patience was quite spent, and beat the dumb boy by force, and so went
away. So the dumb boy come up and told him all the story, which they below did see all
that passed, and knew it to be true. (Pepys 1666, entry for 9 November 1666)

Here the deaf servant’s master acts as an interpreter. This type of summarising inter-
preting is also reported in other records when non-signers wish to understand what a
deaf signer is saying.
The oldest mention of sign language interpreter provision in court appears in Lon-
don’s Old Bailey Criminal Court Records for 1771 (Hitchcock/Shoemaker 2008). The
transcripts of the proceedings for this year mention that a person, whose name is
not given, “with whom he [the defendant] had formerly lived as a servant was sworn
interpreter”. The transcript goes on to state that this interpreter “explained to him the
nature of his indictment by signs”. This is the first documented example of a person
serving in the capacity of sign language interpreter in Britain, although there is little
evidence to suggest that British Sign Language (BSL) was used rather than a home
sign system (Stone/Woll 2008). Deaf schools were only just being established at that
time (Lee 2004) and it is not known if there were communities of sign language users
of which the defendant could have been a part.
The first mention of a Deaf person functioning as a court interpreter occurs not
long after, in 1817, in Scotland. This Deaf assistant worked alongside the headmaster
of the Edinburgh school for the Deaf, Mister Kinniburgh (Hay 2008). The deaf defen-
dant, Jean Campbell, an unschooled deaf woman in Glasgow, was charged with throw-
ing her infant off a bridge. As Glasgow had no deaf school, Kinniburgh, principal of
the Edinburgh school, was called to interpret. He communicated by “making a figure
with his handkerchief across his left arm in imitation of a child lying there, and having
afterwards made a sign to her as if throwing the child over the bar [...] she made a sign
and the witness said for her ‘not guilty, my lord’” (Caledonian Mercury 1817). The
unnamed Deaf person working as an interpreter assisted the communication by ensur-
ing that the deaf woman understood the BSL used by Kinniburgh and that Kinniburgh
understood her.
This role is still undertaken by Deaf people: deaf people isolated from the commu-
nity, with limited schooling, or with late exposure to a sign language (some of whom
are described as semi-lingual (Skutnabb-Kangas 1981)) may sign in a highly idiosyn-
cratic way, using visually motivated gestures. In these instances, Deaf interpreters can
facilitate communication (Bahan 1989, 2008; Boudreault 2005). Despite their service
over many years, in many countries Deaf interpreters have not yet undergone profes-
sionalization and still work without training or qualification. This situation has begun
to change in recent years as a result of better educational opportunities with formal
qualifications becoming available to Deaf interpreters, or parallel qualifications being
developed. Hearing interpreters underwent professionalization well before the profes-
sional recognition of their Deaf peers.
Few professional interpreters are from the ‘core’ of the Deaf community, that is,
deaf people from Deaf families. It is often regarded as not feasible for a Deaf person
to work as a translator and/or interpreter (T/I). In most situations, a T/I is required to
interpret from a spoken language into the national sign language and vice versa; as
deaf people are not able to hear the spoken language, they are usually not identified
by the mainstream as interpreters. This contrasts with most minority language T/Is,
who come from those communities rather than being outsiders (Alexander/Edwards/
Temple 2004). There are now, however, possibilities for Deaf interpreters and translat-
ors to be trained and accredited in Australia, Canada, France, South Africa, the UK,
and the US, with an increasing role for Deaf interpreters at national, transnational
(e.g. the European Forum for Sign Language Interpreters ⫺ EFSLI), and international
levels (e.g. the World Association of Sign Language Interpreters ⫺ WASLI) confer-
ences.

3.1. Ghost writers and Deaf language brokering

In addition to the limited recognition of Deaf people as interpreters, until recently there has
been little exploration of the role of bilingual deaf people as interpreters and translat-
ors both inside the community and between the community and the mainstream. As
pointed out previously, the first recorded mention of a deaf translator appears in the
mid-17th century and of a deaf interpreter in 1817 (Carty/Macready/Sayers 2009; Hay
2008). This suggests that bilingual Deaf people have been supporting Deaf people in
understanding the world around them, in their interactions within the community, and
with the wider world, for as long as deaf people have come together to form language
communities. Just as interpreters working for governments need to render the accounts
of refugees and asylum seekers into ‘authentic’ accounts for institutions (Inghilleri
2003), Deaf communities also desire ‘authentic’ accounts of the world and institutions
that are easily understandable to them. Deaf people who undertake language broker-
ing, translation, and interpreting are able to provide the community with these authen-
tic accounts.
Since the inception of Deaf clubs, bilingual deaf people have supported the commu-
nity by translating letters, newspapers, and information generally to semi-literate and
monolingual Deaf people. This is still found today (Stone 2009) and is considered by
these translators as part of their responsibility to the community, an example of the
reciprocal sharing of skills within the community’s collectivist culture (Smith 1996).
Much of this language brokering is hidden and starts at an early age. Socialization
as described by Bourdieu (1986) does not happen within the family for deaf children
born to non-Deaf parents. Rather, socialization occurs with deaf and Deaf children at
school and Deaf adults inside and outside of school. It is this process, or Deafhood
(Ladd 2003), that brings about the identity change from deaf to Deaf, and we often
find a sharing of skills, including bilingual skills, within these communities, understood
as being part of Deaf identity. There are accounts of translation and interpreting within
residential schools, not only in the classroom where students support each other (Boud-
reault 2005, 324), but also in situations where students support each other by helping
with correspondence to parents and family (Adam/Carty/Stone 2011). With the intro-
duction of oral education practices, Deaf pupils with Deaf parents have even inter-
preted for other parents (Thomas 2008). These language brokers are called ‘ghost
writers’ in the Australian Deaf community (Adam/Carty/Stone 2011). This type of
activity has also been reported in the US (Bienvenu 1991, cited in Forestal 2005), in the
UK (Stone/Adam 2008), and in Ireland and Argentina (Adam/Dunne/Druetta 2008).
Of note is the irrelevance of the socioeconomic status of the bilingual Deaf person
within the Deaf community. Being a language professional is often considered a high
status job (when working with a high status ‘developed world’ language) in the main-
stream. This, however, is not the case in the Deaf community where language skills
are freely shared along with other skills. This reciprocity is typical within urban sign
language communities where, for example, a Deaf builder with English literacy will
freely give his building and literacy skills to the community as part of community skills
sharing or reciprocity (Stone/Adam 2008).

3.2. Bilingualism and professional interpreting

One central issue in interpreting is the level and type of bilingualism necessary to
perform as an interpreter, and indeed it is one of the reasons for the development of
Deaf interpreters. Grosjean (1997) observes that bilinguals may have restricted do-
mains of language use in one or both of their languages, since the environments in
which the languages are used are often complementary. He also discusses the skill set
of the bilingual individual before training as a translator or interpreter, and notes
that few bilinguals are entirely bicultural. These factors shape the bilingual person’s
language, resulting, for instance, in a lack of vocabulary and/or restricted access to stylistic varieties
in one or more of their languages. Interpreter training must address these gaps since
interpreters, unlike most bilinguals, must use skills in both their languages for similar
purposes in similar domains of life, with similar people (Grosjean 1997).
Interpreters have to reflect upon their language use and ensure that they have
language skills in both languages sufficient for the areas within which they work. Addi-
tionally, because the Deaf community is a bilingual community and Deaf people have
often had exposure to the language brokering of deaf bilinguals from an early age
(Adam/Carty/Stone 2011), a Deaf translation norm (Stone 2009) may exist. The train-
ing of sign language interpreters not only needs to develop translation equivalents, but
also needs to sensitize interpreters to a Deaf translation norm, should the Deaf com-
munity they will be working within have one.
Hearing people with Deaf parents, sometimes known as Children of Deaf Adults
(CODAs), inhabit and are encultured within both the Deaf community and the wider
community (Bishop/Hicks 2008). Hearing native signers, who may be said to be ‘Deaf
(hearing)’ (Stone 2009), often act informally as T/Is for family and friends from an
early age (Preston (1996); cf. first generation immigrants or children of minority com-
munities (Hall 2004)). Their role and identity are different from Deaf (deaf) interpret-
ers who may undertake similar activities, but within institutions such as deaf schools
that the Deaf (hearing) signers do not attend. Hearing native signers may therefore
not have exposure to a Deaf translation norm that emerges within Deaf (deaf) spaces
and places; exposure to this norm may form part of the community’s selection process
when choosing a hearing member of the Deaf community as an interpreter (Stone
2009).
A common complaint from Deaf people is that many sign language interpreters are
not fluent enough in sign language (Alawni 2006; Deysel/Kotze/Katshwa 2006; Allsop/
Stone 2007). This may be true both of hearing people with Deaf parents (cf. van den
Bogaerde/Baker 2008) and of those who came into contact with the community at a
later age. Learners of sign languages often struggle with language fluency (Quinto-
Pozos 2005) and acculturation (Cokely 2005), in contrast to many spoken language
interpreters who only interpret into their first language (Napier/Rohan/Slatyer 2005).
Grosjean (1997) discusses language characteristics of ‘interpreter bilinguals’ in spo-
ken languages and the types of linguistic features seen when they are working as T/Is,
such as: (i) loan translations, where the morphemes in the borrowed word are trans-
lated item by item (Crystal 1997); (ii) nonce borrowings, where a source language term
is naturalised by adapting it to the morphological and phonological rules of the target
language; and (iii) code-switching (producing a word in the source rather than the
target language). Parallels can be seen in sign language interpreting, for example, if
mouthing is used to carry meaning or where fingerspelling of a source word is used in
place of the target sign (Napier 2002; Steiner 1998; for discussion of mouthing and
fingerspelling, see chapter 35). With many sign language interpreters being late
learners of the sign language, such features of interpreted language may occur fre-
quently. There is a great deal of current interest in cross-modal bilingualism in sign
language and spoken language. Recent research includes explorations of the interpret-
ing between two modalities (Padden 2000) as well as code-blending in spontaneous
interaction (Emmorey et al. 2008) and when interpreting (Metzger/de Quadros 2011).
Although the grammars of sign languages differ from those of spoken languages, it
is possible to co-articulate spoken words and manual units of sign languages. This
results in a contact form unique to cross-modal bilingualism. Non-native signers ⫺
both deaf and hearing ⫺ may not utilize a fully grammatical sign language, but instead
insert signs into the syntactic structure of their spoken language. Such individuals may
prefer interpreters to use a bimodal contact form of signing. This has been described
in the literature as ‘transliteration’ (Siple 1998) or sign-supported language (e.g. Sign-
Supported English), and some countries offer examinations and certification in this
contact language form, despite the lack of an agreed way of producing this bilingual
blend (see Malcolm (2005) for a description of this form of language and its use when
interpreting in Canada).
3.3. Interpreter training

The professionalization of sign language interpreting has been similar in most countries
in the Western world: initially, those acting as interpreters would have come from the
community or would have been closely associated with Deaf people (Corfmat 1990).
Early interpreters came from the ranks of educators or church workers (Scott-Gibson
1991), with a gradual professionalization of interpreting, especially in institutional con-
texts such as the criminal justice system. Training and qualifications were introduced,
with qualification certificates awarded by a variety of bodies in different countries,
including Deaf associations, local government, national government, specialist award-
ing bodies, or latterly interpreter associations.
As an example, within the UK, and under the direction of the Deaf Welfare Exami-
nation Board (DWEB), the church “supervised in-service training and examined candi-
dates in sign language interpreting as part of the Board’s Certificate and Diploma
examinations for missioner/welfare officers to the deaf” (Simpson 1991, 217); the list
of successful candidates functioned as a register of interpreters from 1928 onwards.
Welfare officers worked for the church and interpreting was one of their many duties.
This training required the trainee missioner/welfare officers to spend much of their
time in the company of, and interpreting for, deaf people. This socializing with deaf
people occurred within Deaf Societies (church-based social service structures estab-
lished prior to government regulated social services) and other Deaf spaces, with train-
ees learning the language by mixing with Deaf people and supporting communication
and other needs of those attending their churches and social clubs.
From the 1960s onwards, in many countries there were moves to ensure state provi-
sion of social welfare, and during this time, specialist social workers for the deaf were
often also trained in sign language and functioned as interpreters. At different points
in this transitional period, in Western countries Deaf people lobbied their national
administrations for interpreting to be recognised and paid for as a discrete profession
to ensure the autonomy of Deaf people and the independence of the interpreter within
institutional contexts. Within private settings, it is still often family and friends who
interpret, as funds are only supplied for statutory matters. There are differences in
provision in some countries, for instance, Finland (Services and Assistance for the
Disabled Act 380/87), where Deaf people are entitled to a specified number of hours
per year and are free to use these hours as they choose.
With the professionalization of sign language interpreting, national (e.g. RID in
the USA), transnational (e.g. EFSLI), and global interpreting associations have been
established. The World Association of Sign Language Interpreters (WASLI) was estab-
lished in 2005. In January 2006, WASLI signed a joint agreement with the World
Federation of the Deaf (WFD) to ensure that interpreter associations and Deaf associ-
ations work together in areas of mutual interest. It also gives primacy to Deaf associa-
tions and Deaf communities in the documentation, teaching, and development of pol-
icies and legislation for sign languages (WASLI 2006).
WASLI’s conferences enable accounts of interpreting in different countries to
emerge. Takagi (2005) reports on the need in Japan for interpreters who are able to
translate/interpret to and from spoken English because of its importance as a lingua
franca in the global Deaf community. Sign language interpreter training may need to
change to ensure that applicants have knowledge of several spoken and signed lan-
guages before beginning interpreter training, rather than just knowledge of the one
spoken language and one sign language. The need for sign language interpreters to
work to and from a third language is seen in New Zealand, where Māori Deaf people
need interpreters who are fluent in Te Reo Māori, the language of the Māori commu-
nity (Napier/Locker McKee/Goswell 2006) as well as in New Zealand Sign Language.
In Finland, interpreters need to pass qualifications in Finnish, Swedish, and English as
well as Finnish Sign Language. In other regions, such as Africa, multilingualism is part
of everyday life and interpreters are required to be multilingual.
Napier (2005) undertakes a thorough review of current programmes for training
and accreditation in the UK, US, and Australia. There are many similarities among
these three countries, with all having standards for language competence and interpret-
ing competence to drive the formal assessment of interpreters seeking to gain full
professional status. Other countries within Europe have similar structures (Stone 2008)
including Estonia, Finland, and Sweden, where all interpreter training occurs in tertiary
education settings. In contrast, in some countries, interpreters may only receive a few
days or weeks of training, or undertake training outside their home country (Alawni
2006). Although most training programmes start with people who are already fluent
in the local or national sign language before undertaking the training, as interpreters
become more professionalized, training often moves into training institutions, where
language instruction and interpreter training often form part of the same training pro-
gramme (Napier 2005).
In many countries, two levels ⫺ associate and full professional status ⫺ are available
to members of the interpreting profession. These levels of qualification are differenti-
ated in professional associations and registering bodies. Moving to full professional
status often requires work experience and passing a qualifying assessment as well as
initial training. With the inclusion of sign language in five of the articles of the UN
Convention on the Rights of Persons with Disabilities (CRPD) (Article 9.2(e) explicitly
states the need for professional sign language interpreter provision), there is every
expectation that sign language interpreting will be further professionalized (see Stone
(in press) for an analysis of the impact of the CRPD on service provision in the UK).

4. Research into sign language interpreting

Research into sign language interpreting began in the 1980s. Applied studies have
included research on interpreting in different settings such as conference, television,
community, and educational interpreting, and surveys of training and provision; other
studies have explored underlying psycholinguistic and sociolinguistic issues. In recent
years, improved video technology has allowed for more fine-grained analyses of inter-
preting, of the decisions made by the interpreter when rendering one language to
another, and of the target language as a linguistic product.
Early resources on the practice of sign language interpreting (Solow 1981) and on
different models of practice (McIntire 1986) were principally based on interpreters
reflecting on their own practice. Surveys have explored training and the number of inter-
preters working in different countries across Europe (Woll 1988). These were followed
in the 1990s by histories of the development of professional interpreters (Moorhead
1991; Scott-Gibson 1991). A number of textbook resources are available, which include
an overview of the interpreting profession in individual countries (such as Ozolins/
Bridge (1999), Napier/Locker McKee/Goswell (2006), and Napier (2009) for Australia
and New Zealand). Most recently, studies on the quality of sign language interpreters
on television in China have been published in English (Xiaoyan/Ruiling 2009).
One of the first empirical studies of the underlying psychological mechanisms in
interpreting and the sociolinguistics of language choice was Llewellyn-Jones (1981).
This study examined the work of practising interpreters, addressing two research questions:
What is the best form of training for interpreters? How can we realistically assess
interpreting skills? The study also discusses processing models of interpreting
for both sign language and spoken language and explores the effectiveness of information
transfer, time lag between source language output and start of production of the target
language, and the appropriateness of the choice of variety of target language.
Many of the themes addressed by Llewellyn-Jones (1981) are still being explored;
the process of interpreting is not well understood and it is only in recent years that
modern psycholinguistic experimental techniques have been applied to interpreting.
The 1990s saw much more empirical research into interpreting (involving both spoken
languages and sign languages). These empirical studies provide us with further insight
into the process of interpreting (Cokely 1992a) and the interpreting product (Cokely
1992b), using psychological and psycholinguistic methodologies to understand inter-
preting (spoken and signed) (Green et al. 1990; Moser-Mercer/Lambert 1994). This
has led in recent years to an examination of the underpinning cognitive and linguistic
skills needed for interpreter training and interpreting (López Gómez et al. 2007). Yet,
there is still no clear understanding of how interpreters work; many of the models
developed have not been tested empirically. It is expected that modern-day techniques,
both behavioural and neuroscientific (Price/Green/von Studnitz 1999), will in time pro-
vide further understanding of the underlying networks involved in interpreting and the
time course of language processing for interpreting.
Much research has been undertaken from a sociolinguistic perspective, analysing
interpreting not only in terms of target language choice (as in Llewellyn-Jones 1981),
but also in terms of the triadic nature of interpreter-mediated communication, which
influences both spoken language (Wadensjö 1998) and sign language interpreting (Roy
1999). The recognition of the effect of an interpreter’s presence has enabled the exami-
nation of interpreting as a discourse process, rather than categorising interpreters as
invisible agents within interpreter-mediated interaction. This approach has led to
greater exploration of the mediation role of the interpreter as a bilingual-bicultural
unratified conversational partner. Metzger (1999) has described the contributions inter-
preters make within interactions and the agency of interpreters in relation to different
participants within interpreter-mediated events. The series of Critical Link conferences
for interpreters working in the community has also provided a forum for interpreters
(of both sign languages and spoken languages) to share insights and research method-
ologies. Most recently, Dickinson and Turner (2008) have explored interpreting for
Deaf people within the workplace, providing useful insights into the relationship of
interpreters with deaf people and how they position themselves within interpreter-
mediated activity.
Sign language interpreting research has also started to look at interpreting within
specific domains. In the legal domain, there is research on access for Deaf people in
general. The large-scale study by Brennan and Brown (1997), which included court
observations and interviews with interpreters, explored Deaf people’s access to justice
via interpreters and the types of interpreters who undertake work in courts in the UK.
Because of restrictions on recording courtroom proceedings, Russell’s study (2002) in
Canada examined the mode of interpreting (consecutive vs. simultaneous) in relation
to accuracy, thus enabling a fine-grained analysis of the interpreted language in a moot
court. The use of court personnel and the design of the study enabled nearly ‘real’
courtroom interpreting. Russell found that the consecutive mode allowed interpreters
to achieve a greater level of accuracy and provide more appropriate target language
syntax.
The extensive use of interpreters within educational settings has also led to studies
about the active role of the recipient of interpreting (Marschark et al. 2004). With
regard to accuracy of translation and linguistic decision-making processes, Napier
(2002) examined omissions made by interpreters in Australian Sign Language (Auslan)
as a target language vis-à-vis the source language (English) when interpreting in terti-
ary educational settings. Napier explores naturalistic language use and strategic omis-
sions used by interpreters to manage the interpretation process and the information
content. This study extends earlier work on the influence of interpreters within community settings to conference-type interpreting. She also addresses issues of cognitive
overload when omissions are made unconsciously rather than strategically.
Other more recent examinations of interpreters’ target language include studies of
prosody (Nicodemus 2009; Stone 2009). Nicodemus examines the use of boundary
markers by hearing interpreters at points where Deaf informants agreed boundaries
occur, demonstrating that interpreters are systematic in their use of boundary markers.
Stone compares the marking of prosody by deaf and hearing interpreters, finding that
although hearing interpreters mark clausal level units, deaf interpreters are able to
generate nested clusters where both clausal and discourse units are marked and interre-
lated.
Further research looking at the development of fluency in trainee interpreters, transnational corpora of sign language interpretation, and the interpreting process itself
will provide greater insights into the differences between interpreted and naturalistic
language. Technological developments, including improved video frame speeds, high
definition, technologies for time-based video annotation, and motion analysis, should
provide improved approaches to coding and analysing interpreting data. With the mini-
aturization of technology, such techniques may be used in ‘real’ interpreting situations
as well as lab-based interpreting data collection.

5. International Sign interpreting

Where Deaf people from different countries meet, spontaneously developed contact
forms of signing have traditionally been used for cross-linguistic interaction. This form
of communication, often called International Sign (IS), draws on signers’ access to
iconicity and to common syntactic features in sign languages that make use of visual-
spatial representations (Allsop/Woll/Brauti 1995). With a large number of Deaf people
in many countries having ASL as a first or second sign language, it is increasingly
common for ASL to serve as a lingua franca in such settings. However, ASL uses
fingerspelling more extensively than many other sign languages, a fact which reduces
ASL’s appeal to many Deaf people with limited knowledge of English or the Roman
alphabet. It is possible that an ‘international’ ASL may evolve and that this will be
used as a lingua franca at international events (also see Chapter 35, Language Contact
and Borrowing).
In the absence of a genuine international language, the informal use of IS has been
extended to formal organisational contexts (WFD, Deaflympics, etc.). In these contexts,
a formally or informally agreed-upon international sign lexicon for terminology relat-
ing to meetings (e.g. ‘regional secretariat’, ‘ordinary member’) is used. This lexicon
developed from the WFD’s initial attempts in the 1970s to create a sign ‘Esperanto’
lexicon, called Gestuno, by selecting “naturally spontaneous and easy signs in common
use by deaf people of different countries” (BDA 1975, 2). Besides IS serving as a direct
form of communication between users of different sign languages, IS interpretation
(both into and from IS) is now also increasingly provided.
IS is useful in providing limited access via interpretation where interpretation into
and out of specific sign languages is not available. There have been few studies into IS
interpreting and publications on this topic have followed the general trend of sign
language interpreting literature, with personal reflection and introspection leading the
way (Moody 1994; Scott-Gibson/Ojala 1994), followed by later empirical studies.
Locker McKee and Napier (2002) analysed video-recordings of interpretation from
English into IS in terms of content. They identified difficulties in annotating IS inter-
pretation since, as a situational pidgin, IS has no fixed lexicon. The authors then focus
on typical linguistic structures used by interpreters and infer the strategies employed
by the interpreters. As expected, the pace of signing is slower than for interpretation
into a sign language. The authors also report that the size of sign space is larger than
for interpretation into a national sign language, although it is unclear how this compari-
son was made. Mouthings and mouth gestures are mentioned, with IS interpretations
having fewer mouthings but making enhanced use of mouth gestures for adverbials
and other non-manual markers for emphasising clause and utterance boundaries. With
an increasing number of sign language corpora of various types, in the future these
comparisons can be made in a more detailed manner.
The IS interpreters use strategies common to all interpretation, such as maintaining
a long lag-time to ensure maximum understanding and thus a maximally relevant inter-
pretation. Use of contrasting locations in space facilitates differentiation and serves as
a strategy alongside slower language production to enhance comprehension. Abstract
concepts are made more concrete, with extensive use of hyponyms to allow the audi-
ence to retrieve the speaker’s intent. Similarly, role-shift and constructed action are
used extensively to assist the audience to draw upon experience to infer meaning from
the IS interpretation. Metaphoric use of space also contributes to ease of inference on
the part of the audience. Context-specific information relevant to the environment
further enables the audience to infer and recover meaning.
Rosenstock (2008) provides a useful analysis of an international Deaf event (Deaf-
Way II) and the IS interpreting used there. She describes the conflicting demands on
the IS interpreters: “At times, as in the omissions or the use of tokens, the economic
considerations clearly override the need for iconicity. In other contexts, such as lexical
choices or explanations of basic terms, the repetitions or expansions suggest a heavier
reliance on an iconic motivation” (Rosenstock 2008, 154). This elegantly captures many
of the competing factors interpreters manage when providing IS interpreting. Further
studies are clearly needed, with more information on who acts as an IS interpreter,
what linguistic background IS interpreters have, and possible influences of language
and cultural background on IS interpreting. There are also as yet no studies of how
interpreters (Deaf and hearing) work from IS into other languages, both signed and
spoken.
Of most applied interest is the fact that IS interpreting can be successful
as a means of communication, depending on the experience of the users of the inter-
preting services. This provides a unique window into inference and pragmatic language
use where the conversational partner is required to make at least as much effort in
understanding as the signer/speaker makes in producing understandable communica-
tion. Such research will illuminate how we understand language at a discourse level and how
interpreters can work at a meaning-driven level of processing.

6. Conclusions
This chapter has sketched the development of the sign language interpreting profession
and the changes within sign language interpreting and Deaf communities. Research
into sign language interpreting began even more recently than research on sign lan-
guage linguistics. Much of the groundwork has now been laid, and the field can look forward to an increasing number of studies that will provide data-driven evidence from a growing range of sign languages. This will in turn provide a broader understand-
ing of the cognitive and linguistic mechanisms underlying the interpreting process and
its associated products.

7. Literature
Adam, Robert/Carty, Breda/Stone, Christopher
2011 Ghostwriting: Deaf Translators Within the Deaf Community. In: Babel 57(3), 1⫺19.
Adam, Robert/Dunne, Senan/Druetta, Juan Carlos
2008 Where Have Deaf Interpreters Come from and Where Are We Going? Paper Pre-
sented at the Association of Sign Language Interpreters (ASLI) Conference, London.
Alawni, Khalil
2006 Sign Language Interpreting in Palestine. In: Locker McKee, Rachel (ed.), Proceedings
of the Inaugural Conference of the World Association of Sign Language Interpreters.
Coleford: Douglas McLean Publishing, 68⫺78.
Alexander, Claire/Edwards, Rosalind/Temple, Bogusia
2004 Access to Services with Interpreters: User Views. York: Joseph Rowntree Foundation.
Allsop, Lorna/Stone, Christopher
2007 Collective Notions of Quality. Paper Presented at Critical Link 5, Sydney, Australia.
Allsop, Lorna/Woll, Bencie/Brauti, Jon-Martin
1995 International Sign: The Creation of an International Deaf Community and Sign Lan-
guage. In: Bos, Heleen/Schermer, Trude (eds.), Sign Language Research 1994. Hamburg:
Signum, 171⫺188.
Bahan, Ben
1989 Notes from a ‘Seeing’ Person. In: Wilcox, Sherman (ed.), American Deaf Culture. Silver
Spring, MD: Linstok Press, 29⫺32.
Bahan, Ben
2008 Upon the Formation of a Visual Variety of the Human Race. In: Bauman, H-Dirksen L. (ed.), Open Your Eyes: Deaf Studies Talking. Minneapolis: University of Minnesota Press,
83⫺99.
Bassnett, Susan/Trivedi, Harish
1999 Post-Colonial Translation: Theory and Practice. London: Routledge.
Bishop, Michelle/Hicks, Sherry (eds.)
2008 Hearing, Mother Father Deaf: Hearing People in Deaf Families. Washington, DC: Gal-
laudet University Press.
Bogaerde, Beppie van den/Baker, Anne E.
2009 Bimodal Language Acquisition in KODAs. In: Bishop, Michelle/Hicks, Sherry (eds.),
Hearing, Mother Father Deaf: Hearing People in Deaf Families. Washington, DC: Gal-
laudet University Press, 99⫺132.
Boudreault, Patrick
2005 Deaf Interpreters. In: Janzen, Terry (ed.), Topics in Signed Language Interpreting. Am-
sterdam: Benjamins, 323⫺356.
Bourdieu, Pierre
1986 The Forms of Capital. In: Richardson, John G. (ed.), Handbook for Theory and Re-
search for the Sociology of Education. New York: Greenwood Press, 241⫺258.
Brennan, Mary/Brown, Richard
1997 Equality Before the Law: Deaf People’s Access to Justice. Coleford: Douglas McLean.
Caledonian Mercury (Edinburgh, Scotland), Thursday 3 July, 1817, Issue 14916.
Cokely, Dennis
1992a Interpretation: A Sociolinguistic Model. Burtonsville, MD: Linstok Press.
Cokely, Dennis (ed.)
1992b Sign Language Interpreters and Interpreting. Burtonsville, MD: Linstok Press.
Cokely, Dennis
2005 Shifting Positionality: A Critical Examination of the Turning Point in the Relationship
of Interpreters and the Deaf Community. In: Marschark, Marc/Peterson, Rico/Winston,
Elizabeth (eds.), Sign Language Interpreting and Interpreter Education: Directions for
Research and Practice. Oxford: Oxford University Press, 3⫺28.
Corfmat, Percy
1990 Please Sign Here: Insights Into the World of the Deaf. Vol. 5. Worthing and Folkestone:
Churchman Publishing.
Crystal, David
1997 The Cambridge Encyclopaedia of Language. Cambridge: Cambridge University Press.
Deysel, Francois/Kotze, Thelma/Katshwa, Asanda
2006 Can the Swedish Agency Model Be Applied to South African Sign Language Interpret-
ers? In: Locker McKee, Rachel (ed.), Proceedings of the Inaugural Conference of the
World Association of Sign Language Interpreters. Coleford: Douglas McLean, 60⫺67.
Dickinson, Jules/Turner, Graham
2008 Sign Language Interpreters and Role Conflict in the Workplace. In: Valero-Garcés,
Carmen/Martin, Anne (eds.), Crossing Borders in Community Interpreting: Definitions
and Dilemmas. Amsterdam: Benjamins, 231⫺244.
Dikyuva, Hasan
2008 Mardin Sign Language. Paper Presented at the CLSLR3 Conference, Preston, UK.
Emmorey, Karen/Borinstein, Helsa/Thompson, Robin/Gollan, Tamar
2008 Bimodal Bilingualism. In: Bilingualism: Language and Cognition 11, 43⫺61.
Forestal, Eileen
2005 The Emerging Professionals: Deaf Interpreters and Their Views and Experiences of
Training. In: Marschark, Marc/Peterson, Rico/Winston, Elizabeth (eds.), Sign Language
Interpreting and Interpreter Education: Directions for Research and Practice. Oxford:
Oxford University Press, 235⫺258.
Fox Tree, Erich
2009 Meemul Tziij: An Indigenous Sign Language Complex of Mesoamerica. In: Sign Lan-
guage Studies 9(3), 324⫺366.
Goldin-Meadow, Susan
2003 The Resilience of Language: What Gesture Creation in Deaf Children Can Tell Us About
How All Children Learn Language. New York: Psychology Press.
Green, David/Schweda Nicholson, Nancy/Vaid, Jyotsna/White, Nancy/Steiner, Richard
1990 Hemispheric Involvement in Shadowing vs. Interpretation: A Time-Sharing Study of
Simultaneous Interpreters with Matched Bilingual and Monolingual Controls. In: Brain
and Language 39(1), 107⫺133.
Groce, Nora Ellen
1985 Everyone Here Spoke Sign Language: Hereditary Deafness on Martha’s Vineyard. Cam-
bridge, MA: Harvard University Press.
Grosjean, François
1997 The Bilingual Individual. In: Interpreting 2(1/2), 163⫺187.
Hall, Nigel
2004 The Child in the Middle: Agency and Diplomacy in Language Brokering Events. In:
Hansen, Gyde/Malmkjær, Kirsten/Gile, Daniel (eds.), Claims, Changes and Challenges
in Translation Studies. Amsterdam: Benjamins, 258⫺296.
Hay, John
2008 Deaf Interpreters Throughout History. Paper Presented at the Association of Sign Lan-
guage Interpreters (ASLI) Conference, London.
Hermann, Alfred
1956 Interpreting in Antiquity. In: Pöchhacker, Franz/Shlesinger, Miriam (eds.), The Inter-
preting Studies Reader. London: Routledge, 15⫺22.
Hitchcock, Tim/Shoemaker, Robert
2008 The Proceedings of the Old Bailey. [Available from: http://www.oldbaileyonline.org, case
reference t17710703⫺17; retrieved January 2008]
Inghilleri, Moira
2003 Habitus, Field and Discourse: Interpreting as Socially Situated Activity. In: Target 15(2),
243⫺268.
Jepson, Jill
1991 Urban and Rural Sign Language in India. In: Language in Society 20, 37⫺57.
Johnson, Robert E.
1991 Sign Language, Culture, and Community in a Traditional Yucatec Maya Village. In:
Sign Language Studies 73, 461⫺474.
Ladd, Paddy
2003 In Search of Deafhood. Clevedon: Multilingual Matters.
Lee, Raymond
2004 A Beginner’s Introduction to Deaf History. Feltham, UK: British Deaf History Society
Publications.
Llewellyn-Jones, Peter
1981 Simultaneous Interpreting. In: Woll, Bencie/Kyle, Jim/Deuchar, Margaret (eds.), Per-
spectives on British Sign Language and Deafness. London: Croom Helm, 89⫺103.
Locker McKee, Rachel/Napier, Jemina
2002 Interpreting Into International Sign Pidgin: An Analysis. In: Sign Language & Linguis-
tics 5(1), 27⫺54.
López Gómez, Maria José/Bajo Molina, Teresa/Padilla Benítez, Presentación/Santiago de Torres, Julio
2007 Predicting Proficiency in Signed Language Interpreting: A Preliminary Study. In: Inter-
preting 9(1), 71⫺93.
Malcolm, Karen
2005 Contact Sign, Transliteration and Interpretation in Canada. In: Janzen, Terry (ed.),
Topics in Signed Language Interpreting. Amsterdam: Benjamins, 107⫺133.
Marsaja, I Gede
2008 Desa Kolok ⫺ A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen:
Ishara Press.
Marschark, Marc/Sapere, Patricia/Convertino, Carol/Seewagen, Rosemarie/Maltzen, Heather
2004 Comprehension of Sign Language Interpreting: Deciphering a Complex Task Situation.
In: Sign Language Studies 4(4), 345⫺368.
McIntire, Marina (ed.)
1986 Interpreting: The Art of Cross-Cultural Mediation. Proceedings of the 9th RID National
Convention. Silver Spring, MD: RID Publications.
Metzger, Melanie
1999 Sign Language Interpreting: Deconstructing the Myth of Neutrality. Washington, DC:
Gallaudet University Press.
Metzger, Melanie/Quadros, Ronice M. de
2011 Cognitive Control in Bimodal Bilingual Sign Language Interpreters. Paper Presented
at the Workshop “Text: Structures and Processing” at the 33rd Annual Conference of
the German Linguistic Society (DGfS), Göttingen.
Mikkelson, Holly
1996 The Professionalization of Community Interpreting. In: Jérôme-O’Keeffe, Muriel (ed.),
Global Vision: Proceedings of the 37th Annual Conference of the American Translators
Association. Alexandria, VA: American Translators Association, 77–89.
Miles, Michael
2000 Signing in the Seraglio: Mutes, Dwarfs and Jestures at the Ottoman Court 1500⫺1700.
In: Disability & Society 15(1), 115⫺134.
Miles, Michael
2004 Locating Deaf People, Gesture and Sign in African Histories, 1450s⫺1950s. In: Disabil-
ity & Society 19(5), 531⫺545.
Moody, Bill
1994 International Sign: Language, Pidgin or Charades? Paper Presented at the Issues in
Interpreting 2 Conference, University of Durham.
Moorhead, David
1991 Social Work and Interpreting. In: Gregory, Susan/Hartley, Gillian (eds.), Constructing
Deafness. London: Pinter, in Association with the Open University, 259⫺264.
Morales, Alejandro/Hanson, William E.
2005 Language Brokering: An Integrative Review of the Literature. In: Hispanic Journal of
Behavioral Sciences 27(4), 471⫺503.
Moser-Mercer, Barbara/Lambert, Sylvie
1994 Bridging the Gap: Empirical Research in Simultaneous Interpretation. Amsterdam: Ben-
jamins.
Napier, Jemina
2002 Sign Language Interpreting: Linguistic Coping Strategies. Coleford: Douglas McLean.
Napier, Jemina
2005 A Time to Reflect: An Overview of Signed Language Interpreting, Interpreter Educa-
tion and Interpreting Research. In: Locker McKee, Rachel (ed.), Proceedings of the
Inaugural Conference of the World Association of Sign Language Interpreters. Coleford:
Douglas McLean, 12⫺24.
Napier, Jemina (ed.)
2009 International Perspectives on Sign Language Interpreter Education. Washington, DC:
Gallaudet University Press.
Napier, Jemina/Locker McKee, Rachel/Goswell, Della
2006 Sign Language Interpreting: Theory and Practice in Australia and New Zealand. Sydney:
Federation Press.
Napier, Jemina/Rohan, Meg/Slatyer, Helen
2005 Perceptions of Bilingual Competence and Preferred Language Direction in Auslan/
English Interpreters. In: Journal of Applied Linguistics 2(2), 185⫺218.
Nicodemus, Brenda
2009 Prosodic Markers and Utterance Boundaries in American Sign Language Interpretation.
Washington, DC: Gallaudet University Press.
Nyst, Victoria
2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, Uni-
versity of Amsterdam. Utrecht: LOT.
Ozolins, Uldis/Bridge, Marianne
1999 Sign Language Interpreting in Australia. Melbourne: Language Australia.
Padden, Carol
2000 Simultaneous Interpreting Across Modalities. In: Interpreting 5(2), 171⫺187.
Pepys, Samuel
1666 The Diary of Samuel Pepys (November 1666). Ed. by Henry Benjamin Wheatley.
Project Gutenberg Release #4200. [Available from: http://onlinebooks.library.upenn.
edu/webbin/gutbook/lookup?num=4169; retrieved 24th November 2011]
Preston, Paul
1996 Chameleon Voices: Interpreting for Deaf Parents. In: Social Science & Medicine 42,
1681⫺1690.
Price, Cathy/Green, David/Studnitz, Roswitha von
1999 A Functional Imaging Study of Translation and Language Switching. In: Brain 122(12),
2221⫺2235.
Quinn, Gary
2010 Schoolization: An Account of the Origins of Regional Variation in British Sign Lan-
guage. In: Sign Language Studies 10(4), 476⫺501.
Quinto-Pozos, David
2005 Factors that Influence the Acquisition of ASL for Interpreting Students. In: Marschark,
Marc/Peterson, Rico/Winston, Elizabeth (eds.), Sign Language Interpreting and Inter-
preter Education: Directions for Research and Practice. Oxford: Oxford University
Press, 159⫺187.
Ragir, Sonia
2002 Constraints on Communities with Indigenous Sign Languages: Clues to the Dynamics
of Language Genesis. In: Wray, Alison (ed.), The Transition to Language. Studies in the
Evolution of Language. Oxford: Oxford University Press, 272⫺294.
Rosenstock, Rachel
2008 The Role of Iconicity in International Sign. In: Sign Language Studies 8(2), 131⫺159.
Roy, Cynthia B.
1999 Interpreting as a Discourse Process. New York: Oxford University Press.
Russell, Debra
2002 Interpreting in Legal Contexts: Consecutive and Simultaneous Interpretation. Burtons-
ville, MD: Linstok Press.
Sandler, Wendy
1999 Prosody in Two Natural Language Modalities. In: Language and Speech 42, 127⫺142.
Scott-Gibson, Liz
1991 Sign Language Interpreting: An Emerging Profession. In: Gregory, Susan/Hartley, Gil-
lian (eds.), Constructing Deafness. London: Pinter in Association with the Open Univer-
sity, 253⫺258.
Scott-Gibson, Liz/Ojala, Raili
1994 International Sign Interpreting. Paper Presented at the Fourth East and South African
Sign Language Seminar, Uganda.
Simpson, Stewart
1991 A Stimulus to Learning, a Measure of Ability. In: Gregory, Susan/Hartley, Gillian (eds.),
Constructing Deafness. London: Pinter in Association with the Open University, 217⫺
226.
Simpson, Stewart
2007 Advance to an Ideal: The Fight to Raise the Standard of Communication Between Deaf
and Hearing People. Edinburgh: Scottish Workshop Publications.
Siple, Linda A.
1998 The Use of Addition in Sign Language Transliteration. In: Weisel, Amatzia (ed.), Issues
Unresolved: New Perspectives on Language and Deaf Education. Washington, DC: Gal-
laudet University Press, 65⫺75.
Skutnabb-Kangas, Tove
1981 Bilingualism or Not: The Education of Minorities. Clevedon: Multilingual Matters.
Smith, Theresa B.
1996 Deaf People in Context. PhD Dissertation, University of Washington.
Solow, Sharon
1981 Sign Language Interpreting: A Basic Resource Book. Silver Spring, MD: National Asso-
ciation of the Deaf.
Steiner, Ben
1998 Signs from the Void: The Comprehension and Production of Sign Language on Televi-
sion. In: Interpreting 3(2), 99⫺146.
Stone, Christopher
2008 Whose Interpreter Is She Anyway? In: Roy, Cynthia (ed.), Diversity and Community
in the Worldwide Sign Language Interpreting Profession: Proceedings of the 2nd Confer-
ence of the World Association of Sign Language Interpreters, held in Segovia, Spain,
2007. Coleford: Douglas McLean, 75⫺88.
Stone, Christopher
2009 Towards a Deaf Translation Norm. Washington, DC: Gallaudet University Press.
Stone, Christopher
in press The UNCRPD and ‘Professional’ Sign Language Interpreter Provision. In: Schaeffner,
Christina (ed.), The Critical Link 6: Interpreting in a Changing Landscape. Amster-
dam: Benjamins.
Stone, Christopher/Adam, Robert
2008 Deaf Interpreters in the Community ⫺ The Missing Link? Paper Presented at the CIT
Conference, Puerto Rico.
Stone, Christopher/Woll, Bencie
2008 DUMB O JEMMY and Others: Deaf People, Interpreters and the London Courts in
the 18th and 19th Centuries. In: Sign Language Studies 8(3), 226⫺240.
Takagi, Machiko
2005 Sign Language Interpreters of Non-English Speaking Countries Who Support Interna-
tional Activities of the Deaf. In: Locker McKee, Rachel (ed.), Proceedings of the Inau-
gural Conference of the World Association of Sign Language Interpreters. Coleford:
Douglas McLean, 25⫺31.
Thomas, Esther
2008 Interview with Clara Allardyce. In: NEWSLI, July 2008, 18.
WASLI (World Association of Sign Language Interpreters)
2006 Joint Statement of WASLI and WFD (adopted on 23/01/2006). [Available from: http://
www.wasli.org/joint-agreements-p21.aspx; retrieved December 2011]
Wadensjö, Cecilia
1998 Interpreting as Interaction. London: Longman.
Woll, Bencie
1988 Report on a Survey of Sign Language Interpreter Training and Provision Within the
Member Nations of the European Community. In: Babel 34(4), 193⫺210.
Woll, Bencie/Ladd, Paddy
2010 Deaf Communities. In: Marschark, Marc/Spencer, Patricia E. (eds.), Oxford Handbook
of Deaf Studies, Language, and Education (2nd Edition). Oxford: Oxford University
Press, 159⫺172.
Xiaoyan, Xiao/Ruiling, Yu
2009 Survey on Sign Language Interpreting in China. In: Interpreting 11(2), 137⫺163.

Christopher Stone, London (United Kingdom)

41. Poetry
1. Introduction
2. Definition(s) of sign language poetry
3. Sources of sign language poetry
4. Sign language poets
5. Purposes of sign language poetry
6. Genres within the poetic genre
7. Figurative poetic language
8. Visual creativity
9. Repetition, rhythm, and rhyme
10. Conclusion
11. Literature

Abstract

This chapter explores some linguistic, social, and cultural elements of sign language
poetry. Taking sign language poetry to be a language art-form recognised by its commu-
nity of users as somehow noticeably ‘different’ and poetic, I identify characteristics of
signing poets and the cultural, educational, and personal uses of sign language poetry.
Genres of signed poetry include signed haiku and short narrative poems, as well as ‘lyric’
poems. Deliberately ‘deviant’ creation of meaning is seen where figurative language is
used extensively in signed poems, especially as language form and meaning interact to
produce metaphors, similes, and hyperbolic forms, while ‘deviant’ creation of highly
visual new signs draws further attention to the poetic language. Noticeable elements of
repetition occur at the grammatical, sign, and sub-sign levels to create additional poetic
effect.
1. Introduction
This chapter will consider some general points about the function, content, and form
of sign language poetry and the contexts in which it occurs. It highlights linguistic,
sociolinguistic, and cultural features of the art form, offering some definitions of sign
language poetry and reviewing its origins and purpose. A brief examination of some
of the different types of poems within the poetic genre is followed by a description of
the form of language used in the poems, including figurative and visually creative lan-
guage and repetition, rhyme, and rhythm in signs. Examples are drawn from poems in
a range of sign languages, showing the similarities (and some differences) in sign lan-
guage poetry across languages. Many of the illustrative examples come from poems
that are available to wider audiences either through the Internet or on commercial DVDs from a reliable source. Readers should note that sign language poems often have no
permanent record, being more like poems in oral traditions. Dates of performances or
published recordings may not reflect their time of composition. The dates given here
for some poems are those of recordings on commercial video or DVD format or of
poems available on the Internet, especially at www.bristol.ac.uk/bslpoetryanthology (a
list of the poems mentioned in this chapter that are available at this website is provided
in section 11). Other poems mentioned here have been performed live or have been
recorded by individuals but the recordings are not available commercially.
It is important to remember that sign language poetry is enjoyable. It is often fun.
Signed poems frequently make people smile, laugh, and applaud and make deaf chil-
dren giggle with delight. It is a very positive, celebratory aspect of life for communities
of people whose daily experiences are not always easy. Any analysis or observations
on sign language poetry should serve to highlight why this is so. The more we under-
stand the workings and sheer genius of this language art form, the richer our lives
become ⫺ Deaf or hearing, signer or non-signer.

2. Definition(s) of sign language poetry


Defining poetry is probably impossible. Perhaps the one defining feature of any poetry
is that it defies definition. It may be just as naïve and fruitless to seek a definition of
sign language poetry, beyond saying that “we know a poem when we see one”. How-
ever, even this approach is strongly related to culture and literacy, as we will see later,
because many Deaf people have little exposure to the poetry of their own language
and even less education in the way to appreciate and engage with it. Given this, there
may be considerable disagreement over what is and is not a sign language poem, but
there are certain elements of form, content, and function of poetry that many Deaf
poets, at least, appear to agree upon.
In a general sense, it is understood that sign language poetry is the ‘ultimate’ in
aesthetic signing, in which the language used is as important as ⫺ or even more impor-
tant than ⫺ the message. Poetic language in general stems from everyday language but
deviates from it so that the language itself stands out in the foreground of the utter-
ance, increasing its communicative power beyond simple propositional meaning (see,
for example, Leech 1969). Sign language poetry is an art form that entertains and
educates members of the Deaf community ⫺ creating, challenging, and strengthening
the bonds within the community. Beyond these rather general definitions, however,
lies the challenge of specifying how all this is realised.
Sign language folklore plays a key role in poetry. In American Sign Language
(ASL), signlore (Carmel 1996) includes stories and games using the manual alphabet
and numbers, sign language puns, and deliberate blending of different signs. These
elements may all be seen in some signed poems. Different Deaf communities have
their own signlore (see, for example, Smith/Sutton-Spence 2007), but it appears that
signlore is a major part of any genre of creative art signing such as narrative, anecdote,
and jokes, and that there is no clear dividing line between these and poems (especially
narrative poems, which give brief stories of incidents, and humorous poems). Be-
cause poetic elements of sign language may be seen in other creative language forms,
what one person calls a poem, another person might see as a different genre containing
poetic language. The ASL poet Clayton Valli (1993) remarked that differences between
ASL poetry and prose were matters of degree. Richard Carter, a British Sign Language
(BSL) poet and storyteller, explained his view (in a seminar at the University of Bristol,
2008) that stories are more likely to express events or ideas realistically and more
literally than poems do. Stories use altered signs much less than poems, where the
events or ideas are expressed more metaphorically and with more attention to the
language. Richard offered the following examples:

In a story involving a bear in a zoo, the bear might fingerspell normally, but in my poem
[Sam’s Birthday], the bear fingerspells using bear-paw clawed handshapes. In my poem
Looking for Diamonds, I use the slow-motion technique to show running towards the
diamond but in a story, I would just sign running normally. The slow-motion is poetic ⫺
more dreamlike. (Author’s translation)

Poetry is a cultural construction and Deaf people’s ideas about the form and function
of sign language poetry do not necessarily coincide with those of hearing people in
their surrounding social and literary societies. Deaf people around the world may also
differ in their beliefs about what constitutes sign language poetry. They may even deny
that their culture has poetry. This may be especially so where Deaf people’s experien-
ces of poetry have been overwhelmingly of the written poetry of a hearing majority.
For a long time, Deaf people have been surrounded by the notion that spoken lan-
guages are for high status situations and that ‘deaf signing’ is inferior and only fit for
social conversation. Poetry has been seen as a variety of language that should be con-
ducted in spoken language, because of its status.
What is considered acceptable or relevant as poetry varies over time and with the perspectives of different poets and audiences. For example, opinions are divided over
whether signed songs are a valid form of signed poetry. Signed songs (such as hymns
or pop songs) are signed translations of song lyrics, performed in accompaniment to the
song. Rhythm (a key feature of signed poetry, see section 9.1) is a strong component in
signed songs but this is driven by the song, not by the sign language. Where translations
are more faithful to the words of the song, there is little creation of the strong visual
images often considered important in original sign language poetry, but other perform-
ers of signed songs may bring many of the features of creative language into their
translations, such as metaphorical movement and blending (see sections 7 and 8).
Signed songs are particularly enjoyed by audiences with some experience of the origi-
nal songs ⫺ either because they have some hearing or because they heard the songs
before becoming deaf. The performances are clearly poetic. The debate, however, is
whether they belong to the genre of sign language poetry. They will not be discussed
further here.
‘Deaf poetry’ and ‘sign language poetry’ are not necessarily the same thing. Both
are composed by people for whom sound-based language is essentially inaccessible. However, while sign language poetry is normally composed and performed by Deaf people, not all deaf poets compose and perform in sign language; some prefer instead to write poetry in another language. Poetry that is written by deaf people is clearly different in its
form (composed in a two-dimensional static form rather than a three-dimensional ki-
netic form), is frequently different in its content, and often has a very different function
than sign language poetry. Despite observations that some written poetry by deaf peo-
ple can take positive approaches to being deaf and the visual world, there is no doubt
that there are more references to sound and its loss in deaf-authored written poetry than in signed poetry (Esmail 2008). We will see that themes in sign language poetry usually
emphasise the visual world and the beauty of sign language (Sutton-Spence 2005). Loss
and negative attitudes towards deafness are far less common. That said, signed poetry
by some young Deaf people addresses themes of resentment toward being deaf, and it
is important to acknowledge this.

2.1. Historical change

Historical records in France and the USA mention performances of sign language
poetry at the large Deaf Banquets of the 19th century, although it is not clear what
form these poems took (Esmail 2008). Early ASL “poetry” recorded in films from the
mid 20th century was often purely rhythmic “percussion signing” in which simple signs
or phrases were repeated in different rhythms, often in unison (Peters 2000). These
were often performed at club outings. In Britain, for most of the 20th century, sign
language poetry competitions were the venue for performance of the art form, and
most poems were translations of English poetry, for which highly valued skills in Eng-
lish were required. There was also a tendency for the performances of these poems to
incorporate large, dramatic, or “theatrical” gestures or styles in the signing. This link
between English poetry and signed poetry may account for some of the attitudes that
many older British Deaf people still hold towards sign language poetry.
In the 1970s, the pioneering Deaf poet Dorothy (‘Dot’) Miles began experimenting with creating ASL poetry based on poetic principles of the sign language itself ⫺ using repetition of rhythm and of sub-sign parameters such as handshape, movement, or location (see
chapter 2, Phonology, for details). Although this was an important development in
signed poetry, many of her earlier works were composed simultaneously in English
and a sign language (either ASL or BSL), and it was nearly two decades before sign
language stood alone in her poems. As Dorothy’s work loosened its ties with English,
she began building more visually creative elements into her poetry, by using more
poetic structures of space, balance, and symmetry, and by making more creative use of
classifiers and role shift. Clayton Valli’s 1993 analysis of ASL poetry also used theories
derived from English poetry (although his work often contained many forms unrelated
to English poetics). Contemporary poets around the world like Paul Scott, Wim Em-
younger Deaf people are again playing with mixed forms of English and BSL to create
poetry (see, for example, Life and Deaf, 2006), and current British Deaf poets draw on
a range of these poetic and linguistic devices to create powerful visual representation of
imagery.
Video technology has also been highly significant in the linguistic and cultural devel-
opment of signed poetry. It is possible to see a division between “pre-video” creative sign
and “post-video” sign language poetry (Rose 1994). Video allows poets to refine their
work, creating a definitive “text” or performance of that text that is very complex, and
others can review it many times, unpacking dense meanings from within the rich lan-
guage in a way that was not possible when the only access to a poem was its live
performance. Krentz (2006) has argued that the impact of video on sign language
literature has been as significant as that of the printing press upon written literature.
As well as allowing work to be increasingly complex, video has greatly expanded the
impact of sign language poetry through its permanence and widespread distribution,
and shifted the “ownership” of works away from the community as a whole towards
individual poets who compose or perform poems in the videos.

2.2. Themes and content

Theme may determine whether or not people consider something poetry. For example,
in some circles in the USA in the 1990s, a creative piece was not considered to be ASL
poetry unless it was about Deaf identity. Almost two decades later, sign language po-
etry may address any topic. This is further evidence that sign language poetry, like
other poetry, changes over time as poets and audiences develop different expectations
and attitudes.
Despite the move away from a focus on specifically Deaf-related themes, Christie
and Wilkins (2007) found that over half of the ASL poems they reviewed in their
corpus could be interpreted as having themes relating to Deaf identity, including Deaf
resistance to oppression and Deaf liberation. Similarly, in the poems collected in 2009⫺
2010 for the online BSL poetry anthology, just over half had Deaf protagonists or
characters and thus could be seen to address “Deaf issues” including Deaf education,
sign language, and Deaf community resistance. Nevertheless, in terms of content, sign
language poems also often address the possibly universal “Big Issues” explored by any
poetry: the self, mortality, nationality, religion, and love. Examples of poems carrying
any of these themes may be seen in the online BSL poetry anthology. Sign language
poetry tackles the themes from the perspective of a Deaf person and/or their Deaf
community, using an especially visual Deaf take on them, often showing the world
from a different perspective. Morgan (2008, 25) has noted: “There are only two ques-
tions, when you come down to it. What is the nature of the world? And how should
we live in it?” Deaf poets ask, in paraphrase, “What is the nature of the Deaf world?
And how should Deaf people live in it?” Consequently, sign language poetry frequently
addresses themes such as a Deaf person’s identity, Deaf people’s place in the world,
Deaf values and behaviour, the ignorance of the hearing society, the visual and tactile
sensory Deaf life experience, and sign language. Questions of nationality, for example,
may reflect on the place of a Deaf person within a political nation or may consider
the worldwide Deaf Nation. Paul Scott’s BSL poem Three Queens (2006) and Nelson
Pimenta’s Brazilian Sign Language poem The Brazilian Flag (2003) both consider the
poets’ political, historical, and national heritage from a Deaf perspective (Sutton-
Spence/de Quadros 2005). Dorothy Miles’ ASL poem Word in Hand (Gestures, 1976)
explores membership of the World Deaf Nation for any deaf child in any country.
Clayton Valli’s ASL poem Deaf World (1995) considers all worlds, rejecting the hearing
world in favour of a world governed by a Deaf perspective.
Most sign language poetry (at least in Britain and several other countries whose
sign language poetry I have been privileged to see) is ‘positive’ ⫺ optimistic, cheerful,
celebratory, and confident, showing pride in being Deaf and delight in sign language.
While some poems may be considered ‘negative’ ⫺ referring to problems of oppres-
sion, frustration, and anger ⫺ they often deal with these issues in a positive or at least
humorous way, and even the angry poems are frequently very funny, albeit often with
a rather bleak, dark humour.
Signed poetry frequently addresses issues of sign language or the situation of the
Deaf community. Very rarely do we see laments for the loss of hearing, although young
people may refer to this more (for example, some children’s poems in Life and Deaf,
2006). We are more likely to see teasing or objecting to the behaviour of hearing
people, and this includes poems about fighting against oppression. Poems are con-
cerned with celebrating sign language and what is valued in the daily Deaf experience:
such as sight, communication, and Deaf togetherness. In BSL, Paul Scott’s Three
Queens (2006) and Dorothy Miles’ The Staircase (1998), for example, show these el-
ements clearly. Celebration of Deaf success and discovering or restating identity, at
both the collective and individual levels, is also important, although some poems may
issue challenges to members of Deaf communities. For example, Dorothy Miles’ poem
Our Dumb Friends (see Sutton-Spence 2005) exhorts community members to stop in-
fighting. Many poems do all this through extended metaphor and allegories.

2.3. Formal aspects of sign language poetry

The form of language used in sign language poetry is certainly one of its key identifying
features. As the ultimate form of aesthetic signing, poetry uses language highly crea-
tively, drawing on a wide range of language resources (which will be considered in
more detail below), such as deliberate selection of sign vocabulary sharing similar
parameters, creative classifier signs, role shift (or characterisation), specific use of
space, eye-gaze, facial expressions, and other non-manual features. In addition, repeti-
tion and the rhythm and timing of signs are frequently seen as crucial to signed poetry.
It is not easy to determine which of these elements are essentially ‘textual’ (that is,
inherent to the language used) and which are performance-related. In written poetry,
separation of text and performance might be important and is mostly unproblematic,
but in sign language poetry, they are so closely interlinked that it is perhaps counter-
productive to seek distinctions. Deaf poets and their audiences frequently mention the
importance of poetic signing being ‘visual’ and this highly visual effect can be achieved
through both the text and the performance. Clearly, all sign language is visual because
it is perceived by the eye, but the specific meaning here alludes to the belief that the
signer must create a strong visual representation of the concepts, in order to produce
a powerful visual image in the mind of the audience.
Different styles of poem are considered appropriate for audiences of different ages.
Many deaf poets attach great importance to sharing poetry with deaf children and
encouraging the children to create their own work. Younger children’s poetry focuses
more on elements of repetition and rhythm and less on metaphor. Older children
may be encouraged to play with the ambiguous meaning of classifier signs, presenting
alternative interpretations of the size and identity of the referent. The richer metaphor-
ical uses of signed poetry, however, are considered more appropriate for older audien-
ces or for those with some more advanced understanding or “literacy” of ways to
appreciate poetry (Kuntze 2008). We cannot expect audiences with no experience of
sign language poetry to understand signed poems and make inferences in the way
intended by the poets. Such literacy needs to be taught, and many Deaf audiences still
feel a little overwhelmed by, alienated from, or frankly bored by sign language poetry.
A member of the British Deaf community recently claimed to us that sign language
poetry was only for “Clever Deaf”. In fact, far from being only for Clever Deaf, sign
language poetry is part of the heritage of the signed folklore seen in any Deaf Club.

3. Sources of sign language poetry

As was mentioned above in relation to signlore, many of the roots of sign language
poetry lie in Deaf folklore. Bascom (1954, 28) has observed that “[i]n addition to
the obvious function of entertainment and amusement, folklore serves to sanction the
established beliefs, attitudes and institutions, both sacred and secular, and it plays a
vital role in education in nonliterate societies”. Folklore transmits culture down the
generations, provides rationalisation for beliefs and attitudes if they are questioned,
and can be used to put social pressure on deviants from social norms. It may be said
that many of the functions of sign language poetry are identical, because they stem
from the same source. Some poetic forms may spread and be shared with other Deaf
communities; for example, there is some evidence that some ABC games that origi-
nated in ASL have been adopted and adapted by other Deaf communities around the
world. However, it appears that sign language poets mostly draw upon those language
and thematic customs that can be appreciated by their own audiences.
Sign language poetry has also grown out of specific language learning environments.
Sometimes poetry is used for second-language learning in adults. Poetry workshops
have also been used to encourage sign language tutors to think more deeply about sign
language structure in order to aid their teaching. Signing poetry to Deaf children is a
powerful way to teach them about sign language in an education system where any
formal sign language study is often haphazard at best. Many Deaf poets (including
Paul Scott, Richard Carter, John Wilson, Peter Cook, and Clayton Valli in the UK and
USA) have worked extensively with children to encourage them to compose and per-
form sign language poetry. Poetry improves children’s confidence and allows them to
express their feelings and develop their Deaf identity, by encouraging them to focus on
elements of the language and play with the language form, exploring their language’s
potential. When children are able to perform their work in front of others, it further
validates their language and gives them a positive sense of pride. Teachers also use
poems to teach lessons about Deaf values. In a narrative poem by Richard Carter
(discussed in more detail below), a signing Jack-in-a-Box teaches a little boy about
temptation, self-discipline, guilt, and forgiveness.

4. Sign language poets

Sign language poetry is usually composed by members of the Deaf community. Fluent
hearing signers might compose and perform such poetry (for example, Bauman’s work
referred to in Bauman/Nelson/Rose 2006), and language learners might do so as part
of their exploration of the language (Vollhaber 2007). However, these instances are
rare and peripheral to the core of Deaf-owned sign language poetry. A belief in the
innate potential for any deaf signer to create sign language poetry spurs Deaf teachers
to bring sign language poetry to deaf children, and organisers to hold festivals such as
the BSL haiku festival where lay signers can watch established poets and learn poetry
skills for themselves. Nevertheless, it is clear that some signers in every community
have a specific poetic gift. They are the ones sought out at parties and social events to
tell jokes or stories or to perform. Rutherford (1995) describes them as having “the
knack”; Bahan (2006) has called them “smooth signers”. Some British Deaf people
say that a given signer “has beautiful signing” or simply “has it”. These signers may
not immediately recognise their own poetic skills but they come to be accepted as
poets. Sometimes this validation comes from others in the community but it may also
come from researchers at universities ⫺ if analysis of your poetic work merits the
attention of sign language linguistic or cultural research, then it follows that you must
be a poet. Sometimes it comes from invitations to perform at national or international
festivals or on television or from winning a sign language poetry competition. Some-
times it is simply a slow realisation that poetry is happening within.
It might be expected that Deaf poets would come from Deaf families, where they
grew up with the greatest exposure to the richest and most diverse forms of sign lan-
guage. Indeed, some recognised creative signers and poets do have Deaf parents who
signed to them as children. However, many people recognised as poets did not grow
up with sign language at home. Clive Mason, a leader in the British Deaf community
(who did not grow up with sign language at home), has suggested to me why this might
be the case (personal communication, December 2008). There are two types of creative
sign language ⫺ that which has been practised to perfection and performed, and that
which is more spontaneous. Clive suggested that the former is characteristic of people
from Deaf families who grew up signing, while the latter is seen in people whose
upbringing was primarily oral. Signers with Deaf parents might be more skilled crea-
tively in relation to the more planned and prepared performances of poetry because
they have been exposed to the rules and conventions of the traditional art forms. Well-
known and highly respected American Deaf poets, such as Ben Bahan and Ella Mae
Lentz, and some British poets, including Ramon Wolfe, Judith Jackson, and Paul Scott,
grew up in signing households. On the other hand, signers with an oral upbringing are
skilled in spontaneous creativity simply because it was the sole creative language
option when they had no exposure to the traditional art forms. Recognised poets who
grew up without signing Deaf parents include Dot Miles, Richard Carter, John Wilson,
and Donna Williams in Britain, and Nigel Howard, Peter Cook, and Clayton Valli in
North America.
Many sign language poets work in isolation but increasing numbers of workshops
and festivals, both nationally and internationally, allow them to exchange ideas and see
each other’s work. Video recordings of other work, especially now via the Internet,
also allow the exchange of ideas. These developments mean that we can expect more
people to try their hand at poetry and for that poetry to develop in new directions.

5. Purposes of sign language poetry

Poetry can be seen as a game or a linguistic ‘luxury’, and its purpose can be pure
enjoyment. For many Deaf poets and their audiences, that is its primary and worthy
aim. It is frequently used to appeal to the senses and the emotions, and humour in
signed poems is especially valued. Sometimes the humour may be “dark”, highlighting
a painful issue but, as was mentioned above (section 2.2), this is done in a safely
amusing way. For many Deaf people, much of the pleasure of sign language poetry lies
in seeing their language being used creatively. It strengthens the Deaf community by
articulating Deaf Culture, community, and cultural values, especially pride in sign lan-
guage (Sutton-Spence 2005; Sutton-Spence/de Quadros 2005).
Poetry shows the world from new and different perspectives. Sign language poetry
is no exception. The BSL poet John Wilson described how his view of the world
changed the first time he saw a BSL poem at the age of 12 (Seminar at Bristol Univer-
sity, February 2007). Until then, he felt as though he saw the world through thick fog.
Seeing his first BSL poem was like the fog lifting and allowing him to see clearly for
the first time. He recalls that it was a brief poem about a tree by a river but he felt
like he was part of that scene, seeing and experiencing the tree and river as though
they were there. He laughed with the delight of seeing the world clearly through that
poetic language for the first time at 12 years old.
Sign language poems also empower Deaf people to realise themselves through their
creativity. Many sign poets tell of their use of poetry to express and release powerful
feelings that they cannot express through English. Dorothy Miles wrote in some un-
published notes in 1990 that one aim for sign language poetry is to satisfy “the need
for self-identity through creative work”. Poets can gain satisfaction from having people
pay attention to them. Too often Deaf people are ignored or marginalised (and this is
especially true in childhood), so some poets are fulfilled by the attention paid to them
while performing. Richard Carter’s experience reveals one personal path to poetry that
supports Dorothy’s claim and which may resonate with other Deaf poets. In a seminar
given to post-graduate students at Bristol University (February 2008), he explained:

I have a gift. I really believe it comes from all my negative experiences before and the
frustrations they created within me. I resolved them through poetry in order to make
myself feel good. I think it’s because when I was young, I didn’t get enough attention. […]
So I tried to find a way to get noticed and I feel my poetry got me the attention.
Beyond self-fulfilment, however, many poets frequently mention the wish to show
other people ⫺ hearing people, Deaf children, or other members of the Deaf commu-
nity ⫺ what sign language poetry can do. For hearing non-signers, the idea that poetry
is possible in sign languages is an eye-opener. Hearing people can be shown the beauty
and complexity of sign language and, through it, learn to respect Deaf culture and
Deaf people.
In many cases, sign language poetry shows a Deaf worldview, considering how the
world might be if it were a Deaf world (see Bechter 2008). Bechter argues that part of
the Deaf cultural worldview is that their world is ⫺ or should be ⫺ made of Deaf lives.
Consequently, the job of a Deaf storyteller or poet is to see or show Deaf lives where
others might not see them. The key linguistic and metaphorical devices for creating
visions of these alternative realities are role shift and anthropomorphism (or personifi-
cation). Role shift in these ‘personification’ pieces uses elements that Bechter identifies
as being central to the effect ⫺ lack of linear syntax or citation signs, and extensive
use of “classifier-expressions, spatial regimentation and facial affect” (2008, 71). Role
shift blurs the distinction between text and performance considerably. The elements
used allow poets to closely mimic the appearance and behaviour of people described
in the poems. Importantly, role shift frequently allows the poet to take on the role of
non-human entities and depict them in a novel and entertaining way. By “becoming”
the entity, the poet highlights how the world would be if these non-human entities
were not just human, but also deaf, seeing the world as a visual place and communicat-
ing visually (see chapter 17, Utterance Reports and Constructed Action, for details).
A BSL sign sometimes used in relation to Deaf creativity, including comedy and poetry,
is empathy ⫺ a sign that might be glossed as ‘change places with’, in relation to the
poet or comedian changing places with the creature, object, or matter under discussion.
Empathy is the way that Deaf audiences enjoy relating to the performer’s ideas. As
the character or entity comes to possess the signer’s body, we understand we are seeing
that object as a Deaf human.

6. Genres within the poetic genre

Given that much of the source of sign language poetry lies in Deaf folklore, we might
expect the poetic genres in sign languages to reflect the language games and stories
seen there. Indeed, some pieces presented as ASL poems are within the genres of
ABC-games, acrostics (in which the handshape of each sign corresponds to a letter
from the manual alphabet, spelling out a word), and number games seen in ASL folk-
lore. Clayton Valli, for example, was a master of such poems, essentially building on
these formats with additional rhythm or creativity in sign use. Poetic narratives are
also the sources for longer pieces that might be termed (for a range of reasons) narra-
tive poems.
Traditions of poetry from other cultures also shape sign language poetry, so that genres from those traditions may influence genres in sign language poems (especially haiku, described
below). Established ideas of form in English poetry lay behind many of Dorothy Miles’
sign language poems. In a television interview for Deaf Focus in 1976, she explained:
I am trying […] to find ways to use sign language according to the principles of spoken
poetry. For example, instead of rhymes like ‘cat’ and ‘hat’, I might use signs like wrong
and why, with the same final handshape. [in this ASL case, the d-handshape]

Many sign language poems today are essentially “lyric poems” ⫺ short poems, densely
packed with images and often linguistically highly complex. Additionally, Beat poetry,
Rap, and Epic forms have all been used in sign language poems. However, perhaps
the most influential “foreign” genre has been the haiku form (Kaneko 2008). Haiku
originated in Japan as a verse form of seventeen syllables. Adherence to the syllable constraints is less important in modern English-language haiku, where it matters more that the poem be very brief, express a single idea or image, and stir up feelings. Haiku is sometimes called “the six second poem”. Haiku’s strong emphasis
on creating a visual image makes sign language an ideal vehicle for it. Dorothy Miles
defined haiku as “very short poems, each giving a simple, clear picture”. Of her poems
in this genre, she wrote, “I tried to do the same thing, and to choose signs that would
flow smoothly together” (1988, 19). The features that Dorothy identified appear to
have become the “rules” for a signed haiku. Her four Seasons haiku verses (1976) ⫺
Spring, Summer, Autumn, and Winter ⫺ have been performed in ASL by other per-
formers and were analysed in depth by Klima and Bellugi as part of their ground-
breaking and highly influential linguistic description of ASL, The Signs of Language
(1979). Their analysis of Summer, using their ideas of internal structure, external struc-
ture, and kinetic superstructure, is well worth reading. Signed haiku has subsequently
given rise to signed renga, a collaboratively created and performed set of related haiku-
style poems. Signed renga has spread internationally and has been performed in coun-
tries including Britain, Ireland, Sweden, and Brazil.

7. Figurative poetic language


In traditional haiku, images are represented directly and literally, so
figurative language such as metaphor is not usually considered appropriate. However,
in many other sign language poems, figurative language is used to increase the commu-
nicative power of the signs in the poem. Metaphor, simile, hyperbole, and personifica-
tion are all figurative devices used in sign language poetry. Reference has already
been made to the importance of personification in sign language poetry. Hyperbole, or
caricature, is seen as a central element of creative sign language, often shown through
exaggerated facial expression. It is frequently a source of humour in poetry and often
works in conjunction with personification, so that its skilled use is highly valued. Many
signed poems and comments upon the figurative language within them may be found
at http://www.bristol.ac.uk/bslpoetryanthology.

7.1. Metaphor

Metaphor may be seen in many signed poems where the apparent topic does not coincide with the underlying theme. When the content gives no direct clue
to the theme in a signed poem, its interpretation is dependent on the expectations of
the audience, guided at times by the poet’s own explanations. For example, Clayton
Valli’s ASL poem Dandelions describes dandelions that keep growing in a lawn, de-
spite the gardener’s attempts to pull them out or mow them flat. Although there is no
mention of deafness in the poem, signing audiences might understand that the content
of the poem carries the theme of Deaf people’s resilience in the face of constant at-
tempts to destroy their sense of themselves and the Deaf community. Paul Scott’s
entertaining BSL poem The Tree (2006) ostensibly describes the life cycle of a tree,
but he has explained that it is to be understood as a commentary on the belief that
the Deaf community cannot be erased simply by cutting it down. The tree may be
felled and dragged away, but seeds will grow again into another tree. Similarly, many
of Dorothy Miles’ “animal poems” such as Elephants Dancing (1976) or The Ugly
Duckling (1998) are only superficially about animals. Their themes address the situa-
tion of Deaf people in relation to the wider hearing society. In the former poem,
Dorothy describes elephants that had been taught to “dance” for human entertainment
by having their legs chained to inhibit their natural movement. She ends her English
version of this poem with the lines “I hope one day to see/ Elephants dancing free”.
Despite the focus on elephants and the lack of any reference to Deaf people, this is
clearly intended to present an analogy with Deaf people being required to use speech
for the satisfaction of others, rather than their natural mode of signs. She explained
this in an introduction to a performance of the poem she gave in London in 1992.
Even without this introduction, however, the use of sign language in the poem and the
expectations of Deaf audiences would lead to this interpretation. The message of the
well-known story of the Ugly Duckling can be interpreted in many ways, but when the
story is presented as a sign language poem, most Deaf audiences will see it as the story
of a Deaf child growing up among hearing people before finding a Deaf identity in the
Deaf community.
Not all Deaf audiences will bring the same expectations to a poem in sign language,
however. Wilcox (2000), considering a signed poem about two dogs forced to cooperate
because they are chained together, has observed that members of different national
Deaf communities interpreted it in different ways according to their cultural experien-
ces and beliefs: Deaf Americans, aware of divisions within their own community, sug-
gested that the dogs stood for ASL users and Signed English users; Deaf Swiss-German
people saw the dogs as deaf and hearing people who needed to work together; and
Deaf Italians thought the dogs were people of different races, and not Deaf at all
because they believed that a defining part of being Deaf is to be united.
It is important to note that not all larger metaphorical themes in signed poems are
specifically related to deafness. Dorothy Miles’ poem Hang Glider (1976) is about the
fear that anyone ⫺ Deaf or hearing ⫺ might have when facing a new situation that
promises great reward in success but great loss in failure. Paul Scott’s Too Busy to Hug
is a warning to us all to open our eyes to the beauty of nature around us. Richard
Carter’s Looking for Diamonds is about the search for enduring love ⫺ something
both Deaf and hearing people might long for.
Many conceptual and orientational metaphors (Lakoff/Johnson 1980) are similar in
the thought processes and languages of several cultures. For example, many spoken
and sign languages share widespread conceptual metaphors such as LIFE IS A JOUR-
NEY and THE MIND IS A CONTAINER and orientational metaphors like GOOD
IS UP and BAD IS DOWN (see, for example, Wilcox (2000); for further discussion,
see chapter 18). These metaphors are exploited in sign language poetry through the
use of symbolism in the formational parameters of signs. For example, signs in poems
that move repeatedly upward may be interpreted as carrying positive meaning and
downward signs carry negative connotations. Thus, as Taub (2001) has described, Ella Mae Lentz’s ASL poem The Treasure uses images of burying and uncovering treasure
to describe appreciation of ASL ⫺ signs moving downward show negative views to-
ward the language and signs that move up show positive views. Thanks, a poem in
Italian Sign Language (LIS) by Giuranna and Giuranna (2000), contrasts downward-
moving signs when describing perceived shortcomings of the language with upward-
moving signs used to insist on the fine qualities of LIS.
Kaneko (2011), exploring signed haiku, found a strong correlation between hand-
shape and meaning in many of the poems she considered. Signs using an open hand-
shape correlate with positive semantic meaning and those with a tense ‘clawed’ hand-
shape tend to carry negative semantic meaning. This correlation is seen generally in
the BSL lexicon (and I would expect from informal observation and remarks in publi-
cations on other sign languages that BSL is not unique in this respect). Using the Dictionary of British Sign Language/English (Brien 1992), Kaneko calculated that of the 2,124 signs listed, 7 % had a positive valence and 15 % had a negative valence (the remaining signs car-
ried neutral meaning). However, the distribution of these semantic attributes was dif-
ferent for signs with different handshapes. Of the signs made with the fully open 5-handshape (one of the most common handshapes in BSL), 22 % were positive and 6 % negative, suggesting that the open handshape is associated with positive meaning. However, the semantic distribution of signs using the 5-handshape with bent fingers was 4 % positive and 31 % negative. Similar distributions can be found in the V- and bent V-handshape, where 9 % of signs with the V-handshape were semantically negative while 23 % of the clawed or bent V-signs were negative. This form-meaning pattern
is frequently carried over into signed poetry, including signed haiku. Nigel Howard’s
haiku Deaf (2006) describes the happy event of a baby’s birth, but on discovery of the child’s deafness, the doctors fit the child with a cochlear implant. All the signs in the poem use the open B-handshape (formationally very similar to the 5-handshape in its openness) until the final sign, which involves two bent V-handshapes for cochlear
implantation. In Wim Emmerik’s Sign Language of the Netherlands (NGT) poem Gar-
den of Eden (1995), the open handshapes of signs depicting the garden and the tree
(that is, representing paradise before The Fall) are followed by signs made with bent
handshapes referring to the snake, the worm, and even the apple. Clearly, an apple is
not inherently semantically negative but in this context, it was a source of the trouble,
so using a sign with a bent handshape fits into the overall image of something bad
following from something good.
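
By way of illustration only (the mini-lexicon, glosses, and valence labels below are invented for this example, and only the counting procedure mirrors the kind of tabulation Kaneko performed over Brien’s dictionary), such a handshape/valence distribution could be computed in a few lines of Python:

from collections import Counter

# Hypothetical mini-lexicon: (gloss, handshape, valence) triples.
# A real analysis would draw on a full dictionary such as Brien (1992).
lexicon = [
    ("happy", "open-5", "positive"),
    ("praise", "open-5", "positive"),
    ("calm", "open-5", "neutral"),
    ("angry", "clawed-5", "negative"),
    ("suffer", "clawed-5", "negative"),
    ("hold", "clawed-5", "neutral"),
]

# Tally, for each handshape, how its signs distribute over valences.
totals = Counter(handshape for _, handshape, _ in lexicon)
valences = Counter((handshape, valence) for _, handshape, valence in lexicon)

for handshape in sorted(totals):
    for valence in ("positive", "negative", "neutral"):
        share = 100 * valences[(handshape, valence)] / totals[handshape]
        print(f"{handshape}: {share:.0f} % {valence}")

Run over a full lexicon, this yields exactly the kind of percentages reported above.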
Metaphorical use of language in poetry is also seen with reference to speed and
movement of signs. Richard Carter has suggested (seminar at the University of Bristol,
February 2008) that speed and movement of the signs may partly determine if some-
thing is a poem or a narrative. In storytelling, the timing and movement of signs are
motivated by the way that things actually move. In poetry, the timing and movement
are internal to the poem, and used to express more metaphorical perspectives and to
increase emotional impact. The slow-motion running in Looking for Diamonds does
not represent slow running, but rather the effort needed to reach the diamond.

7.2. Simile and analogy

Simile and analogy in signed poetry are common, both in the content of the poems
and the images presented. John Wilson’s BSL poem From the Depths clearly juxtaposes
the destruction of whales with the destruction of deaf schools and the sign language
and deaf culture that arise in them. A characteristic feature of analogies and similes
in signed poems is the way they blend form with meaning. Richard Carter’s Operation,
supposedly addressed to a small child, likens an impending operation to fixing a televi-
sion. He uses the clear analogy “having your operation is like repairing a television”, producing signs that are remarkably similar in form to compare the two situa-
tions. For example, the same handshape and movement are used to depict the shaking
fuzzy lines on the faulty television screen and the discomfort in the child’s tummy.
Only the location differs ⫺ in neutral space for the television screen and at the lower
abdomen for the child’s pain. A sign meaning go-to-sleep involves the closing of both
hands into fists. When attributed to unplugging the faulty television, the hands are
closed to show the disappearance of the picture from the screen, but when they are
located at the eyes, the same sign shows the child going to sleep under anaesthetic.
This blending of similarly formed signs with parallel meanings occurs throughout the
poem and allows the poet to develop the analogy extensively.
Of similes seen at the level of individual phrases, one of the most famous in BSL
poetry is that in Dorothy Miles’ Trio (1998) in which darkness is explicitly likened to
a bat ⫺ dark like b-a-t bat-flies (“Darkness, like a bat/ flies close”). A widespread
BSL sign dark (or night) uses two 5-handshapes crossing over in front of the face.
An entity classifier to show a bat flying at the face uses the same handshapes crossed
over but linked at the thumbs. Thus, although there are many conceptual reasons why
we might liken darkness to a bat, a key reason in BSL is because the sign dark is like
the sign bat.

8. Visual creativity

8.1. Neologism

Poetic language becomes more obtrusive and obvious when the poet breaks the rules
of the language. One way to break the rules is to create neologisms ⫺ words/signs no
one has seen before. As well as drawing attention to the language through the novelty
of these signs, creating new signs allows poets to create rhyming schemes (see section 9).
Neologisms made as productive signs can be visually very dense, creating considerable
extra meaning in few words. Creativity that produces a strong visual image is especially
valued in sign language poetry, and non-manual elements, such as facial expression,
body movement, and eye gaze are very important. An example of a highly creative
sign producing a memorable visual image is twin-trees, created by Dorothy Miles in
her poem Trio (1998; see, for example, Sutton-Spence (2001, 2005) for commentary on
this sign). Using a highly marked configuration of two arms, each creating the sign
tree, joined at the elbows, this sign directly depicts a tree reflected in water.
Some neologisms are made through borrowing from English or by modifying exist-
ing, established, signs in the lexicon (Sutton-Spence 2005) but many more are created
from extensive use of the productive lexicon. This lexical resource is arguably distinct
from the established lexicon (see, for example, Brennan 1990) and is where so-called
“classifier signs” combine elements of language and gesture to show the movement,
appearance, location, and behaviour of the referent (see chapters 8 and 34 for discus-
sion). Novel signs may represent the characters or entities (or parts of those characters
or entities), often showing how they interact with the surroundings. The classifier signs
permit role shift and, especially, the characterisation and personification of animals
and other non-human objects.

8.2. Personification

In each of Dorothy Miles’ animal poems (both in ASL and BSL), such as The Cat
(1976) and The Ugly Duckling (1998), she “becomes” the animal. In his various BSL
narrative poems, Richard Carter becomes a bear, a reindeer, a fish, and a Jack-in-a-
Box. In all these examples, the non-human entity is shown imaginatively with specially
chosen emotions and actions to reflect the chosen character, and they all sign in a way
that we understand such a creature might sign. We may think that this challenge is
possible because many of the body parts of the animals (or the already anthropomor-
phic Jack-in-a-Box) can be mapped fairly directly onto the human body. They all have
eyes, mouths, heads, bodies, and arms or legs of some sort and so does the poet. How-
ever, the challenge is to represent them with imagination (and to depict body parts we
do not have, such as the reindeer’s antlers that occur in Carter’s poem). Additionally,
even totally inanimate objects can take possession of the poet’s body. Maria Gibson’s
prize-winning haiku poem The Kettle shows her becoming a kettle. In Paul Scott’s poem
Five Senses (2006), each sense in turn very explicitly takes possession of the poet in
order to show how the world might be if all our senses were Deaf. In his poem The
Tree (2006), Paul becomes the trees, and in Too Busy to Hug, he becomes a very
convincing mountain. In many instances, the role shift is achieved non-manually while
the hands articulate conventional “human” signs. There are too many examples of
poems using this device of personification to enumerate (see Sutton-Spence/Napoli
2010). At issue is not that this is “easy” to do in a visual language, but rather that it is
possible and that it is a highly skilled way of representing alternative Deaf worldviews.

8.3. Ambiguity

Creativity can also lead to ambiguity, which increases the communicative power of the
poetic language. An established lexical sign may be a homonym with two possible
meanings, but more often one established sign and one productive sign may have the
same form but different meanings. In Dorothy Miles’ ASL poem Our Dumb Friends
(1976), a dog’s tail asks, “Where’s the excitement?” This poetic device works because
the productive sign tail-wagging used to express excitement has the same form as the
lexical item where. When this poem is performed in BSL (see Sutton-Spence 2005),
the dog is asking, “What’s the excitement?” because the same productive sign depicting
the excited wagging tail has the BSL lexical meaning what. Most frequently, however,
a productive sign may have more than one interpretation, depending on the context
brought to it by the language environment and the audience’s expectations. It is impor-
tant to note here that the audience’s literacy in poetry and their understanding of the
context in which the poem is produced are often needed to appreciate the ambiguities
in the poem. Productive signs are underspecified semantically and derive much of their
meaning from context. Where poets deliberately obscure the context, the audience
needs to create it from their understanding of the frame of the poem.
Dorothy Miles’ poem Language for the Eye (1976), composed to show children the
richness of sign language, famously plays with ambiguous classifier signs. This can be
seen in the signs that form the ASL and BSL lines equivalent to her English translation
“Follow the sun from rise to set/ Or bounce it like a ball”. The same handshape repre-
senting a “spherical body” is used for both the sun and the ball, with the entertainment
deriving from the sudden shifts in interpretation of its meaning. Richard Carter (per-
sonal communication, January 2009) has described a child’s poem in which a boy opens
the curtains one morning to be blinded by the bright sun. He puts on his sunglasses,
reaches out and grabs the sun, then takes a bite out of it for his morning fruit. The curved
5-handshape in the sign is the same whether it represents the whole shining sun or a
boy handling an orange.

8.4. Morphing
In the process of morphing, one sign becomes another by merging or blending the
parameters in their production. Morphing smoothes the transition between signs to
make an aesthetically attractive flow of signing and also links the meaning of ideas
through the form of the signs, as we saw above in the discussion of signed similes. Paul
Scott’s BSL poem Three Queens repeatedly uses a motif of a flag flying to emphasise
the idea of national unity and continuity. This sign uses a B-handshape angled with
the fingers pointing contralaterally, a handshape and orientation also seen in the sign
recognise. When, at the climax of the poem, BSL is recognised as a true language by
the national government, the hand making the sign recognise draws back to become
the flag flying over the whole nation. Morphing is seen in the poetry of many different
sign languages. Wim Emmerik uses the same device in his NGT poem Garden of Eden
(1995) in the final sign (which may be glossed as bastard or a**hole), which morphs
out of the preceding sign apple, as the two signs share the same handshape and orienta-
tion, and the movement of the sign apple naturally ends where the expletive sign
begins. Adam’s bite of the apple is closely related to the poet’s opinion of his behaviour
by blending the two signs. The device is especially widespread in haiku poems where
the poem needs to carry as much meaning as possible in as few signs as possible.
Dorothy Miles’ Seasons haiku (1976) is rich with examples of signs blending into each
other, as the movement and location of one sign start where the previous sign ended
and handshapes of each sign blend into each other. Richard Carter’s Infancy, which is
part of a haiku quartet on the stages of life, uses the same handshapes and configura-
tion to represent both the birth canal and the child holding the mother’s breast to
suckle. The change in position of the poet’s head in relation to the two signs tells the
audience of the change in meaning.

9. Repetition, rhythm, and rhyme


Neologisms draw attention to poetic language because they are unexpected. They are
elements that have not previously existed in the language, so audiences are obliged to
notice them for their intrusive irregularity. In contrast, obtrusive regularity (Leech
1969) brings the form of the language to the audience’s attention by using an existing
element of the language with unusual frequency. For many commentators, obtrusive
repetition is one of the defining features of poetry. Repetition creates patterns that
stand out as unusual. Elements may be repeated at different linguistic levels, including
syntactic, lexical, morphological, and sub-sign levels.
While an element may be repeated just once or many times, the “power of three”
in repetition is frequently seen in sign language poetry. This threefold repetition has
been noted as a major characteristic of folklore in general (see, for example, Olrik
1909 [1965]) and is seen in many types of signed folklore, feeding its way into poetry.
Amongst other things, there may be three clear stanzas within the poem; three different
scenarios may be described; three characters may be depicted; three actions may be
performed; or one action may be repeated three times. Hall (1989) has noted that the
importance of the number three is very culture-bound. Although three is widespread
in Europe, in Navajo culture the number is four, and she argues that this is also the case for ASL. However, Booker (2004) has noted that four can be the number of perfection in languages that use three as a pattern for repetition, as three elements lead to a fourth stage of “transformation”.

9.1. Rhythm

Rhythm is often considered to be a key element of signed poetry and may be used to
determine if a performance should be considered primarily poetic. Kaneko (2008, 149)
has defined rhythm for the purposes of exploring sign language poetry as an “arrange-
ment of distinct events according to time, perceived as forming a pattern”. In sign
language poems, these “distinct events” are recognisable movements or lack of move-
ment (holds) in signs. She notes that this definition encompasses key ideas of regularity
and predictability of the events (which the poet can choose to maintain or break for
effect) as well as their sequential temporal nature. She also highlights the importance
of the perception of the events as rhythmic. Rhythm is experienced only if the audience
has identified the patterns of regularity. Repeated use of movements and holds with
identical timing can be highly poetic. Blondel and Miller (2000, 2001), analysing the rhythmic structure of nursery rhymes in French Sign Language, found that the creation of rhythm was an important element of these pieces.
Repetition and rhythm call attention once again to the distinction between perform-
ance and text. The repetition of handshape, movement, and location may be seen as
“text” where it is part of the internal structure. Rhythm may be a part of “text” when
the speed and timing of the signs’ movement serves some identifying or referential
purpose, or may have a “performance” element, external to the poem’s signs but con-
tributing to its overall meaning (see Klima/Bellugi 1979).
In some poems, the smooth and regular “metronome” beat of the signing in per-
formance can signal poetic intent. In others, rhythm becomes metaphorically linked to
abstract notions. Dorothy Miles said in a BBC TV interview in 1985: “If they want to
make it exciting, they will have a fast rhythm. If they want it slow, boring, sleepy, they’ll
have a long rhythm […]” (Sutton-Spence 2005). She demonstrated this in her BSL
poem Trio, in which Morning is characterised by a fast rhythm, Afternoon by instances
of long, slow rhythm, and Evening by a series of stops and holds. Rhythm itself may
be used to stand for equilibrium while lack of rhythm equates with lack of equilibrium.
Paul Scott’s Five Senses (2006) is signed with a smooth, predictable rhythm to show
that everything is functioning perfectly until the fourth sense (Hearing) fails to engage,
when the rhythm is lost. As the fifth sense of Sight takes centre stage, the rhythm picks
up again, but faster and more energetic than before, highlighting the key role of sight
in a signer’s world. As well as slow and fast rhythms, there might also be jerky, robotic
rhythms which are sometimes seen in creative pieces about machines and technology.
Kaneko has also identified rhythm as occurring in the “visual density” of signs.
Signs conveying a great deal of information (usually highly productive, creative signs,
with large amounts of non-manual input) may contrast with signs carrying much less
information (usually simple lexical items) to create patterns. Just as final signs in poems
are usually characterised by holds or extended movement, so they are often visually
dense ⫺ making them especially salient.

9.2. Repetition of signs

Repetition of entire signs may create a range of poetic effects. Again, as so much of
sign language poetry is directed towards pleasure in some aspects of language, repeat-
ing signs that are aesthetically pleasing will increase the audience’s enjoyment. This
may be especially the case where the poet has created an entertaining neologism that
the audience appreciated the first time they saw it. Repetition can create expectation
of patterns that may entertain as they continue (as in Ben Bahan’s (2006) ASL narra-
tive poem The Ball Story, or Clayton Valli’s (1995) ASL poems Cow and Rooster or
The Bridge) or break suddenly (as in Paul Scott’s (2006) BSL poem Five Senses).
Repeating signs may also build up visual imagery within the poem. The ASL poem
The Cowboy (which is reported to be well known among Gallaudet University gradu-
ates) repeats signs both across stanzas and within lines, so that the poem opens with
the signs galloping (x5), mountain (x3 on both hands), galloping (x4), gun-slapping-
hip (x3 on both hands), galloping (x2), and tree (x4 on both hands), and closes with
the same signs in reverse order. This extensive repetition sets up an enjoyable rhythm
and builds up a clear image of the cowboy riding.
Paul Scott’s BSL poem Train Journey (2006) repeats many signs to build visual
imagery and to create enjoyable patterns. One of the recognisable features of a long
train journey is its monotonous, repetitive nature, so repetition of signs reflects the
experience. However, when a train passes in the other direction, Scott’s representation
of this sharp interruption to his contemplation of the tedium is highly effective, startling
the audience to laughter. Therefore he uses the same construction again later in the
poem. The rhythmic component of this poem is also central to its success ⫺ a repetitive,
monotonous rhythm of signing that mirrors that of the train journey.
Repeating signs with small alteration is also highly effective. Modifying repeated
signs slightly is one way to show how something may appear the same but be perceived
very differently depending on the context. Modified repeated signs can contrast emo-
tions in different situations. In an earlier part of Richard Carter’s narrative poem about
the Jack-in-a-Box, signs such as go-downstairs, open-door, open-parcel, and wind-
handle-on-box are signed rapidly and exuberantly as the child excitedly opens his
presents before he’s allowed to. Later in the poem, after he has been roundly scolded
by the Jack-in-a-Box, the same signs occur but the movements are smaller, slower, and
more reluctant, emphasising the boy’s guilt and dread at his parents finding out about
his transgression.
Repetition of a whole sign can be used to emphasise neologisms, as pointed out
previously. In an unwritten art form that is essentially transient, audiences do not have
the chance to linger upon a particular complex new sign. Poets may hold the neologism
for an unusually long time to allow the audience to focus on it, or they may repeat it.
Dorothy Miles’ powerful poem Hang Glider opens and ends with the sign here-are-
my-wings. At the start of the poem, the sign is introduced in a fairly low-key way but
at the end, it is repeated several times, each time with more pride, relief, and triumph.
The sign, a neologism made by holding both hands up and the arms open wide, is
complex and brilliant. It is pretty much the largest sign one can make ⫺ and as such
is an especially “sonorant” sign (see Brentari (1998) for a discussion of sonority in sign
language phonology) – and it is closely associated with raising the hands in victory and
success. There is a quasi-Messianic resonance, as the stance reflects the posture of the
famous welcoming statue of Christ the Redeemer in Rio de Janeiro. The sign invites
signers to see triumph in their very hands, as it presents the hands as both the articula-
tors that make the sign wings and as the sign wings itself. With such a complex and
important sign, carrying all the implications of freedom and triumph in the hands of
Deaf people, it would be a wasted opportunity for the poet not to repeat it several
times.

9.3. Repetition of ‘sub-sign’ elements

One of the key elements of many signed poems is the repetition of sub-sign elements
to create patterns that increase the poetic significance of the signs. Clearly, rhythm
generated by timing of movements might be regarded as repetition of sub-sign el-
ements. The repetition of the elements at this level of the language may be seen as
analogous to patterns in spoken language such as rhyme, alliteration, and consonance,
and is often loosely (and cautiously) termed “rhyme”. It is hard to treat it as a direct
correlate of rhyme, however, because rhyme occurs as a result of sequences of sounds
in spoken words, through repetition of word-final sounds, whereas the sign parameters do not occur sequentially ⫺ handshape, location, and movement path occur largely simultaneously and cannot be isolated. There may be some argument for equating movement repetition with “vowel” repetition, as movement is most easily manipulable through time, and treating handshape and location more like “consonant” repetition, but still
the mode of the visual language does make signed “rhyme” different from that in
spoken languages.
Signs may share varying numbers of parameters. They may simply have handshape
or location in common, or they may be identical in both handshape and location but
differ in movement (or share location and movement but differ in handshape, and so
on). The more parameters are shared, the tighter the “rhyme”, but some shared param-
eters may be more salient than others. Certainly, due to the tradition of handshape
games in ASL folklore, many poets and audiences have come to appreciate poems in
which the handshape is repeated (Klima/Bellugi 1979; Sutton-Spence 2001).

9.4. Symmetry

Symmetry and balance are integral parts of poetry. Symmetry may be spatial or tempo-
ral. While both spoken and sign language poems may show temporal symmetry, spatial
symmetry is the preserve of sign languages (with the exception of some written poetry,
such as concrete poetry). Spatial symmetry is especially valued for its visual beauty. It
may be deliberately created or broken for poetic effect. Symbolically it can be a way
to acknowledge contrast without conflict by allowing balance to create harmony be-
tween the different elements.
As sign languages are articulated by humans with bilaterally symmetrical arms, it
should be no surprise that sign language lexicons contain high percentages of spatially
symmetrical signs (Napoli/Wu 2003; Sutton-Spence/Kaneko 2007). However, sign lan-
guage poets often make particular use of the two hands in a way that is marked in
comparison to everyday signing, selecting and creating large numbers of signs that are
symmetrical (Russo/Giuranna/Pizzuto 2001; Crasborn 2006). The BSL poet John Wil-
son (in a seminar at the University of Bristol, 2006) has found that he does not need
to teach symmetry to children in relation to sign language poetry. They seem to create it naturally and only need tutoring in how it can be refined and used
symbolically within poetry. In Wim Emmerik’s NGT poem Garden of Eden (1995),
both hands are used up until the moment when Adam bites into the apple, when only
one hand is used. The symbolism shows symmetry and balance of signs being equated
with harmony in Paradise and the loss of this symmetry with the loss of Paradise.
Symmetry in spatial arrangement of the hands may operate in both the internal and
external poetic structures (a distinction introduced in relation to ASL poetry by Klima/
Bellugi 1979). As part of the internal poetic structure (creating the simple “text” of
the poem), two-handed signs in which the two hands are mirror-images of each other
can be used. To create the external poetic structure, two one-handed signs may be
placed symmetrically simultaneously. Alternatively, one-handed or two-handed signs
can be placed in symmetrically opposing areas sequentially, so that the viewer mentally
constructs an impression of symmetry. Symmetry may be right and left (vertical symme-
try), above and below (horizontal symmetry), and front and back. Additionally, any
symmetrical plane may also be on the diagonal. The first of these planes of symmetry
is the most common because signers are physically vertically symmetrical (Sutton-
Spence/Kaneko 2007).
Symmetry is especially important in sign language haiku, where it helps signers to
produce a great deal of meaning in a very short time as well as being visually appealing
in a genre whose aim is to produce powerful and interesting visual images. Almost any
haiku poem (see, for example, those at www.bristol.ac.uk/bslpoetryanthology) will
show considerable symmetry.

9.5. Effects of repetition

Repetition can create an aesthetic effect, making a poem look elegant or entertaining,
and this can be sufficient reason for its use. Where there is strict discipline set by
repetitive patterns, audiences can admire the poet’s skill in creating them. As the audi-
ence start to detect patterns of repetition within the poem, they can be drawn further
into the poem, led by their expectations of the pattern. If the pattern continues, they
can delight in that; if the pattern created by the repetitions is suddenly broken, there
is the pleasure of the surprise of the shift.
Several commentators have written extensively on the effect of repetitive sub-sign
elements (see, for example, Valli 1993; Sutton-Spence 2000, 2005). It is clear that repeti-
tion may be purely for aesthetic reasons, or it may increase communicative power (as
was described above in our consideration of figurative language). Clayton Valli’s poem
for children Cow and Rooster (1995) simply plays with repetition. The poem uses two
main handshapes ⫺ the Y-handshape for all references to the cow, and the 3-handshape for all references to the rooster. The handshape is maintained even when it would not normally be used for a particular sign. For example, the sign graze or chew-the-cud would normally use a closed-fist handshape but here the sign uses the Y because the activity is attributed to the cow. Additionally, location and movement of
different signs are repeated, as are body posture and eye gaze. Repetition here main-
tains identity of the two characters. Wim Emmerik’s more adult NGT poem Red Light
District (1995) repeatedly uses both hands in the <-handshape ⫺ for the fluttering
eyelashes of the prostitute, the men walking down the street, the physical reaction of
their nether regions to the prostitutes, and the figure ‘Ten’ for the price. This repetition
delights by its imaginative use of the handshape in the rather saucy tale. Dorothy
Miles’ BSL poem The Staircase, an Allegory (1998, composed to celebrate the univer-
sity graduation of Deaf sign language tutors) uses repeated <-handshapes with an
internal fluttering movement to link the key and very positive ideas of glimmering
lights, many people rushing towards a prize, and signed applause. The link between
these three ideas would be lost without the use of the same handshape and internal
movement. Repeated upward movements in this poem also serve to reinforce the ideas
of success and achievement (using the orientational metaphor GOOD IS UP, dis-
cussed above).
Beyond the poetic effect of repetition is its educational importance. Creating texts
that use repetition teaches children and language learners about the language. For
example, single handshape games, in which children or other language learners need to produce coherent stories or poems using signs sharing the same handshape, can be used to extend sign vocabularies or simply to encourage learners to think about the sign vocabulary they already know.

10. Conclusion
This chapter has identified elements that may be seen in work that is valued and judged
to be poetry within Deaf communities, and produced by recognised sign language
poets. In-depth and careful study of the poems provides a wealth of insight into the
languages and cultures of Deaf people around the world, some of which may have
parallels with poetry in other spoken and written languages. Different poets highlight
different elements in their compositions and performances. All of them, however, pro-
duce highly creative, strongly visual, entertaining, and Deaf-affirming work, using sign
language in novel and noticeable ways. Neologisms and characterisation through role
shift work with repetition of a range of elements to create powerful poetic effects. With
increasing recognition of the genre and interest in its performance, we may expect to
see much more of this beautiful art form in years to come.

11. Literature

Bahan, Ben
2006 Face to Face Tradition in the American Deaf Community. In: Bauman, H.-Dirksen L./
Nelson, Jennifer/Rose, Heidi (eds.), Signing the Body Poetic. Berkeley, CA: University
of California Press, 21⫺50.
Bascom, William R.
1954 Four Functions of Folklore. In: Dundes, Alan (ed.), The Study of Folklore. Englewood
Cliffs, NJ: Prentice Hall, 279⫺298.
Bauman, H.-Dirksen L./Nelson, Jennifer/Rose, Heidi
2006 Introduction. In: Bauman, H.-Dirksen L./Nelson, Jennifer/Rose, Heidi (eds.), Signing
the Body Poetic. Berkeley, CA: University of California Press, 1⫺18.
Bechter, Frank
2008 The Deaf Convert Culture and Its Lessons for Deaf Theory. In: Bauman, H.-Dirksen
L. (ed.), Open Your Eyes: Deaf Studies Talking. Minneapolis: Minnesota University
Press, 60⫺82.
Blondel, Marion/Miller, Christopher
2000 Rhythmic Structures in French Sign Language (LSF) Nursery Rhymes. In: Sign Language & Linguistics 3(1), 59⫺77.
Blondel, Marion/Miller, Christopher
2001 Movement and Rhythm in Nursery Rhymes in LSF. In: Sign Language Studies 2, 24⫺61.
Booker, Christopher
2004 The Seven Basic Plots ⫺ Why We Tell Stories. London: Continuum.
Brennan, Mary
1990 Word Formation in British Sign Language. Stockholm: University of Stockholm Press.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brien, David
1992 Dictionary of British Sign Language/English. London: Faber & Faber.
Carmel, Simon
1996 Deaf Folklore. In: Brunvand, Jan H. (ed.), American Folklore: An Encyclopedia. Lon-
don: Garland Publishing, 200⫺202.
Christie, Karen/Wilkins, Dorothy M.
2007 Themes and Symbols in ASL Poetry: Resistance, Affirmation, and Liberation. In: Deaf
Worlds 22(3), 1⫺49.
Crasborn, Onno
2006 A Linguistic Analysis of the Use of the Two Hands in Sign Language Poetry. In: Weijer,
Jeroen van de/Los, Bettelou (eds.), Linguistics in the Netherlands 2006. Amsterdam:
Benjamins, 65⫺77.
Dundes, Alan
1965 What Is Folklore? In: Dundes, Alan (ed.), The Study of Folklore. Englewood Cliffs, NJ:
Prentice Hall, 1⫺6.
Esmail, Jennifer
2008 The Power of Deaf Poetry: The Exhibition of Literacy and Nineteenth Century Sign
Language Debates. In: Sign Language Studies 8(4), 348⫺368.
Hall, Stephanie
1989 ‘The Deaf Club is Like a Second Home’: An Ethnography of Folklore Communication
in American Sign Language. PhD Dissertation, University of Pennsylvania.
Kaneko, Michiko
2008 The Poetics of Sign Language Haiku. PhD Dissertation, Centre for Deaf Studies, Uni-
versity of Bristol.
Kaneko, Michiko
2011 Alliteration in Sign Language Poetry. In: Roper, Jonathan (ed.), Alliteration in Culture.
Basingstoke: Palgrave Macmillan, 231⫺246.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Krentz, Christopher
2006 The Camera as Printing Press. How Film Has Influenced ASL Literature. In: Bauman,
H.-Dirksen L./Nelson, Jennifer/Rose, Heidi (eds.), Signing the Body Poetic. Berkeley,
CA: University of California Press, 51⫺70.
Kuntze, Marlon
2008 Turning Literacy Inside out. In: Bauman, H.-Dirksen L. (ed.), Open Your Eyes: Deaf
Studies Talking. Minneapolis: University of Minnesota Press, 146⫺157.
Lakoff, George/Johnson, Mark
1980 Metaphors We Live by. Chicago: University of Chicago Press.
Leech, Geoffrey
1969 A Linguistic Guide to English Poetry. London: Longman.
Miles, Dorothy
1976 Gestures: Poetry in Sign Language. Northridge, CA: Joyce Motion Picture Co.
Miles, Dorothy
1988 Bright Memory. Middlesex: British Deaf History Society.
Morgan, Theresa
2008 Best Behaviour. In: Times Literary Supplement, February 1st, 25.
Napoli, Donna Jo/Wu, Jeff
2003 Morpheme Structure Constraints on Two-handed Signs in American Sign Language:
Notions of Symmetry. In: Sign Language & Linguistics 6(2), 123⫺205.
Olrik, Axel
1909 Epic Laws of Folk Narrative. In: Dundes, Alan (ed.), The Study of Folklore. Englewood
Cliffs, NJ: Prentice Hall, 129⫺141.
Peters, Cynthia
2000 Deaf American Literature: From Carnival to the Canon. Washington, DC: Gallaudet
University Press.
Rose, Heidi
1994 Stylistic Features in American Sign Language Literature. In: Text and Performance
Quarterly 14(2), 144⫺157.
Russo, Tommaso/Giuranna, Rosaria/Pizzuto, Elena
2001 Italian Sign Language (LIS) Poetry: Iconic Properties and Structural Regularities. In:
Sign Language Studies 2(1), 84⫺112.
Rutherford, Susan
1995 A Study of American Deaf Folklore. Silver Spring, MD: Linstok Press.
Smith, Jennifer/Sutton-Spence, Rachel
2007 What Is the Deaflore of the British Deaf Community? In: Deaf Worlds 23, 46⫺69.
Sutton-Spence, Rachel
2001 Phonological ‘Deviance’ in British Sign Language Poetry. In: Sign Language Studies
2(1), 62⫺83.
Sutton-Spence, Rachel
2005 Analysing Sign Language Poetry. Basingstoke, UK: Palgrave Macmillan.
Sutton-Spence, Rachel/Kaneko, Michiko
2007 Symmetry in Sign Language Poetry. In: Sign Language Studies 7(3), 284⫺318.
Sutton-Spence, Rachel/Napoli, Donna Jo
2010 Anthropomorphism in Sign Languages: A Look at Poetry and Storytelling with a Focus
on British Sign Language. In: Sign Language Studies 10(4), 442⫺475.
Sutton-Spence, Rachel/Quadros, Ronice M. de
2005 Sign Language Poetry and Deaf Identity. In: Sign Language & Linguistics 8(1/2), 175⫺210.
Taub, Sarah
2001 Complex Superposition of Metaphors in an ASL Poem. In: Dively, Valerie/Metzger,
Melanie/Taub, Sarah/Baer, Anne Marie (eds.), Signed Languages: Discoveries from In-
ternational Research. Washington, DC: Gallaudet University Press, 197⫺230.
Valli, Clayton
1993 Poetics of American Sign Language Poetry. PhD Dissertation, Union Institute Gradu-
ate School.
Vollhaber, Tomas
2007 Zeig es ihnen! ⫺ Haiku und Gebärdensprachen. In: Das Zeichen 76, 213⫺222.
Wilcox, Phyllis
2000 Metaphor in American Sign Language. Washington, DC: Gallaudet University Press.

Literature/poetry on video or DVD


Emmerik, Wim
1995 Poezie in Gebarentaal 1. Amsterdam: Het Komplex (Video, 33min).
Giuranna, Rosaria/Giuranna, Giuseppe
2000 Seven Poems in Italian Sign Language (LIS). Rome: Graphic Service, Istituto di Psicolo-
gia, Consiglio Nazionale delle Ricerche.
The Life and Deaf Association
2006 Life and Deaf. London: The Life and Deaf Association (DVD).
Miles, Dorothy (1999), In DVD Accompanying: Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Pimenta, Nelson
2003 LSB Vídeo. Rio de Janeiro: Editora Abril.
Scott, Paul
2006 Sign Poetry. Coleford, Glos: Forest (DVD).
Smith, June (1999), In DVD Accompanying: Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge Uni-
versity Press.
Valli, Clayton
1995 ASL Poetry: Selected Works of Clayton Valli. San Diego, CA: Dawn Sign Press.

Poems mentioned in the text available at www.bristol.ac.uk/bslpoetryanthology


(Anthology made possible by AHRC grant AH/G011672/1)
Carter, Richard
Jack in the Box; Operation; Diamonds are Forever; Sam’s Birthday
[click on Poems > Poems by Professionals]
Gibson, Maria
Kettle
[click on Poems > Poems by Participants > Festival 2006]
Howard, Nigel
Deaf
[click on Poems > Poems by Participants > Festival 2006]
Miles, Dorothy
Our Dumb Friends; Seasons
[click on Poems > Poems by Dot Miles]
Scott, Paul
Five Senses; The Tree; Three Queens; Too Busy to Hug
[click on Poems > Poems by Professionals]

Rachel Sutton-Spence, Bristol (United Kingdom)


IX. Handling sign language data

42. Data collection


1. Introduction
2. Introspection
3. Data elicitation
4. Sign language corpus projects
5. Informant selection
6. Video-recording data
7. Conclusion
8. Literature

Abstract
This chapter deals with data collection within the field of sign language research and
focuses on the collection of sign language data for the purpose of linguistic ⫺ mainly
grammatical ⫺ description. Various data collection techniques using both introspection
and different types of elicitation materials are presented and it is shown how the selection
of data can actually have an impact on the research results. As the use of corpora is an
important recent development within the field of (sign) linguistics, a separate section is
devoted to sign language corpora. Furthermore, two practical issues that are more or
less modality-specific are discussed, i.e. the problem of informant selection and the more
technical aspects of video-recording the data. It is concluded that in general, publications
should contain sufficient information on data collection and informants in order to help
the reader evaluate research findings, discussions, and conclusions.

1. Introduction
Sign language linguistics is a broad research field including several sub-disciplines, such
as (both diachronic and synchronic) phonetics/phonology, morphology, syntax, seman-
tics and pragmatics, sociolinguistics, lexicography, typology, and psycho- and neurolin-
guistics. Each sub-domain in turn comprises a wide range of research topics. For exam-
ple, within sociolinguistics one can discern the linguistic study of language attitudes,
bi- and multilingualism, standardisation, language variation and language change, etc.
Furthermore, each sub-domain and research question may require specific types of
data. Phonological and lexicographical research can focus on individual lexemes, but
morphosyntactic research requires a different approach, using more extensive corpora,
certain language production elicitation methods, and/or introspection. For discourse
related research into turn-taking, on the other hand, a researcher would need to video-
tape dialogues or multi-party meetings. Even within one discipline, it is necessary to
first decide on the research questions and then on which methodologies can be used
to find answers. Since it is not possible to deal with all aspects of linguistic research in
this chapter, we have decided to focus on data collection for the purpose of linguistic
description.
In general, (sign language) linguists claim to be using either qualitative or quantita-
tive methodologies and regard these methodologies as two totally different (often in-
compatible) approaches. However, we prefer to talk about a continuum of research
methodologies from qualitative to quantitative approaches rather than a dichotomy.
At one end of the continuum lies introspection, the ultimate qualitative methodology (see section 2); at the other end lies experimentation, the typically quantitative one.
been used in studies of linguistic description of sign languages. In between, there are
mainly methods of observation and focused description on the basis of systematic elici-
tation (see section 3). Next, and newer to the field of sign language research, there are
corpus-based studies where a (relatively) large corpus is mined for examples of struc-
tures and co-occurrences of items that then constitute the data for analysis (see section
4). When designing a study, it is also very important to think about the selection of the
informants (see section 5) and to take into account the more technical aspects of data
collection (see section 6).
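
By way of illustration only (the miniature “corpus”, its gloss conventions, and the utterance-per-line format below are hypothetical; real projects export far richer annotation files from tools such as ELAN), mining a gloss-annotated corpus for co-occurrences of items might look as follows:

from collections import Counter
from itertools import combinations

# Hypothetical corpus export: one utterance per line, one gloss per
# whitespace-separated token.
corpus = [
    "index-1 want go-to school",
    "school index-1 like",
    "index-2 go-to school yesterday",
]

# Count how often each pair of glosses co-occurs within an utterance;
# frequent pairs are candidate constructions for closer analysis.
pair_counts = Counter()
for utterance in corpus:
    glosses = sorted(set(utterance.split()))
    pair_counts.update(combinations(glosses, 2))

for pair, frequency in pair_counts.most_common(3):
    print(pair, frequency)

The output simply lists the most frequent gloss pairs; any real study would of course work with much larger corpora and with the annotation categories relevant to its research question.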

2. Introspection

2.1. Value and limitations

According to Larsen-Freeman and Long (1991, 15) “[p]erhaps the ultimate qualitative
study is an introspective one” in which subjects (often the researchers themselves)
examine their own linguistic behaviour. In linguistics (including sign language linguis-
tics) this methodology has frequently been used for investigating grammaticality judg-
ments by means of tapping the intuitions of the “ideal native speaker” (Chomsky
1965). Schütze (1996, 2) gives some reasons why such an approach can be useful:

⫺ Certain rare constructions are sometimes very hard to elicit and hardly ever occur
in a corpus of texts. In this case, it is easier to present a native speaker with the
construction studied and ask him/her about grammaticality and/or acceptability.
⫺ A corpus of texts or elicited data cannot give negative information, that is, those
data cannot tell the researcher that a certain construction is ungrammatical and/or
unacceptable.
⫺ Through tapping a native speaker’s intuitions, performance problems in spontane-
ous speech, such as slips of the tongue or incomplete utterances, can be weeded out.

At the same time, Schütze (1996, 3⫺6) acknowledges that introspection as a methodol-
ogy has also attracted a great deal of criticism:

⫺ Since the elicitation situation is artificial, an informant’s behaviour can be entirely
different from what s/he would normally do in everyday conversation.
⫺ Linguistic elicitation as it has been done in the past few decades does not follow
the procedures of psychological experimentation since the data gathering has been
too informal. Sometimes researchers only use their own intuitions as data, but in
Labov’s terms: “Linguists cannot continue to produce theory and data at the same
time” (1972, 199). Moreover “[b]eing a native speaker doesn’t confer papal infalli-
bility on one’s intuitive judgments” (Raven McDavid, quoted in Paikeday 1985).
⫺ Basically, grammaticality judgments are another type of performance. Although
they supposedly tap linguistic competence, linguistic intuitions “are derived and
rather artificial psycholinguistic phenomena which develop late in language acquisi-
tion […] and are very dependent on explicit teaching and instruction” (Levelt et al.
1977, in Schütze 1996).

The last remark in particular is highly relevant for sign language linguistics, since many,
if not most, native signers will not have received any explicit teaching and instruction
in their own sign language when they were at school. This fact in combination with
the scarcity or complete lack of codification of many sign languages and the atypical
acquisition process of sign languages in many communities (which results in a wide
variety of competencies in these communities) raises the question to what extent it is
possible to tap the linguistic intuitions of native signers in depth (see also section 5 on
informant selection).
Schütze himself proposes a modified approach in order to answer the above criti-
cism. The central idea of his proposal is that one should investigate not only a single
native speaker’s linguistic intuitions, but rather those of a group of native speakers:

I argue […] that there is much to be gained by applying the experimental methodology of
social science to the gathering of grammaticality judgments, and that in the absence of
such practices our data might well be suspect. Eliminating or controlling for confounding
factors requires us to have some idea of what those factors might be, and such an under-
standing can only be gained by systematic study of the judgment process. Finally, I argue
that by studying interspeaker variation rather than ignoring it (by treating only the majority
dialect or one’s own idiolect), one uncovers interesting facts. (Schütze 1996, 9)

Clearly, caution remains necessary. Pateman (1987, 100), for instance, argues that “it
is clear and admitted that intuitions of grammaticality are liable to all kinds of interfer-
ence ‘on the way up’ to the level at which they are given as responses to questions. In
particular, they are liable to interference from social judgments of linguistic accepta-
bility”.

2.2. Techniques

Various techniques have been used to tap an informant’s intuitions about the linguistic
issue under scrutiny. Some of them will be discussed in what follows, but this is certainly
not an exhaustive list.

(i) Error recognition and correction


Error recognition is a fairly common task in linguistics generally, but it has not been used all that frequently in sign language research. Here informants are presented with a number of utterances and are asked to detect possible errors and to correct them if there are any.
However, since many sign languages have not yet (or hardly) been codified, and
since many sign language users have not been educated in their sign language,
this may prove to be a difficult task for certain informants (see above). Therefore,
caution is warranted here.
(ii) Grammaticality judgments
In this type of task, informants are presented with a number of utterances and
are asked whether they would consider them grammatically correct and/or appro-
priate or not. If a negative reply is given, informants can be asked to correct the
utterance as well. An example from sign language research is a task in which a participant is presented with a number of classifier handshapes embedded in the same classifier construction and asked whether each particular classifier handshape is appropriate/acceptable in the context provided.
An extension of this task would be to vary certain aspects of the execution of
a sign (for instance, the handshape, the location, the rate, the manner, the non-
manual aspects, etc.) and to ask the informants what the consequences of the
change(s) actually are (morphologically, semantically, etc.) rather than just asking
them whether the modified production would still be grammatically correct and/
or acceptable.
(iii) Semantic judgments
Informants can be asked what the exact meaning of a certain lexeme is and in
which contexts it would typically occur or in which contexts and situations it would
be considered appropriate. In sign language research, informants can also be asked
whether a certain manual production would be considered a lexical, conventional
sign, or whether it is rather a polycomponential construction.
(iv) Other judgment tasks
Informants can also be asked to evaluate whether certain utterances, lexemes,
etc. are appropriate for a given discourse situation (for instance, with respect to
politeness, style, genre, and/or register). Furthermore, they could be asked to in-
trospect on the speech act force of a certain utterance. To our knowledge, this
type of task has not been used all that frequently in sign language research. What
has been done quite frequently in sign language research though, is checking back
with informants by asking them to introspect on their own productions of certain
elicited data and/or by asking (a group of) native signers to introspect on certain
elicited data (and/or the researchers’ analyses) (see also section 3.3).

3. Data elicitation
In the first part of this section, some examples of tasks which can be used for data
elicitation will briefly be explained. A number of these have been used quite exten-
sively by various researchers investigating different sign languages while others have
been used far less frequently (cf. Hong et al. 2009). The discussion proceeds from tasks
with less control exerted by the researcher to tasks with more control. The second part
discusses certain decisions with respect to data collection and the impact these deci-
sions can have on the results obtained. Finally, an integrated approach, in which various
methodologies are used in sequence, is presented.

3.1. Elicitation techniques and materials

3.1.1. Recording natural language use in its context

On the “tasks with less ⫺ or no ⫺ control” end of the continuum, one finds the
recording of traditional narratives in their appropriate context. One could, for example,
videotape the after-dinner speech of the Deaf club’s president at the New Year’s Eve
party as an example of a quasi-literal oratorical style involving the appropriate adjust-
ments for a large room and the formality of the occasion. A priest’s sermon would be
a similar example. In studies of language acquisition, videotaping bathtime play is a
good way to collect data from parent-child interaction (Slobin/Hoiting/Frishberg, per-
sonal communication). In this context, it is very important for the researcher to be
aware of the “Observer’s Paradox”, as first mentioned by Labov in the late 1960s (e.g.
Labov 1969, 1972). Labov argues that even if the observer is very careful not to influ-
ence the linguistic activity, the mere presence of an observer will have an impact on
the participants, who are likely to produce utterances in a manner different from when
the observer is not present.

3.1.2. Free and guided composition

In free composition, the researcher merely provides the informant with a topic and
asks him/her to talk about that topic. Again, there is little control although it is possible
to use this task to elicit particular structures. An obvious example is to ask an inform-
ant about his/her past experiences, in order to get past time references in the data. An
example of guided composition that has been used in sign language research is asking
informants to draw their own family tree and talk about family relations in order to
elicit kinship terms.

3.1.3. Role play and simulation games

Role play and simulation games are tasks which can also easily be used to elicit particu-
lar grammatical structures. If one informant is told to assume the role of interviewer
and another is the interviewee (a famous athlete, for instance), the elicited data are
expected to contain many questions. Creative researchers can certainly invent other
types of role play yielding different grammatical structures.
Simulation games are usually played on a larger scale (with more participants), but
are less well-defined in that the players only get a prompt and have to improvise as
the conversation progresses (for example, they have to simulate a meeting of the board
of a Deaf club or a family birthday party). As such, the researcher does not have a lot
of control, but this procedure can nevertheless yield focused data (to look at turn-
taking or register variation, for instance).

3.1.4. Communication games

Communication games have also been used in sign language research to elicit produc-
tion data. An example is a game played by two people who are asked to look at
drawings which contain a number of (sometimes subtle) differences. The players can-
not see each other’s drawings and have to try to detect what exactly those differences
are by asking questions. Other possibilities include popular guessing games like I spy …
or a game played between a group of people in which one participant thinks of a
famous person and the others have to guess the identity of this person by asking yes/
no questions (and other variants of this game).

3.1.5. Story retelling

In sign language research, story retelling is commonly used for data elicitation. Here
we present four forms of story retelling: (i) picture story retelling, (ii) film story retelling,
(iii) the retelling of written stories, and (iv) the retelling of signed stories.

(i) Picture story retelling

In some picture story elicitation tasks, informants are presented with a picture story
made up of drawings and are asked to describe the depicted events. Normally such
picture stories do not contain any type of linguistic information, that is, there is no
written language accompanying the pictures. The following stories have been quite
widely used in sign language research:

The Horse Story

The Horse Story (Hickmann 2003) was originally used in spoken language acquisition
research but has also been used to elicit sign language data from adult signers, espe-
cially but not exclusively with a view to crosslinguistic comparison, as well as in re-
search on ‘homesign’ (see chapter 26). It is a rather short picture story made up of five
drawings about a horse that wants to jump over a fence in order to be with a cow in
an adjacent meadow. However, the horse hits the fence, hurts its leg, and falls down.
A little bird has witnessed the scene and flies off to get a first-aid kit. This is then used
by the cow to bandage up the horse’s leg.

The Snowman
A longer picture story with a longer history of being used for the elicitation of sign
language data is The Snowman, a children’s book by Raymond Briggs, first published
in 1978 and turned into an animated film in 1982. The story is about a boy who makes
a snowman that comes to life the following night. A large part of the story deals with
the boy showing the snowman appliances, toys, and other bric-a-brac in the boy’s
house, while they are trying to keep very quiet so as not to wake up the boy’s parents.
Then the boy and the snowman set out on a flight over the boy’s town, over houses
and large buildings, before arriving at the sea. While they are looking at the sea, the
sun starts to rise and they have to return home. The next morning, the boy wakes up to find the
snowman melted. This is the story as it appears in the book; the film has additional
parts, including a snowmen’s party and a meeting with Father Christmas and his rein-
deer.

Frog, Where Are You?

Another wordless picture story often used to elicit sign (and spoken) language narra-
tives is Frog, Where Are You by Mercer Mayer, published in 1969. This story is about
a boy who keeps a frog in a jar. One night, however, the frog escapes and
the boy, accompanied by his dog, goes looking for it in the woods. Before finding the
frog, they go through various adventures.

(ii) Film story retelling

In addition to narratives elicited by means of drawings, there is also film story retelling:
informants are shown animated cartoons or (part of) a film and are asked to re-tell
what they have just seen. Usually, cartoons or films used to elicit sign language narra-
tives contain little or no spoken or written language. Examples include clips taken
from The Simpsons, Wallace and Gromit, The Pink Panther, and Tweety Bird & Syl-
vester cartoons as well as short episodes from Die Sendung mit der Maus, a German
children’s television series featuring a large personified mouse and a smaller personi-
fied elephant as the main protagonists. All of these animated cartoons were produced
to be shown on television. There are also films that were made specifically for use in
linguistic research. A well-known example is The Pear Story, a six-minute film devel-
oped by Wallace Chafe and his team in the mid-1970s to elicit narratives from speakers
around the world (Chafe 1980). The film shows a man harvesting pears, which are
stolen by a boy on a bike. The boy has some other adventures with other children,
before the farmer discovers that his pears are missing. The film includes sound effects
but no words. The Pear Story has also been used in sign language research.

(iii) Retelling of written stories

There are some examples of signed narratives elicited by means of written stories. In
the context of Case Study 4: Sign Languages of the ECHO (European Cultural Heritage
Online) project, for example, stories from Aesop’s Fables in written English, Swedish,
and Dutch were used to elicit narratives in British Sign Language (BSL), Swedish Sign
Language (SSL), and Sign Language of the Netherlands (NGT). Working with this
type of translated text can have two major drawbacks. First, the (morpho)syntax of
the target language may be influenced by the source language, and second, one needs
to make sure that informants have a (near-)native proficiency in both languages. At
the same time, however, working with parallel corpora of translated texts can be inter-
esting for other purposes, e.g. for translation studies.

(iv) Retelling of signed stories

Some of the NGT-fables mentioned above were used as elicitation materials during
more recent NGT-data collection sessions: signers were shown the signed fables and
asked to retell them (Crasborn/Zwitserlood/Ros 2008). The resulting retellings can
then in turn be analysed for the purpose of linguistic description.

3.1.6. Video clip description

In the 1970s, Supalla created elicitation materials designed to elicit polycomponential
verbs of motion and location (Supalla 1982). The original materials, known as the Verbs
of Motion Production Test (VMP), include some 120 very short video clips showing
objects moving in specific ways. Informants (American deaf children in the original
Supalla (1982) study) are asked to watch the animated scenes and to describe the
movement of the object shown in the clip. The VMP task can easily be used to study
verbs of motion and location in other sign languages and/or produced by other groups
of informants, although Schembri (2001, 156) notes that the task may be of less use
with signers from non-Western cultures because the objects include items that may not
be familiar to members of these cultures. There is a shorter version of the VMP task
which consists of 80 coded items and five practice items. This version is included as
one of twelve tasks in the Test Battery for American Sign Language Morphology and
Syntax (Supalla et al., no date). Both the longer and the short version of the VMP task
have been used in a number of studies on different sign languages and are still used
today, for example, in the context of some of the corpus projects discussed in section 4.
A somewhat comparable set of stimuli are the ECOM clips from the Max Planck
Institute for Psycholinguistics (Nijmegen): 74 animations showing geometrical entities
that move and interact. These have also been used in sign language research, mainly
to study classifier constructions. A set of stimuli consisting of 66 videotaped skits of
approximately 3⫺10 seconds depicting real-life actors performing and undergoing cer-
tain actions was used to study indexicality of singular versus plural verbs in American
Sign Language (Cormier 2002).

3.1.7. Picture description

Picture description may take the form of a question-and-answer session. Participants
are asked to look at a picture or a series of pictures (or drawings) and then answer
questions designed to elicit particular structures under study. This is a fairly common
procedure in lexicographical research, but has also been used to target certain gram-
matical patterns. In such a question-and-answer session, there is linguistic interaction
between the informant and at least one other person. In another task involving picture
description, the signer describes a specific picture to an interlocutor who subsequently
has to select the correct picture (i.e. the picture described by the signer) from a series
of (almost identical) pictures or drawings. This elicitation procedure is often used to
elicit specific forms or structures, e.g. plural forms, locative constructions, or classifier
constructions. A well-known example in sign language research is the by now classic
study on word order in Italian Sign Language, for which Volterra et al. (1984) designed
elicitation materials. Since then, these materials have been used for the analysis of
constituent order in declarative sentences in a number of other sign languages (John-
ston et al. 2007).
In the Volterra et al. task, eighteen pairs of drawings with only one contrastive
element (e.g. ‘A cat is under a chair’ versus ‘A cat is on a chair’) are used to elicit
sentences describing three distinct states of affairs: six locative states of affairs (e.g.
‘The tree is behind/in front of the house’), six non-reversible states of affairs (e.g. ‘The
boy/girl eats a piece of cake’), and six reversible states of affairs (e.g. ‘The car is towing
the truck/The truck is towing the car’). The person videotaped is a signer who has the
drawings before him/her, and for each pair, one of the drawings is marked with an
arrow. The interlocutor, another signer who is not being videotaped, has the same
drawings, but without arrows. The first signer is asked to sign one sentence describing
the drawing marked with the arrow; the interlocutor is asked to indicate which of the
two drawings of each pair is being described.
The main purpose of studies using this elicitation task has been to analyse whether
the sign language under investigation exhibits systematic ordering of constituents in
declarative utterances that contain two arguments, and if this is the case, to determine
the patterns that occur.
A variant of the Volterra et al. task makes use of elicitation materials that consist
of sets of pictures, e.g. four pictures, with only one deviant picture. The signer is asked
to describe the picture that is different. This task may, for example, be used to elicit
negative constructions, when the relevant picture differs from the others in that there
is something missing.

3.1.8. Elicited translation

In elicited translation, the researcher provides the informant with an isolated utterance
in one language (usually the surrounding spoken language, but it could also be another
sign language) and asks the informant to translate the utterance into his/her own (sign)
language. This procedure has been widely used in sign language research, especially in
its early days, but has more recently been regarded with suspicion as it carries the
risk of interference from the source language into the target language. Conse-
quently, (mostly morphosyntactic) linguistic descriptions of target sign language struc-
tures elicited by means of this method may be less valid.
A slightly less controlled form of elicited translation consists in presenting inform-
ants with verbs in a written language and asking them to produce a complete signed
utterance containing the same verb. In order to further minimize possible interference
from the written language, these utterances can subsequently be shown to another
informant who is asked to copy the utterance. It is the final utterance which is then
used for the analysis. This would be an example of elicited imitation (see next sub-sec-
tion).

3.1.9. Elicited imitation

In elicited imitation, the researcher produces an utterance containing a certain linguis-
tic structure and asks the informant to repeat what s/he has just produced. If the utter-
ance is long enough, the informant will not be able to rely on working memory, but
will have to rely on semantic and syntactic knowledge of the language. To our knowl-
edge, this procedure has not been used in sign language research yet, but it could yield
interesting results when executed correctly. One could imagine that this procedure
might be used to study non-manuals, for instance.

3.1.10. Completion task

In a task fairly similar to the previous one, informants are asked to complete an
utterance started by the researcher. This type of task can be used to study plural
formation, for instance. The researcher signs something like “I have one daughter, but
John has …” (three daughters). As far as we know, this technique has only rarely been
used in sign language research.

3.1.11. Structured exercises

In structured exercises, informants are asked to produce certain sentence structures in
a predetermined manner. Informants can be presented with two clauses, for instance,
and asked to turn them into one complex sentence (e.g. by means of embedding one
of the clauses as a relative clause into the other), or can be asked to turn a positive
utterance into a negative one. Again, this technique has been used in sign language
research, but certainly not on a large scale.

3.2. Data selection and impact on results

The selection of data can, of course, have a major impact on research results. When
examining the degree of similarity across the grammars of different sign languages, for
instance, looking at elicited utterances produced in isolation may lead to a conclusion
which is very different from the overall picture one would get when comparing narra-
tives resulting from picture story descriptions. The latter type of data contains many
instances where the signer decides to “tell by showing” (“dire en montrant”; Cuxac
2000), and it seems likely that the resulting prominence of visual imagery in the narra-
tives ⫺ among other issues ⫺ yields more similarity across sign languages (see, for
instance, Vermeerbergen (2006) for a more comprehensive account). In general, the
strategy of ‘telling by showing’ is (far) less prevalent in isolated declarative sentences,
and it is in these constructions that we find more differences between sign
languages (Van Herreweghe/Vermeerbergen 2008).
The nature of the data one works with may also influence how one approaches the
analysis of a sign language: does one opt for a more ‘oral language compatible view’
or rather for a ‘sign language differential view’?

On the one hand, there is the oral language compatibility view. This presupposes that most
of SL structure is in principle compatible with ordinary linguistic concepts. On the other
hand, there is the SL differential view. This is based on the hypothesis that SL is so unique
in structure that its description should not be primarily modelled on oral language analo-
gies. (Karlsson 1984, 149 f.)

Simplifying somewhat, it could be argued that the question whether ‘spoken language
tools’, that is, theories, categories, terminology, etc. developed and used in spoken lan-
guage research, are appropriate and/or sufficient for the analysis and description of
sign languages will receive different answers depending on whether one analyzes the
signed production of a deaf comedian or a corpus consisting of single sentences trans-
lated from a spoken language into a sign language. A similar observation can be made
with regard to the relationship between the choice of data and the issue of sign lan-
guages as homogeneous systems or as basically heterogeneous systems in which mean-
ings are conveyed using a combination of elements, including linguistic elements but
also components traditionally regarded as not being linguistic in nature.

3.3. An integrated approach

When it comes to studying a certain aspect of the linguistic structure of a sign language,
we would like to maintain that there is much to be gained from approaching the study
by using a combination of the above-mentioned methodologies and techniques. An
example of such an integrated line of research for the study of negatives and interroga-
tives in a particular sign language might include the following steps:

Step 1: Making an inventory of the instances of negatives and interrogatives in a
previously collected and transcribed corpus of monologues and dialogues in
the sign language studied.
Step 2: Eliciting more focused production data in which negatives and interrogatives
can be expected, such as a role play between signers in which one informant
takes the role of an interviewer (asking questions) and the other of the inter-
viewee (giving negative replies), or by means of communication games.
Step 3: Transcribing the data collected in step 2 and making an inventory of the nega-
tives and interrogatives, followed by an analysis of these occurrences.
Step 4: Checking the analysis against the intuitions of a group of (near-)native signers
by means of introspection.
Step 5: Designing a more controlled judgment study in which one group is confronted
with (what the researchers think are) correct negative and interrogative con-
structions and another with (what the researchers think are) incorrect nega-
tives and interrogatives.
Step 6: Proposing a description of the characteristic properties of negatives and inter-
rogatives in the sign language under scrutiny.

4. Sign language corpus projects

4.1. Why corpus linguistics?

Corpus linguistics is a fairly new branch of linguistic research which has developed
hand in hand with the possibilities offered by increasingly advanced computer technology. In the
past, any set of data on which a linguistic analysis was performed was called a ‘corpus’.
However, with the advent of computer technology and corpus-based linguistics, use of
the term ‘corpus’ has become increasingly restricted to collections of
texts in machine-readable form. Johnston (2009, 18) argues: “Corpus linguistics is
based on the assumption that processing large amounts of annotated texts can reveal
patterns of language use and structure not available to lay user intuitions or even to
expert detailed linguistic analyses of particular texts.” In corpus linguistics, “quantita-
tive analysis goes hand in hand with qualitative analysis” (Leech 2000, 49) since

[e]mpirical linguists are interested in the actual phenomena of language, in the recordings
of spoken and written texts. They apply a bottom-up procedure: from the analysis of indi-
vidual citations, they infer generalizations that lead them to the formulation of abstractions.
The categories they design help them understand differences: different text types, syntactic
oppositions, variations of style, shades of meaning, etc. Their goal is to collect and shape
the linguistic knowledge needed to make a text understandable. (Mahlberg 1996, iv)

The same obviously holds for sign language corpora. However, since they contain face-
to-face interaction, they are more comparable to spoken language corpora than to
written language corpora, and according to Leech (2000, 57),

[t]here are two different ways of designing a spoken corpus in order to achieve ‘representa-
tiveness’. One is to select recordings of speech to represent the various activity types,
contexts, and genres into which spoken discourse can be classified. This may be called
genre-based sampling. A second method is to sample across the population of the speech
community one wishes to represent, in terms of sampling across variables such as region,
gender, age, and socio-economic group, so as to represent a balanced cross-section of the
population of the relevant speech community. This may be called a demographic sampling.

In sign language corpora, it is especially the latter type of sampling that has been done
so far. Moreover, sign language corpora are similar to spoken language corpora (and
not so much to written language corpora) since they are only machine-readable when
transcriptions and annotations are included (for the transcription of sign language data,
we refer the reader to chapter 43).
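
To make Leech’s notion of demographic sampling concrete, the sketch below builds a
stratified recording plan across region, gender, and age group. It is a minimal illustration
only: the variable values and the quota per cell are invented for the example (the age
bands echo those used by Lucas, Bayley, and Valli; see section 4.2) and do not correspond
to any actual corpus design.

```python
from itertools import product

# Hypothetical sampling variables for a demographically sampled sign
# language corpus (the values are invented for illustration).
regions = ["north", "south", "east", "west"]
genders = ["female", "male"]
age_groups = ["15-25", "26-54", "55+"]

SIGNERS_PER_CELL = 2  # invented quota per combination of variables

# Build the full sampling grid: every combination of region, gender,
# and age group becomes one cell of the recording plan.
sampling_plan = [
    {"region": r, "gender": g, "age_group": a, "quota": SIGNERS_PER_CELL}
    for r, g, a in product(regions, genders, age_groups)
]

print(f"{len(sampling_plan)} cells, "
      f"{sum(cell['quota'] for cell in sampling_plan)} signers in total")
```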

4.2. Sign language corpora

In sign language linguistics, corpus linguistics (at least in the more restricted sense of
machine-readable corpora) is still in its infancy, although growing rapidly. Johnston
(2008, 82) expresses the need for sign language corpora as follows:

Signed language corpora will vastly improve peer review of descriptions of signed lan-
guages and make possible, for the first time, a corpus-based approach to signed language
analysis. Corpora are important for the testing of language hypotheses in all language
research at all levels, from phonology through to discourse […]. This is especially true of
deaf signing communities which are also inevitably young minority language communities.
Although introspection and observation can help develop hypotheses regarding language
use and structure, because signed languages lack written forms and well developed commu-
nity-wide standards, and have interrupted transmission and few native speakers, intuitions
and researcher observations may fail in the absence of clear native signer consensus of
phonological or grammatical typicality, markedness or acceptability. The past reliance on
the intuitions of very few informants and isolated textual examples (which have remained
essentially inaccessible to peer review) has been problematic in the field. Research into
signed languages has grown dramatically over the past three to four decades but progress
in the field has been hindered by the resulting obstacles to data sharing and processing.

One of the first (if not the first) large-scale sign language corpus projects is the corpus
of American Sign Language (ASL) collected by Ceil Lucas, Robert Bayley, and their
team (see, for instance, Lucas/Bayley/Valli 2001). In the course of 1995, they collected
data in seven cities in the United States that were considered to be representative
of the major areas of the country: Staunton, Virginia; Frederick, Maryland; Boston,
Massachusetts; Olathe, Kansas/Kansas City, Missouri; New Orleans, Louisiana; Fre-
mont, California; and Bellingham, Washington. All of these cities have thriving com-
munities of ASL users, and some also have residential schools for deaf children and,
as such, long-established Deaf communities. A total of 207 African-American and white working- and
middle-class men and women participated in the project. They could be divided into
three age groups: 15⫺25, 26⫺54, and 55 and up. All had either acquired ASL natively
at home or had learned to sign in residential schools before the age of 5 or 6 (see
Lucas/Bayley/Valli 2001). For each site, at least one contact person was asked to iden-
tify fluent, lifelong ASL users who had to have lived in the community for at least ten
years. The contact persons, deaf themselves and living in the neighborhood, assembled
groups of two to seven signers. At the sites where both white and African-American
signers were interviewed, two contact persons were appointed, one for each commu-
nity. All the data were collected in videotaped sessions that consisted of three parts.
In the first part of each session, approximately one hour of free conversation among
the members of each group was videotaped, without any of the researchers being
present. In a second part, two participants were selected and interviewed in depth by
the deaf researchers. The interviews included topics such as background, social net-
work, and patterns of language use. Finally, 34 pictures were shown to the signers to
elicit signs for the objects or actions represented in the pictures. It was considered to
be very important not to have any hearing researcher present in any of the sessions:
“It has been demonstrated that ASL signers tend to be very sensitive to the audiologi-
cal and ethnic status of an interviewer […]. This sensitivity may be manifested by rapid
switching from ASL to Signed English or contact signing in the presence of a hearing
person.” (Lucas/Bayley 2005, 48). Moreover, the African-American participants were
interviewed by a deaf African-American research assistant, and during the group ses-
sions with African-American participants, no white researchers were present. In total,
data from 62 groups were collected at community centers, at schools for deaf children,
in private homes, and at a public park. At the same time, a cataloguing system and a
computer database were developed to collect and store metadata as well, that is, details
as to when and where each group was interviewed and personal information (name,
age, educational background, occupation, pattern of language use, etc.). Furthermore,
the database also contained details about phonological, lexical, morphological, and
syntactic variation, and further observations about other linguistic features of ASL that
are not necessarily related to variation. The analysis of this corpus has led to numerous
publications about sociolinguistic variation in ASL (see chapter 33 on sociolinguistic
variation).
Since this substantial ASL corpus project, for which the data were collected in 1995,
sign language corpus projects have been initiated in other countries as well, including
Australia, Ireland, The Netherlands, the United Kingdom, Germany, China (Hong
Kong), Italy, Sweden, and France, and more are planned in other places. Some of these
corpus projects also focus on sociolinguistic variation, but most have multiple goals,
and the data obtained can be used not only for linguistic description,
but also for the preservation of older sign language data for future research (i.e. the
documentation of diachronic change) or as authentic materials to be used in sign lan-
guage teaching. The reader can find up-to-date information on these (and
new) corpus projects at the following website: http://www.signlanguagecorpora.org.
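
Once a corpus is machine-readable in this sense, even very simple processing can begin
to reveal the distributional patterns Johnston refers to. The following sketch assumes ⫺
purely hypothetically ⫺ that annotations have been exported to a tab-separated file with
one gloss token per row and columns for the gloss and the signer’s region; it produces
the kind of raw frequency table from which studies of (regional) variation can start.

```python
import csv
from collections import Counter

# Hypothetical export: one annotated token per row, with columns
# "gloss" and "region". Real corpora are annotated in multimedia
# tools (see chapter 43) whose exports would first be converted
# to such a tabular shape.
counts = Counter()
with open("annotations.tsv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        counts[(row["region"], row["gloss"])] += 1

# Print the ten most frequent region/gloss pairs.
for (region, gloss), n in counts.most_common(10):
    print(f"{region}\t{gloss}\t{n}")
```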

4.3. Metadata

When collecting a corpus it is of the utmost importance to also collect and store meta-
data related to the linguistic data gathered. In many recent sign language corpus pro-
jects, the IMDI metadata set is used, an existing standard which has been further
developed in the context of the ECHO project at the Max Planck
Institute for Psycholinguistics in Nijmegen (The Netherlands) (Crasborn/Hanke 2003;
also see www.mpi.nl/IMDI/). This approach is being increasingly used in smaller re-
search projects as well. A good example is presented in Costello, Fernández, and Landa
(2008, 84⫺85):

We video-record our informants in various situations and contexts, such as spontaneous
conversations, controlled interviews and elicitation from stimulus material. Each recording
session is logged in the IMDI database to ensure that all the related metadata are recorded.
The metadata relate to the informant, for example:
⫺ age, place of birth and sex
⫺ hearing status, parents’ hearing status, type of hearing aid used (if any)
⫺ age of exposure to sign language
⫺ place and context of sign language exposure
⫺ primary language of communication within the family
⫺ schooling (age, educational program, type of school)
and also to the specific context of the recording session, such as:
⫺ type of communicative act (dialogue, storytelling, question and answer)
⫺ degree of formality
⫺ place and social context
⫺ topic of the content.

Another important piece of information to include in the metadata is the informant’s
birth order and the hearing status of any siblings. There are, for instance, clear differ-
ences between the youngest or oldest deaf person in a family with hearing parents and
three deaf siblings and the youngest or oldest deaf person in a family with hearing
parents and three hearing siblings.
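
As an illustration of what such a session record amounts to, the sketch below models a
subset of these fields in plain Python. The field names paraphrase the categories listed
by Costello, Fernández, and Landa and are not the official IMDI element names; all
values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Informant:
    # Informant-related metadata (a subset of the fields listed above;
    # names are illustrative, not the official IMDI vocabulary).
    age: int
    place_of_birth: str
    sex: str
    hearing_status: str
    parents_hearing_status: str
    age_of_sign_language_exposure: int
    birth_order: int  # see the remark on birth order and siblings above
    siblings_hearing_status: list = field(default_factory=list)

@dataclass
class Session:
    # Metadata on the recording session itself.
    communicative_act: str  # e.g. "dialogue", "storytelling"
    formality: str
    place: str
    topic: str
    informants: list = field(default_factory=list)

session = Session(
    communicative_act="storytelling",
    formality="informal",
    place="local Deaf club",
    topic="picture story retelling",
    informants=[Informant(34, "Bilbao", "female", "deaf", "hearing",
                          3, 2, ["hearing", "deaf", "hearing"])],
)
print(session)
```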

5. Informant selection
Not all users of a specific language show the same level of language competence. This
is probably true of all language communities and of all languages, but it is even more
true of sign language communities. This is, of course, related to the fact that across the
world, 90 to 95 percent (or more, cf. Johnston 2004) of deaf children are born to
hearing parents, who are very unlikely to know the local sign language. Most often
deaf children only start acquiring a sign language when they start going to a deaf
school. This may be early in life, but it may also be (very) late or even never, either
because the deaf child’s parents opt for a strictly oral education with no contact with
a sign language or because the child does not go to school at all. Consequently, only a
small minority of signers can be labelled “mother tongue speaker” in the strict sense
of the term, and in most cases, these native signers’ signing parents will not themselves
be (or have been) native signers. When deaf parents are late learners of a sign language,
for instance, when they did not learn to sign until they were in their teens, this may be
reflected in their sign language skills, which may in turn have an effect on their chil-
dren’s sign language production.
In spoken language research, especially in the case of research on the linguistic
structure of a given language, the object of study is considered to be present in its most
natural state in the language production of a native speaker (but see section 2 above).
When studying form and function of a specific grammatical mechanism or structure in
a spoken language, it would indeed be very unusual to analyse the language production
of non-native speakers and/or to ask non-native speakers to provide grammaticality
judgments. The importance of native data has also been maintained for sign language
research, but, as stated by Costello, Fernández, and Landa (2008, 78), “there is no
single agreed-upon definition of native signer, and frequently no explanation at all is
given when the term is used”. The “safest option model of native signers” (Costello/
Fernández/Landa 2008, 79) is the informant who is (at least) a second-generation deaf-
of-deaf signer. However, in small Deaf communities, such ideal informants may be
very few in number. For example, Costello et al. themselves claim that they have not
managed to find even seven second-generation signers in the sign language community
of the Basque Country, a community estimated to include around 5,100 people. John-
ston (2004, 370 f.) mentions attempts to locate deaf children of deaf parents under the
age of nine and claims that it was not possible to locate more than 50 across Australia.
Especially in small communities where there is merely a handful of (possibly) native
signers, researchers may be forced to settle for second best and stipulate a
number of criteria which non-native informants must meet. Such crite-
ria often include:

⫺ early onset of sign language acquisition; often the age of three is mentioned here,
but sometimes also six or seven;
⫺ education in a school for the deaf, sometimes stipulating that this should be a resi-
dential school;
⫺ daily use of the sign language under investigation (e.g. with a deaf signing partner
and/or in a deaf working environment);
⫺ prolonged membership of the Deaf community.

Note that it may actually be advisable to apply these criteria to native signers as well.
At the same time, we would like to make two final comments:

(1) In any community of sign language users, small or large, there are many more non-
native signers than native signers. This means that native signers most often have
non-native signers as their communication partners and this may affect their intui-
tions about language use. It may well be that a certain structure is over-used by
non-native signers so that it comes to be seen as “typical” of or “normal” for the
language, although it is not very prominent in the language production of native
signers. One can even imagine that a structure (e.g. a certain constituent order)
which results from the influence of the spoken majority language and is frequently
used by non-native signers is characterized as “acceptable” by native signers even
though the latter would not use this structure themselves, at least not when signing
to another native language user.
(2) If one wants to get an insight into the mechanisms of specific language practices
within a certain sign language community (e.g. to train the receptive language skills
of sign language interpreter students), it might be desirable in certain sign language
communities not to restrict the linguistic analysis to the language use of third-
generation native signers. Because non-native signers make up the vast majority
of the language community, native signers are not necessarily “typical” representa-
tives of that community.

Natural languages are known to show (sociolinguistic) variation. It seems that for sign
languages, region and age are among the most important determining factors, although
we feel it is safe to say that in most, if not all, sign languages the extent and nature of
variation is not yet fully understood. Thus, variation is another issue that needs to be
taken into account when selecting informants. Concerning regional variation in the
lexicon of Flemish Sign Language (VGT), for example, research has shown that there
are five variants, with the three most centrally located areas having more signs in
common, compared to the two more peripheral provinces. Also, there seems to be an
ongoing spontaneous standardization process with the most central regions “leading
the dance” (Van Herreweghe/Vermeerbergen 2009). Therefore, in order to study a
specific linguistic structure or mechanism in VGT, it is best to include data from all
different regions. Whenever that is not possible, it is important to be very specific
about the regional background of the informants because it may well be the case that
the results of the analysis are valid for one region but not for another.
Finally, we would like to stress the necessity of taking into account the anthropologi-
cal and socio-cultural dimensions of the community the informants belong to. When
working with deaf informants, researchers need to be sensitive to the specific values
and traditions of Deaf culture, which may at times be different from those of the
surrounding culture. Furthermore, when the informants belong to a Deaf community
set within a mainstream community that the researcher is not a member of, this may
raise other issues that need to be taken into consideration (e.g. when selecting elicita-
tion materials). A discussion of these and related complications, however, is beyond
the scope of this chapter.

6. Video-recording data
6.1. Recording conditions
Research on sign languages shares many methodological issues with research on spo-
ken languages but it also involves issues of its own. The fact that data cannot be
audio-recorded but need to be video-recorded is one of these sign language specific
challenges. Especially when recording data to study the structure of the language, but
also when it comes to issues such as sociolinguistic research on variation, one of the
major decisions a researcher needs to make is whether to opt for high-quality recording
or rather to try to minimize the impact of the data collection setting on the language
production of the informants. It is a well-known fact that language users are influenced
by the formality of the setting. Different situations may result in variations in style and
register in the language production. This is equally true for speakers and signers, but
in the latter group, the specific relationship between the sign language and the spoken
language of the surrounding hearing community is an additional factor that needs to
be taken into account. In many countries, sign languages are not yet seen as equal to
spoken languages, but even if a sign language is recognized as a fully-fledged natural
language, it is still a minority language used by a small group of language users sur-
rounded by a much larger group of majority language speakers. As a result, in many
Deaf communities, increased formality often results in increased influence from the
spoken language (Deuchar 1984).
A problem related to this issue is the tendency to accommodate to the (hearing)
interlocutor. This is often done by incorporating as many characteristics of the spoken
language as possible and/or by using structures and mechanisms that are
supposedly more easily understood by people with poor(er) signing skills. For example,
when a Flemish signer is engaged in the Volterra et al. elicitation task (see section 3.1.7)
and needs to describe a picture of a tree in front of a house, s/he may decide to start
the sentence with the two-handed lexical sign house followed by the sign tree and a
simultaneous combination of a ‘fragment buoy’ (Liddell 2003) referring to house on
the non-dominant hand and a ‘classifier’ referring to the tree on the dominant hand,
thereby representing the actual spatial arrangement of the referents involved by the
spatial arrangement of both hands. Alternatively, s/he might describe the same picture
using the three lexical signs tree, in-front-of, and house in sequence, that is, in the
sequential arrangement familiar to speakers of Dutch. In both cases, the result is a
grammatically correct sentence in VGT, but whereas the first sentence involves sign
language specific mechanisms, namely (manual) simultaneity and the use of space to
express the spatial relationship between the two referents, the same is not true for the
second sentence, where the relationship is expressed through the use of a preposition
sign and word order, exactly as in the Dutch equivalent De boom staat voor het huis
(‘the tree is in front of the house’). One way to overcome this problem in an empirical
setting is by engaging native signers to act as conversational partners. However, be-
cause of the already mentioned specific relationship between a sign language and the
majority spoken language, signers may still feel that they should use a more ‘spoken
language compatible’ form of signing in a formal setting (also see the discussion of the
‘Observer’s Paradox’ in section 3.1).
Because of such issues, researchers may try to make the recording situation as
informal and natural as possible. Ways of doing this include:

⫺ organising the data collection in a place familiar to the signer (e.g. at home or in
the local Deaf club);
⫺ providing a deaf conversational partner: This can be someone unknown to the
signer (e.g. a deaf researcher or research assistant, a deaf student), although the
presence of a stranger (especially if it is a highly educated person) may in itself
have an impact on the language production of the informant. It may therefore be
better to work with an interlocutor the signer knows, but at the same time, it should
not be an interlocutor to whom the signer is too closely related (e.g. husband/wife or
sibling) because this may result in a specific language use (known as ‘within-the-
family-jargon’) which may not be representative of the language use in the larger
linguistic community;
⫺ avoiding the presence of hearing people whenever possible;
⫺ only using one (small-size) camera and avoiding the use of additional recording
equipment or lights;
⫺ not using the first ten minutes of what has been videotaped; these first ten minutes
can be devoted to general conversation to make sure that the signer is at ease and
gradually forgets the presence of the camera.

6.2. Technical issues

In certain circumstances, for instance when compiling a corpus for pedagogical reasons,
researchers may opt for maximal technical quality when recording sign language data.
Factors that are known to increase the quality of a recording include the following:

⫺ Clothing: White signers preferably wear dark, plain clothes and black signers light,
plain clothes to make sure there is enough contrast between the hands and the
background when signs are produced on or in front of the torso. Jewellery can be
distracting. If the informant usually wears glasses, it may be necessary to ask him/
her to take off the glasses in order to maximize the visibility of the non-manual
activity (obviously, this is only possible when interaction with an interlocutor is
not required).
⫺ Background: The background can also influence the visibility of the signed utteran-
ces. Consequently, a simple, unpatterned background is a prerequisite, and fre-
quently, a certain shade of blue or green is used. This is related to the use of the
chroma key (a.k.a. bluescreen or greenscreen) technique, in which two images are
mixed: the informant is recorded in front of a blue or a green background
which is later replaced by another image, so that the informant seems to be standing
in front of the other background (see the code sketch after this list). If there is no
intention to apply this technique, then there is no need for a blue or green
background; simply “unpatterned” is good
enough. However, visual distraction in the form of objects present in the signer’s
vicinity should be avoided.
⫺ Posture: When a signer sits down, this may result in a different dimension of the
signing space as compared to the same signer standing upright (and this may be a
very important factor in phonetic or phonological research, for instance).
⫺ Lighting: There clearly needs to be enough foreground lighting. Light sources be-
hind the signer should be avoided as much as possible since they result in low visibility
of facial expressions. Shadows should likewise be avoided.
⫺ Multiple cameras: The number of cameras needed, their position, and what they
focus on will be determined by the specific research question(s); the analysis of
non-manual activity, for example, requires the use of one camera zooming in on
the face of the informant(s) (although nowadays it is also possible to afterwards
electronically zoom in on a selected area within the image).
⫺ Position of the camera(s) in relation to the signer(s): In order to fully capture the
horizontal dimension of the signed production, some researchers avoid full frontal
recording and prefer a slight angle. A top view facilitates the analysis of the relation-
ship between the hands and the body, which may be important when studying the use
of space.
⫺ Use of elicitation materials: The signer should not hold any papers or other things
in his/her hands while signing and should not start to sign while (still) looking at
the materials.
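
For readers unfamiliar with the chroma key technique mentioned under “Background”
above, the sketch below shows its core idea: pixels close to the key colour are detected
and replaced by the corresponding pixels of a substitute background image. This is a
minimal illustration only; production tools refine it considerably (colour-space conversion,
soft mattes, spill suppression).

```python
import numpy as np

def chroma_key(frame, background, key=(0, 255, 0), tolerance=80):
    """Replace pixels near the key colour with the background image.

    frame and background are HxWx3 uint8 RGB arrays of the same shape.
    A pixel is keyed out when its Euclidean distance to the key colour
    falls below the tolerance.
    """
    distance = np.linalg.norm(
        frame.astype(np.int32) - np.array(key, dtype=np.int32), axis=-1
    )
    mask = distance < tolerance      # True where the green screen shows
    result = frame.copy()
    result[mask] = background[mask]  # composite the new background in
    return result

# Toy example: a uniformly green 4x4 "frame" over a white background.
green = np.zeros((4, 4, 3), dtype=np.uint8)
green[..., 1] = 255
white = np.full((4, 4, 3), 255, dtype=np.uint8)
print(chroma_key(green, white)[0, 0])  # -> [255 255 255]
```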

6.3. Issues of anonymity

One major disadvantage of the necessity to video-record sign language production is
related to the issue of anonymity. When presenting or publishing their work, research-
ers may wish to illustrate their findings with sequences or stills taken from the video-
recorded data. However, not all signers like the idea of their face being shown to a
larger public. In the age of online publishing, this problem becomes even more serious.
Obviously, making the signer unrecognisable, for instance, by blurring his/her face ⫺ a
strategy commonly used to anonymise video-taped speakers ⫺ is not an option because
important non-manual information expressed on the face will be lost. It may therefore
be necessary to make use of a model reproducing the examples for the purpose of
dissemination. This solution may be relatively easy for individual signs or for construc-
tions to be reproduced in isolation but may be problematic in the case of longer
stretches of language production. The problem of privacy protection is, of course, also
highly relevant in the case of on-line publication of sign language video recordings and
annotations. This issue cannot be further dealt with here, but we would like to refer
to Crasborn (2008), who discusses developments in internet publishing of sign language
data and related copyright and privacy issues.
The fact that sign language production needs to be video-recorded also has conse-
quences in terms of research design. A well-known research design to study language
attitudes is the “matched guise” technique developed by Lambert and colleagues
(Lambert et al. 1960) to study attitudes towards English and French in Montreal, Ca-
nada. The visual nature of sign languages makes it difficult to apply this technique
when studying sign language attitudes because it will soon be obvious that one and the
same signer is producing two samples in two different languages or variants. Fenn
(1992, in Burns/Matthews/Nolan-Conroy 2001, 189) attempted to overcome this by
selecting physically similar signers, dressed in a similar fashion. However, he encoun-
tered another difficulty since many of his subjects recognized the signers presenting
the language samples.

7. Conclusion
In this chapter, we have attempted to give a brief survey of data collection techniques
using different types of elicitation materials and using corpora. We have also focused
on the importance of deciding which type of data should be used for which type of
analysis. Furthermore, we have discussed the problem of informant selection and some
more technical aspects of video-recording the data. Throughout the chapter, we have
focused on data collection in the sense of collecting sign language data. Sign language
research may also involve other types of data collection, such as questioning signers
on matters related to sign language use or (sign) language attitudes. In this context,
too, the sociolinguistic reality of Deaf communities may require a specific approach.
Matthews (1996, in Burns/Matthews/Nolan-Conroy 2001, 188) describes how he and
his team, because of a very poor response from deaf informants to postal question-
naires, decided to travel around Ireland to meet with members of the Deaf community
face to face. They outlined the aims and objectives of their study (using Irish Sign
Language) and offered informants the possibility of completing the questionnaire
on the spot, giving them the opportunity to provide their responses in Irish Sign Lan-
guage (which were later translated into written English in the questionnaires). Thanks
to this procedure, response rates were much higher.
Finally, we would also like to stress the need for including sufficient information on
data collection and informants in publications in order to help the reader evaluate the
research findings, discussion, and conclusions. It is quite customary to collect and pro-
vide metadata in the context of sociolinguistic research and it has become standard
practice in the larger corpus projects as well, but we would like to encourage the
collection of the above type of information for all linguistic studies, as we are convinced
that this will vastly improve the comparability of studies dealing with different sign
languages or sign language varieties.

8. Literature

Briggs, Raymond
1978 The Snowman. London: Random House.
Burns, Sarah/Matthews, Patrick/Nolan-Conroy, Evelyn
2001 Language Attitudes. In Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. Cam-
bridge: Cambridge University Press, 181⫺216.
Chafe, Wallace L.
1980 The Pear Stories: Cognitive, Cultural, and Linguistic Aspects of Narrative Production.
Norwood, NJ: Ablex.
Chomsky, Noam
1965 Aspects of the Theory of Syntax. Cambridge, MA: The MIT Press.
Cormier, Kearsy
2002 Grammaticization of Indexic Signs: How American Sign Language Expresses Numeros-
ity. PhD Dissertation, University of Texas at Austin.
Costello, Brendan/Fernández, Javier/Landa, Alazne
2008 The Non-(existent) Native Signer: Sign Language Research in a Small Deaf Population.
In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past,
Present and Future. TISLR 9: Forty Five Papers and Three Posters from the 9th Theoreti-
cal Issues in Sign Language Research Conference, Florianopolis, Brazil, December 2006.
Petrópolis/RJ, Brazil: Editora Arara Azul, 77⫺94. [Available at: www.editora-arara-
azul.com.br/EstudosSurdos.php]
Crasborn, Onno
2008 Open Access to Sign Language Corpora. Paper Presented at the 3rd Workshop on the
Representation and Processing of Sign Languages (LREC), Marrakech, Morocco, May
2008 [http://www.lrec-conf.org/proceedings/lrec2008, 33⫺38].
Crasborn, Onno/Hanke, Thomas
2003 Additions to the IMDI Metadata Set for Sign Language Corpora. Agreements at an
ECHO workshop, May 2003, Nijmegen University. [Available at: http://www.let.kun.nl/
sign-lang/echo/docs/SignMetadata_May2003.doc]
Crasborn, Onno/Zwitserlood, Inge/Ros, Johan
2008 Corpus NGT. An Open Access Digital Corpus of Movies with Annotations of Sign Lan-
guage of the Netherlands. Centre for Language Studies, Radboud University Nijmegen.
[Available at: http://www.ru.nl/corpusngt]
Cuxac, Christian
2000 La Langue des Signes Française. Les Voies de l’Iconicité (Faits de Langues No 15⫺16).
Paris: Ophrys.
Deuchar, Margaret
1984 British Sign Language. London: Routledge & Kegan Paul.
Hickmann, Maya
2003 Children’s Discourse: Person, Space and Time Across Languages. Cambridge: Cam-
bridge University Press.
Hong, Sung-Eun/Hanke, Thomas/König, Susanne/Konrad, Reiner/Langer, Gabriele/Rathmann,
Christian
2009 Elicitation Materials and Their Use in Sign Language Linguistics. Poster Presented at
the Sign Language Corpora: Linguistic Issues Workshop, London, July 2009.
Johnston, Trevor
2004 W(h)ither the Deaf Community? Population, Genetics, and the Future of Australian
Sign Language. In: American Annals of the Deaf 148(5), 358⫺375.
Johnston, Trevor
2008 Corpus Linguistics and Signed Languages: No Lemmata, No Corpus. Paper Presented
at the 3rd Workshop on the Representation and Processing of Sign Languages (LREC),
Marrakech, Morocco, May 2008. [http://www.lrec-conf.org/proceedings/lrec2008/, 82⫺87]
Johnston, Trevor
2009 The Reluctant Oracle: Annotating a Sign Language Corpus for Answers to Questions
We Can’t Ask Any Other Way. Abstract of a Paper Presented at the Sign Language
Corpora: Linguistic Issues Workshop, London, July 2009.
Johnston, Trevor/Vermeerbergen, Myriam/Schembri, Adam/Leeson, Lorraine
2007 “Real Data Are Messy”: Considering Cross-linguistic Analysis of Constituent Ordering
in Auslan, VGT, and ISL. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.),
Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de
Gruyter, 163⫺205.
Karlsson, Fred
1984 Structure and Iconicity in Sign Language. In: Loncke, Filip/Boyes-Braem, Penny/Leb-
run, Yvan (eds.), Recent Research on European Sign Languages. Lisse: Swets and Zeit-
linger, 149⫺155.
Labov, William
1969 Contraction, Deletion, and Inherent Variability of the English Copula. In: Language
45, 715⫺762.
Labov, William
1972 Sociolinguistic Patterns. Philadelphia, PA: University of Pennsylvania Press.
Lambert, Wallace E./Hodgson, Richard C./Gardner, Robert C./Fillenbaum, Samuel
1960 Evaluational Reactions to Spoken Language. In: Journal of Abnormal and Social Psy-
chology 60, 44⫺51.
Larsen-Freeman, Diane/Long, Michael H.
1991 An Introduction to Second Language Acquisition Research. London: Longman.
Leech, Geoffrey
2000 Same Grammar or Different Grammar? Contrasting Approaches to the Grammar of
Spoken English Discourse. In: Sarangi, Srikant/Coulthard, Malcolm (eds.), Discourse
and Social Life. Harlow: Pearson Education, 48⫺65.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Lucas, Ceil/Bayley, Robert
2005 Variation in ASL: The Role of Grammatical Function. In: Sign Language Studies 6(1),
38⫺75.
Lucas, Ceil/Bayley, Robert/Valli, Clayton
2001 Sociolinguistic Variation in American Sign Language. Washington, DC: Gallaudet Uni-
versity Press.
Mahlberg, Michaela
1996 Editorial. In: International Journal of Corpus Linguistics 1(1), iii⫺x.
Mayer, Mercer
1969 Frog, Where Are You? New York: Dial Books for Young Readers.
Paikeday, Thomas M.
1985 The Native Speaker Is Dead. Toronto: Paikeday Publishing Inc.
Pateman, Trevor
1987 Language in Mind and Language in Society: Studies in Linguistic Reproduction. Oxford:
Clarendon Press.
Schembri, Adam
2001 Issues in the Analysis of Polycomponential Verbs in Australian Sign Language (Auslan).
PhD Dissertation, University of Sydney, Australia.
Schütze, Carson T.
1996 The Empirical Base of Linguistics. Grammaticality Judgments and Linguistic Methodol-
ogy. Chicago: University of Chicago Press.
Supalla, Ted
1982 Structure and Acquisition of Verbs of Motion and Location in American Sign Language.
PhD Dissertation, University of California, San Diego.
Supalla, Ted/Newport, Elissa/Singleton, Jenny/Supalla, Sam/Metlay, Don/Coulter, Geoffrey
no date The Test Battery for American Sign Language Morphology and Syntax. Manuscript,
University of Rochester.
Van Herreweghe, Mieke/Vermeerbergen, Myriam
2008 Referent Tracking in Two Unrelated Sign Languages and in Home Sign Systems. Paper
Presented at the Workshop “Gestures: A Comparison of Signed and Spoken Lan-
guages” at the 30th Annual Meeting of the German Linguistic Society (DGfS), Bamberg,
February 2008.
Van Herreweghe, Mieke/Vermeerbergen, Myriam
2009 Flemish Sign Language Standardisation. In: Current Issues in Language Planning 10(3),
308⫺326.
Vermeerbergen, Myriam
2006 Past and Current Trends in Sign Language Research. In: Language and Communication
26(2), 168⫺192.
Volterra, Virginia/Corazza, Serena/Radutzky, Elena/Natale, Francesco
1984 Italian Sign Language: The Order of Elements in the Declarative Sentence. In: Loncke,
Filip/Boyes-Braem, Penny/Lebrun, Yvan (eds.), Recent Research on European Sign
Languages. Lisse: Swets and Zeitlinger, 19⫺48.

Mieke Van Herreweghe, Ghent (Belgium)


Myriam Vermeerbergen, Antwerp & Leuven (Belgium)

43. Transcription
1. Introduction
2. Transcription at the level of phonology
3. Transcription at the level of morphology
4. Multimedia tools
5. Conclusion
6. Literature and web resources

Abstract
The international field of sign language linguistics is in need of standardized notation
systems for both form and function. This chapter provides an overview of available
means of notating components of manual signs, non-manual devices, and meaning. At-
tention is also paid to problems of representing simultaneous articulators of hands, face,
and body. A final section provides an overview of several tools of multimedia analysis.
Standardization, in the twenty-first century, requires attention to computer-based storage
and processing of data; numerous links are provided to web-based facilities. Throughout,
the chapter addresses theoretical problems of defining and relating linguistic levels of
analysis in the study of sign languages.
“What is on a transcript will influence and
constrain what generalizations emerge”.
Elinor Ochs (1979, 45)

1. Introduction
Transcription serves a number of functions, such as linguistic analysis, pedagogy, pro-
viding deaf signers with a writing system, creating input to an animation program, and
others. Because this chapter appears in a handbook of sign language linguistics, we
limit ourselves to those notation systems that have played a role in developing and
advancing our understanding of sign languages as linguistic systems. Although most
notation schemes have been devised for the descriptive study of particular sign lan-
guages, here we aim at the goals of an emerging field of sign language linguistics, as
exemplified by other chapters in this volume. The field of sign language linguistics is
rapidly expanding in scope, discovering and describing sign languages around the world
and describing them with greater depth and precision. At this point, successful descrip-
tive and typological work urgently requires consensus on standardized notations of
both form and function.
The study of spoken languages has a long history, and international levels of stand-
ardized notation and analysis have been achieved. We begin with a brief overview of
the sorts of standardization that can be taken as models for sign language linguistics.
In 1888, linguists agreed on a common standard, the International Phonetic Alpha-
bet (IPA), for systematically representing the sound segments of spoken language. Ex-
ample (1) presents a phonetic transcription of an American English utterance in casual
speech, “I’m not gonna go” (Frommer/Finegan 1994, 11).

(1) amnátgunegó

The IPA represents an international consensus on phonological categories. On the level


of morphology, basic categories have been used since antiquity ⫺ in India, Greece,
Rome, and elsewhere ⫺ with various sorts of abbreviations. For the past several gener-
ations, the international linguistic community has agreed on standard terms, such as
SG or sg or Sg for ‘singular’, regardless of varying theoretical persuasions, and with
minimal adjustments for the language of the publication (e.g., Russian ed.č. as a stand-
ard abbreviation for edinstvennoe čislo ‘singular’). In the first issue of Language, in
1925, the Linguistic Society of America established a model of printing foreign lan-
guage examples in italics followed by translations in single quotes. And for the past
four or five decades, an international standard of interlinear morpheme glossing has
become widespread, most recently formulated by The Leipzig Glossing Rules (based,
in part, on Lehmann (1982, 1994); see section 6 for website). Example (2) presents the
format that is now used in most linguistic publications, with interlinear morpheme-by-
morpheme glosses and a translation in the language of the publication. Examples under
analysis can be presented in various notations, as needed ⫺ generally in the orthogra-
phy of the source. Grammatical morphemes are indicated by glosses in small caps,
following a standard list; lexical items are given in the language of the publication
(which we refer to as the ‘description language’). The term ‘gloss’ refers both to the
grammatical codes and translations of lexical items in the second line. There is a strict
correspondence of units, indicated by hyphens, in both lines. A free translation or
paraphrase is given, again in the language of the analysis, at whatever level of specific-
ity is necessary for exposition. In the Turkish example in (2), dat is dative case, acc is
accusative case, and pst is past tense.

(2) Dede çocuğ-a top-u ver-di [Turkish]
grandfather child-dat ball-acc give-pst
‘Grandfather gave (the) child (the) ball.’

The Leipzig Glossing Rules note: “Glosses are part of the analysis, not part of the
data” (p. 2). In this example, there would be little disagreement about dat and pst, but
more arguable examples are also discussed on the website.
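
The strict correspondence of units also lends itself to mechanical checking. The following is a minimal sketch (ours, in Python; it is not part of the Leipzig Glossing Rules themselves) of such a check: the example line and the gloss line must contain the same number of tokens, and each token the same number of hyphen-separated units as its gloss.

    def check_alignment(example, gloss):
        # Tokens must correspond one-to-one, and each token must contain
        # as many hyphen-separated units as its gloss.
        ex_tokens, gl_tokens = example.split(), gloss.split()
        if len(ex_tokens) != len(gl_tokens):
            return False
        return all(len(e.split("-")) == len(g.split("-"))
                   for e, g in zip(ex_tokens, gl_tokens))

    # The Turkish example in (2):
    check_alignment("Dede çocuğ-a top-u ver-di",
                    "grandfather child-dat ball-acc give-pst")  # True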
A profession-wide standard makes it possible to carry out systematic crosslinguistic,
typological, and diachronic analyses without having a command of every language in
the sample, and without knowing how to pronounce the examples. To give a simple
example, compare (2) with its Korean equivalent in (3) (from Kim 1997, 340). The
sentence is not given in Korean orthography, which would be impenetrable to an Eng-
lish-language reader; rather, a standard romanization is presented. Additional gram-
matical terms used in the Korean example are nom (‘nominative’), hon (‘honorific’),
and decl (‘declarative’).

(3) Halapeci-ka ai-hanthey kong-ul cwu-si-ess-ta [Korean]
grandfather-nom child-dat ball-acc give-hon-pst-decl
‘Grandfather gave (the) child (the) ball.’

Comparing (2) and (3), one can see that the Turkish and Korean examples are both
verb-final; that subject, indirect object, and direct object precede the verb; and that
verb arguments receive case-marking suffixes. In addition, Korean has an honorific
marker. One might propose that the two languages are typologically similar. This is a
quick comparison of only two utterances, but it illustrates the value of standardized
morphological glossing in crosslinguistic and typological comparisons. Morphosyntactic
analysis generally does not require phonological transcription or access to audio exam-
ples of utterances.
The study of sign languages is historically very recent, and the field has not yet
achieved the level of careful standardization that is found in the linguistics of spoken
languages. In this chapter, we give a brief overview of several attempts to represent
the forms and meanings of signed utterances on the printed page. Notation systems
proliferate, and we cannot present all of them. We limit ourselves to the task of tran-
scription, which we understand as the representation of signed utterances (generally
preserved in video format) in the two-dimensional, linear medium of print. A transcrip-
tion makes use of a notation system ⫺ that is, a static visual means of capturing a
signed performance or presenting hypothetical sign language examples.
Various formats have been used for notation, because signed utterances make si-
multaneous use of a number of articulators. Formats include subscripts, superscripts,
and parallel horizontal arrays of symbols and lines, often resembling a musical score.
In this chapter, we are concerned with the portions of notation systems that play a role
in linguistic analysis. In this regard, it is particularly important to independently indi-
cate form ⫺ such as handshape or head movement ⫺ and function ⫺ such as reference
to shape or indication of negation. Because forms can be executed simultaneously ⫺
making use of hands, head, face, and body ⫺ sign language transcription faces particu-
lar challenges of representing co-occurring forms with varying temporal contours. The
chapter treats issues of form and function separately. Section 2 deals with issues of
representing signed utterances in terms of articulation and phonology. Section 3 at-
tends to levels of meaning, including morphology, syntax, and discourse organization.
The fourth section deals with multimedia formats of data presentation, followed by a
concluding section.

2. Transcription at the level of phonology


Over the past 40 years, sign languages achieved recognition as full human languages,
in part as a result of the analysis of formational structure. The pioneer in linguistic
description of sign language was William Stokoe at Gallaudet University in Washing-
ton, DC (see chapter 38 on the history of sign linguistics). To appreciate Stokoe’s
contribution, we step back briefly to consider the distinction between writing and tran-
scription.
Writing is so much a part of the life of those who read and write easily that it is
sometimes difficult to remember that writing is actually a secondary form of language,
derived from continuous, ephemeral, real-time expressive forms (speech or signing).
The representation of language in written form, for many languages, separates words
from one another (prompting the comment from at least one literate adult fitted with
a cochlear implant, “I can’t hear the spaces between the words”). Written form elimi-
nates many of the dynamic characteristics of speech, including pitch, intonation, vowel
lengthening, pausing, and other time-relevant aspects of the signal. We include imita-
tive grunts and growls in vocal narrative to describe the sounds made by our car, or
the surprise felt when snow slides off the roof just as we open the door. We use a
variety of typographic conventions (punctuation, bold face, italics) to indicate a few of
those dynamics, but we tolerate the loss of many of them as well.
When we look at signing with the idea of preserving the ephemeral signal in written
form, we are confronted by decisions about what is language and what is an expressive
gesture. There is not a long tradition of writing, nor standards for what ought to be
captured in written form, for signing communities. There is no Chaucer, Shakespeare,
or Brontë to provide models for us of written signing from hundreds of years ago. The
recordings of deaf teachers and leaders in the US from the early part of the twentieth
century provide us with models of more and less formal discourse (Padden 2004; Su-
palla 2004). The Hotchkiss lecture (1912), recalling his childhood memories of Laurent
Clerc, includes a few examples of presumably intentionally humorous signs (the elon-
gated long) that illustrate a dynamic which might be difficult to capture in writing.
For those who haven’t seen this example, the sign which draws the index finger of the
dominant hand along the back of the opposite forearm is performed by drawing the
finger from wrist to shoulder and beyond, followed by a laugh.
Moreover, the distinction between ordinary writing (whether listmaking or litera-
ture) and scientific notation is a further refinement of what counts as worthy of repre-
sentation. This chapter focuses on scientific notations that are capable of recording
and representing sign languages, especially as they have evolved in communities of use.
Scientific notation for language ⫺ transcription ⫺ aims to include just enough symbols
to represent all the signs of any natural (sign) language (and probably some of the
invented systems that augment the languages). Scientific notations typically do not
have good tools or symbols for some of the dynamic aspects (pitch, in speech; size of
signing space in sign), unless the aspects are systematic and meaningful throughout
a speech community. Otherwise, there are side comments, footnotes, or other extra-
notational ways of capturing these performance variants.

2.1. Stokoe Notation

William Stokoe posited that the sign language in North America used conventionally
by deaf people is composed of a finite set of elements that recombine in structured
ways to create an unlimited number of meaningful ‘words’ (Stokoe 1960 [1978]).
Stokoe’s analysis went further, to define a set of symbols that notate the components
of each sign of American Sign Language (ASL). He and his collaborators used these
symbols again in A Dictionary of American Sign Language on Linguistic Principles
(Stokoe/Casterline/Croneberg 1965), the first comprehensive lexicon of a sign language
arranged by an order of structural elements, rather than by their translation into a
spoken (or written) language, or by semantic classes. In those days before easy access
to computers, Stokoe commissioned a custom accessory for the IBM Selectric type-
writer that would give him and others access to the specialized symbol set needed to
write ASL in its own ‘spelling system’.
Stokoe Notation claims to capture three dimensions of signs. He assigned invented
names to these dimensions, tabula or tab for location, designator or dez for handshapes,
and signation or sig for movement. A close examination of the Dictionary shows that
at least one (and perhaps two or three) additional dimensions are encoded or predict-
able from the notations, namely, orientation of the hands, position of hands relative to
each other, and change of shape or position. It’s largely an emic system ⫺ that is, it is
aimed at the level of categorical distinctions between sign formatives (cf. phonemic as
contrasted with phonetic). In that regard, it is ingeniously designed and parsimonious.
Stokoe Notation makes use of 55 symbols. Capital letter forms are used for 19
symbols (including digits where appropriate) indicating dez (handshapes), with just a
few modifications (e.g., erasing the top of the number 8 to indicate that thumb and
middle finger have no contact in this handshape, as they would for the number 8 in
ASL (s)). The system uses 12 symbols (including a null for neutral space) for tab
(locations), which evoke depiction of body parts, such as h for forehead or a for
wrist surface of a fist. Finally, 24 symbols (including <, >, and n, for example) indicate
movements of the hands (sig).
Positions of the symbols within the written form are important: the order of mention
is tab followed by dez followed by sig. The position of sig symbols stacked vertically
indicates movements that are realized simultaneously (such as circling in a forward
[away from the signer’s body] motion), while sig symbols arranged left to right indicate
successive motions (such as contact followed by opening of the hand).
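
This positional logic suggests a simple data model. The sketch below (in Python) is our own illustration, not part of any published implementation: it represents an entry as a tab, a dez, and a sig sequence in which each step is a set of simultaneously executed movement symbols.

    from dataclasses import dataclass

    @dataclass
    class StokoeSign:
        tab: str     # location symbol
        dez: tuple   # one or two handshape symbols
        sig: list    # sequence of sets of simultaneous movement symbols

    # Symbol names are spelled out here for readability; the real notation
    # uses Stokoe's special characters. Circling while moving forward is one
    # step with two stacked symbols; contact followed by opening of the hand
    # is two successive one-symbol steps.
    circling_forward = StokoeSign("neutral", ("B",), [{"circle", "forward"}])
    touch_then_open = StokoeSign("chin", ("O",), [{"contact"}, {"open"}])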

(4) h5’5∞ Ø5 v• Ο ɒ Ο ɒ a,a∞ 55>< [ASL]
grandfather child give ball
‘Grandfather gives (or gave) the child the ball.’

Example (4) shows the four signs that participate in the utterance ‘Grandfather gave
the child the ball’, but it does not account for at least two important adjustments on
the sign for ‘give’ that would happen to the signs in the performance of this sentence
in ordinary ASL. The sign for ‘give’ would likely assimilate handshapes to the sign for
‘ball’, to indicate transfer of the literal object, and would be performed downward,
from the position of an adult to the position of a (smaller) child. Nor does the sequence
of signs noted show eye gaze behavior accompanying the signs.
Stokoe Notation writes from the signer’s point of view, where the assumption is
that asymmetrical signs are performed with the right hand dominant, as some of the
notational marks refer to ‘left’ or ‘right’. The system was intended for canonical forms
and dictionary entries, rather than for signs as performed in running narrative or
conversation. It does not account for morphological adjustments to signs in utterances,
nor for timing (velocity, acceleration, pausing), overlap (between interlocutors),
grammatical derivation, performance errors, or ordinary rapid ‘speech’ phenomena.
Stokoe’s notation makes no attempt to account for movement of the body in space,
which an etic system would do ⫺ that is, a system aimed at the fine-grained components
of ‘phonemic’ categories. Stokoe, late in his life, made the case that it was etic, but did
not propose additional symbols or combinations of symbols to illustrate how the nota-
tion fulfilled that goal.
Idiosyncratic variants of this notation system can render some, but not all, of the
adjustments needed to capture signs in live utterances rather than as single lexical
items out of context.

2.1.1. Modifications of Stokoe Notation

An important translation of Stokoe Notation was conducted by a team at Northeastern
University in the late 1970s. Mark Mandel’s (1993) paper about a computer-writeable
transliteration system proposed a linear encoding into 7-bit ASCII. This ASCII version
was successfully used to encode the Dictionary of ASL in order to, for example, use
automatic means to tally 1628 non-compounds, by interrogating all the dictionary en-
tries. The system classified that subset of the whole Dictionary of ASL into those that
use two hands with symmetrical handshapes or those which use two hands where one
hand is the base and the other hand the actor (a dominance relationship). The advanta-
ges of an encoding system which can be parsed by machine go well beyond this simple
tally, and would allow comparison of encoded utterances at many levels of language
structure.
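
As a minimal illustration of what such machine parsing can look like, the sketch below (in Python) tallies two-handed entries into the two classes just mentioned. The record layout, glosses, and handshape labels are invented for the example and do not reproduce Mandel’s actual ASCII scheme.

    # Hypothetical two-handed entries; each lists the dez symbol of each hand.
    entries = [
        {"gloss": "sign-1", "hands": ("A", "A")},
        {"gloss": "sign-2", "hands": ("G", "B")},
    ]

    def classify(entry):
        # Symmetrical if both hands share a handshape; otherwise one hand is
        # the base and the other the actor (a dominance relationship).
        first, second = entry["hands"]
        return "symmetrical" if first == second else "dominance"

    tally = {}
    for entry in entries:
        kind = classify(entry)
        tally[kind] = tally.get(kind, 0) + 1
    # tally == {'symmetrical': 1, 'dominance': 1}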

2.1.2. Influence on the analysis of other sign languages

Stokoe’s analysis (and dictionary) has influenced the analysis of many other sign lan-
guages, and served as a model for dictionaries of Australian Sign Language (Auslan)
and British Sign Language (BSL), among others.
Elena Radutzky (1992), in conjunction with several Italian collaborators and with
Lloyd Anderson, a linguist who specializes in writing systems, modified Stokoe’s nota-
tion to consider the difference between absolute (geometric) space and the relative
space as performed by an individual. As might be anticipated, the Italian team found
gaps in the handshape inventory, and especially in the angle of movement. Their analy-
sis was used in the Dictionary of Italian Sign Language.

2.2. SignFont

Developed by Emerson & Stern Associates, an educational software company, SignFont was
created by a multidisciplinary team, under a small business grant from the US govern-
ment. The project’s software specialists worked in conjunction with linguists (Don
Newkirk and Marina McIntire) fluent in ASL, deaf consultants, and an artist. The font
itself was designed by Brenda Castillo (Deaf) and the late Frank Allen Paul (a hearing
sign language artist), with explicit design criteria (such as print density, optimality of
code, and sequential alphabetic script). Field tests with middle-school-age students
showed that SignFont is usable after relatively brief instruction.
Example (5) gives the SignFont version of the proposition presented in Stokoe
Notation in (4). In contrast to (4), in example (5), the sign for ‘give’ follows the sign
for ‘ball’ and incorporates the handshape indicating the shape of the item given. (This
handwritten example is based on the Edmark Handbook (Newkirk 1989b) and Exer-
cise book (Newkirk 1989a) which explicate the principles for using SignFont and give
a number of examples. Handwritten forms are presented here to underline the in-
tended use of SignFont as a writing system for the Deaf, in addition to a scientific
tool. The same is true of SignWriting, discussed in the following section.)

(5) [handwritten SignFont notation not reproduced]
‘Grandfather gives (or gave) the child the ball.’

In SignFont, the order of elements is different from Stokoe notation: Handshape,
Action Area, Location, Movement. Action Area is distinctive to this system; it de-
scribes which surface or side of the hand is the focus of the action. (In Stokoe notation,
the orientation of the hands is shown in subscripts to the handshapes. The relationship
between the hands would be shown by a few extra symbols for instances in which the
two hands grasp, link, or intersect each other.)

2.3. SignWriting

While we acknowledge the attempts within Stokoe Notation to create symbols that are
mnemonic in part because of their partial pictorial representation, other systems are
much more obvious in their iconicity. Sutton’s SignWriting, originally a dance notation,
later extended from ballet to other dance forms, martial arts, exercise, and sign lan-
guage(s), was intended to be a means of capturing gestural behavior in the flow of
performance. It joins Labanotation in the goal of memorializing ephemeral performan-
ces. In its sign language variant, SignWriting looks at the sign from the viewer’s point
of view, and has a shorthand (script) form for live transcription. The system is made
up of schematized iconic symbols for hands, face, and body, with additional notations
for location and direction. Examples can be found in an online teaching course (see
section 6 for website).
Example (6) gives the SignWriting version of the proposition previously transcribed
in (4) and (5). In this example, the signs for ‘grandfather’ and ‘child’ are followed by
pointing indices noting the spatial locations assigned to the two participants, and again,
the sign for ‘ball’ precedes the sign for ‘give’, which again incorporates the ball’s shape
and size relative to the participants. Note that the phrasal elements are separated by
horizontal strokes of various weights (the example is also available at
http://www.signbank.org/signpuddle). While SignWriting is usually written in vertical columns, it is
presented here in a horizontal arrangement to save printing space.

(6) [SignWriting notation not reproduced]
‘Grandfather gives (or gave) the child the ball.’

Since its introduction in the mid-1970s, SignWriting has been expanded and adapted
to handle more complex sign language examples, and more different sign languages.
As of this writing, www.signbank.org shows almost 40 countries that use SignWriting,
and catalogues signs from nearly 70 sign languages, though the inventories in any one
may be only a few signs. SignWriting is still nurtured within Sutton’s family of movement
notation systems. SignWriting has its own Deaf Action Committee to vet difficult
examples and advocate for the use of this writing system with schools and communities
where an existing literacy tradition may be new to the deaf population.
Modifications to SignWriting have added detailed means of noting facial gestures.
When written in standard fashion ⫺ with vertical columns arranged from left to right ⫺
phrases or pausing structures can be shown with horizontal marks of several weights.
That is, each sign is shown as a block of symbols. The relationship of the head to the
hand or hands reflects the starting positions in physical space, and is characterized by
the relative positions within the block. The movement arrows suggest a direction,
though the distance traversed is not literally depicted. The division into handshape,
location, and movement types is augmented by the inclusion of facial gestures, and
symbols for repeated and rhythmic movements. Symbols with filled spaces (or made
with bold strokes) contrast with open (empty) versions of the same symbols to indicate
orientation toward or away from the viewer. Half-filled symbols indicate hands which
are oriented toward the centerline of the body (neither toward nor away from the
viewer). In a chart by Cheryl Wren available from the signbank.org website (see sec-
tion 6 for link), symbols used for ASL are inventoried in their variations, categorized
by handshape (for each of 10 different shapes); 39 different face markings (referring
to brows, eyes, nose, and mouth); and movements distinguishing plane and rotation as
well as internal movement (shaking, twisting, and combinations). This two-page chart
does not give examples of signs that exemplify each of the variants but the website
does permit searching for examples by each of the symbols and within a particular
sign language.
Given that the symbol set for SignWriting allows for many variations in orientation
of the sign, the writer may choose to write a more standardized (canonical) version or
may record a particular performance with all its nuanced variants of ‘pronunciation’.
The notation however does not give any hint of morphological information, and may
disguise potential relationships among related signs, while capturing a specific utter-
ance in its richness.
2.4. Hamburg Notation System (HamNoSys)

The Hamburg Notation System (HamNoSys) was developed by a research team at
Hamburg University. It was conceived with the idea of being useful for more than a
single sign language, but with its understanding of signs following Stokoe’s general
analysis. Moreover, it was developed in conjunction with a standard computer font,
keyboard mapping, and including new markings for specific locations (regions of the
body) at ever more detailed levels (the full system can be found online, along with
Macintosh and Windows fonts; see section 6 for website). The catalogue of movements,
for example, distinguishes between absolute (the example shows contact with one side
of the chest followed by the other) and relative movements; and path movements are
distinguished from local movements (where there is no change of location accompany-
ing the movement of the sign). Non-manual movements (especially head shakes, nods,
and rotations) are also catalogued within the movement group.
Example (7) gives the HamNoSys version of our proposition, ‘grandfather give child
ball’, with every line representing one sign. In this example again, the signs for ‘grand-
father’ and ‘child’ are followed by pointing indices noting the spatial locations assigned
to the two participants, and again, the sign for ‘ball’ precedes the sign for ‘give’, which again
incorporates the ball’s shape and size relative to the participants.

(7) [HamNoSys notation not reproduced]
grandfather index child index ball give-(ball)
‘Grandfather gives (or gave) the child the ball.’

2.5. Discussion and comparison

In a contribution to the Sign Language Linguists List (SLLING-L), Don Newkirk
(1997) compares the several writing systems for the audience of the list:

What most distinguishes many of the linear notations, including SignFont, HamNoSys,
[Newkirk’s] early “literal orthography”, and Stokoe, from SignWriting and DanceWriting
lies more in the degree to which the linguistic structure of the underlying sign is expressed
in the mathematical structure of the formulas into which the however iconic symbols of
the script are introduced. The 4-dimensional structure of signs is represented in SignFont,
for example, as 2-dimensional iconic symbols presented in a 1-dimensional string in the
time domain. In SignWriting, the 3 spatial dimensions are more doggedly shown in the
notation, but much of the 4th dimensional character of signing is obscured in the quite
arbitrary (but rich) movement set.
A longer paper describing Newkirk’s “literal orthography” (one that uses an ordinary
English typewriter character set to notate signs), that is, his analysis of ASL, appears
no longer to be available from the website.
Whereas SignWriting is used in various pedagogical settings (often as a writing
system for deaf children), HamNoSys is used for linguistic analysis, initially by German
sign language linguists, and later by others. Although SignWriting has been applied to
several languages, Stokoe notation and HamNoSys have been most deeply used in the
languages for which they were developed: ASL for Stokoe, German Sign Language
(DGS) for HamNoSys. The Dutch KOMVA Project (Schermer 2003) used a notation
system based on Stokoe’s notation. Their inventory of regionally distinct signs (for the
same meanings) established regional variants in order to work toward a national stand-
ard on the lexical level (see chapter 37, Language Planning, for details). Thus, there
remains a gap for a standard, linguistically based set of conventions for the transcrip-
tion of sign languages on the phonetic and phonological levels.
SignWriting does not account for the movement of the body in space, but has the
potential to do so given its origins as a dance notation system. It does not capture
timing information, nor interaction between participants in conversation. (As a dance
notation it ought to be able to consider at least two and probably more participants.)
The several systems we have briefly surveyed share two common challenges to ease
of use: transparency of conventions and computational implementation. Experience in
using these systems makes it clear that one cannot make proper notations without
knowledge of the language. For example, segmentation of an utterance into separate
signs is often not visually evident to an observer who does not know the language. In
addition, the fact that all of the systems mentioned here use non-roman character sets
would have prevented them from sharing common input methods, keyboard mapping,
and more importantly, compatible searching and sorting methods to facilitate common
access to materials created using these representations. Mandel’s 7-bit ASCII notation
for the Stokoe system was one attempt to surmount this problem. Creating a Unicode
representation for encoding non-ASCII character sets on personal computers and on
the web is relatively straightforward technologically, and has already been done for
the SignWriting fonts.
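
To illustrate, assuming the code-point range eventually standardized for Sutton SignWriting (U+1D800⫺U+1DAAF), a minimal test for SignWriting-encoded text can be written as follows. This is our sketch: it checks code points only and says nothing about the two-dimensional layout of a SignWriting block.

    # Assumes the Sutton SignWriting code points U+1D800..U+1DAAF.
    SIGNWRITING = range(0x1D800, 0x1DAB0)

    def is_signwriting(text):
        # True if every non-space character lies in the SignWriting block.
        return all(ord(ch) in SIGNWRITING
                   for ch in text if not ch.isspace())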
The notation of a sign language using any of these systems will still yield a represen-
tation of the signs for their physical forms, rather than a more abstract level of part of
speech or morphological components. That is, the signs written with Stokoe’s notation,
SignWriting, or HamNoSys give forms at a level equivalent to phonetic or phonological
representation, rather than on the morphological level. In the following section, we turn
to problems of representing meaning ⫺ which can be classed as morphology, syntax,
or discourse.

3. Transcription at the level of morphology

3.1. Lack of standardization

The field of sign language linguistics is in disarray with regard to analysis of the internal
components of signs ⫺ from the points of view of both grammatical morphology and
lexicon. In this chapter, we do not limit ourselves to strictly ‘grammatical’ components,
because there is no consensus on the lines between syntactic, semantic, and pragmatic
uses of the many co-occurring elements of signed utterances, including handshapes,
movement, body orientation, eyes, mouth, lips, and more. Furthermore, signed utteran-
ces make use of gestural devices to varying degrees. Morphology, broadly construed,
deals with the internal structure of words as related to grammatical structure ⫺ and
this is the topic we address. Sign linguists are in agreement that most verbs are poly-
componential (to neutralize Elisabeth Engberg-Pedersen’s (1993) term, polymor-
phemic). Nouns, too, are often polycomponential, and in many instances a single root
form is used to derive both nouns and verbs. It is not always clear where to draw
boundaries between words, especially in instances where part of a sign can be main-
tained during the production of a subsequent sign. We propose here that much more
detailed descriptive data are needed, of many sign languages, before we can confidently
sort components into the levels of traditional linguistic analysis of spoken languages.
(And, indeed, those levels will be subject to reconsideration on the basis of ongoing
sign language linguistics.)
Sign language researchers are generally provided with minimal guidelines for tran-
scription (see, for example, Baker/van den Bogaerde/Woll 2005; as well as papers in
Bergman et al. 2001). Typically, lexical elements are transcribed in capital letter or
small caps glosses, using words drawn from either the spoken language of the surround-
ing community or the language of publication, with many different types of subscripts,
superscripts, and special symbols. Non-manual elements are commonly given in a line
above the capital letter glosses. And a translation is provided in parentheses or en-
closed in single quotes. Additional symbols such as hyphens, underscores, plus signs,
circumflexes, and arrows are used differently from author to author. Moreover, there
is no standardization of abbreviations (e.g., a point to self designating first person may
be indicated by index-1, ix1, pro-1, me, and other devices). Papers sometimes include
an appendix or a footnote listing the author’s notational devices. In most instances,
examples cannot be readily interpreted without the provision of some pictorial
means ⫺ photographs, line drawings, and nowadays, links to videoclips. All of this is
radically different from the well-established standard format for the representation of
components of spoken language utterances, as briefly discussed in section 1.
In the following, we provide several examples of the diversity of notational devices.
Many more can be found in the chapters of this handbook. Example (8) provides a
typical ASL example, drawn at random from “The Green Book” of ASL instruction
(Baker-Shenk/Cokely 1980, 148). The horizontal lines are useful in indicating the scope
of the topic (“top”) and negation (“neg”) non-manuals ⫺ but note that there is no
convenient way of using this sort of scope information in an automatic analysis of a
database. Note, also, that an unspecified sign directed at the self is glossed here as me.

top neg
(8) write paper, not-yet me [ASL]
‘I haven’t written the paper yet.’

The examples in (9) to (12) illustrate quite different ways of indicating person. Exam-
ples (9) and (10) are from DGS (Rathmann/Mathur 2005, 238; Pfau/Steinbach 2003,
11), (11) is from ASL (Liddell 2003, 132), and (12) is from Danish Sign Language
(DSL) (Engberg-Pedersen 1993, 57).
(9) first(sg)asknonfirst(pl) [DGS]
‘I asked them.’
(10) p_x p_y blume x_geb_y [DGS]
index index flower agr.s:give:agr.o
‘S/he is giving him/her a flower.’
(11) pro-1 look-at→y [ASL]
‘I look at him/her/it.’
(12) pron+fl deceive+fr [fl = forward left, fr = forward right] [DSL]
‘He_j deceives him_i.’

Each of these devices is transparent if one has learned the author’s conventions. Each
presents a different sort of information ⫺ e.g., first person singular versus first-person
pronoun; direct object indicated by a subscript following a verb or by an arrow and
superscript; directedness toward a spatial locus. And, again, there is no convenient
automatic way of accessing or summarizing such information within an individual
analysis or across analyses. We return to these issues below.

3.2. Problems of glossing

The line of glosses ⫺ regardless of the format ⫺ is problematic from a linguistic point
of view. In the glossing conventions for spoken languages, the first line presents a
linguistic example and the second line presents morpheme-by-morpheme glosses of
the first line. The first line is given either in standard orthography (especially if the
example is drawn from a written language) or in some sort of phonetic or phonemic
transcription. It is intended to provide a schematic representation of the form of the
linguistic entity in question. For example, in Comrie’s (1981) survey of the languages
of the then Soviet Union, Estonian examples are presented in their normal Latin or-
thography, such as (13) (Comrie 1981, 137):

(13) ma p-ole korteri-peremees [Estonian]
I neg-be apartment-owner
‘I am not the apartment owner.’

For languages that do not have literary traditions, Comrie uses IPA, as in example
(14), from Chukchi, spoken by a small population in an isolated part of eastern Siberia
(Comrie 1981, 250):

(14) tə-γətγ-əlqət-ərkən [Chukchi]
1sg-lake-go-pres
‘I am going to the lake.’

In (13) and (14), the second line consists entirely of English words and standard gram-
matical abbreviations, and it can be read and interpreted without knowledge of the
acoustic/articulatory production that formed the basis for the orthographic representa-
tion in the first line. (Note that in (13) ma is an independent word and is glossed as
‘I’, whereas in (14) tə- is a bound morpheme and therefore is glossed as ‘1sg-’.)
In most publications on sign languages, there is no equivalent of the first line in
(13) and (14). The visual/articulatory form of the example is sometimes available in
one of the phonological notations discussed in section 2, or in pictorial or video form,
or both. However, there is also no consensus on the information to be presented in
the line of morpheme-by-morpheme glosses. Example (8) indicates non-manual ex-
pressions of topic and negation; examples (9⫺11) use subscripts or superscripts; (11)
and (12) provide explicit directional information. Directional information is implicitly
provided in (9) and (10) by the placement of subscripts on either side of a capital letter
verb gloss. The first lines of (9), (11), and (12) need no further glossing and are simply
followed by translations. By contrast, (10) provides a second line with another version
of glossing. That second line is problematic: ‘p_x’ is further glossed as ‘index’ before
being translated into ‘s/he’, and the subscripts that frame the verb in the first line of
glosses are replaced by grammatical notations in the second line of glosses. Beyond
that, nothing is added by translating German blume and geb into English ‘flower’ and
‘give’. In fact, there is no reason ⫺ in an English-language publication ⫺ to use Ger-
man words in glossing DGS (or Turkish words in glossing Turkish Sign Language, or
French words in glossing French Sign Language, etc.). DGS is not a form of German.
It is a quite different language that is used in German-speaking territory. Comrie did
not gloss Chukchi first into Russian and then into English, although Russian is the
dominant literary language in Siberia. The DGS and DSL examples in (9) and (12)
appropriately use only English words. We suggest that publications in sign language
linguistics provide glosses only in the language of the publication ⫺ that is, the descrip-
tion language. Thus, for example, an article published in German about ASL should
not use capital letter English words, but rather German words, because ASL is not a
form of English. This requires, of course, that the linguist grasp the meanings of the
signs being analyzed, just as Comrie had to have access to the meanings of Estonian
and Chukchi lexical items. The only proper function of a line of morpheme-by-mor-
pheme glosses is to provide the meanings of the morphemes ⫺ lexical and grammatical.
Other types of information should be presented elsewhere.
Capital letter glosses are deceptive with regard to the meanings of signs. This is
because they inevitably bring with them semantic and structural aspects of the spoken
language from which they are drawn. For example, in a paper written in German about
Austrian Sign Language (ÖGS), the following utterance is presented in capitals: du
dolmetscher. It is translated into German as ‘Du bist ein Dolmetscher’ (= ‘You are
an interpreter’) (Skant et al. 2002, 177). German, however, distinguishes between fa-
miliar and polite second-person pronouns, and so what is presumably a point directed
toward a familiar addressee is glossed as the familiar pronoun du and again as ‘du’ in
the translation. In English, the gloss would be you, translated as ‘you’. But ÖGS does
not have familiar and polite pronouns of address. On some analyses, it does not even
have pronouns. Glossing as index-2, for example, would avoid such problems.
More seriously, a gloss can suggest an inappropriate semantic or grammatical analy-
sis, relying on the use of words in the glossing language. Any gloss carries the part-of-
speech membership of a spoken language word, suggesting that the sign in question
belongs to the same category. Frequently, such implicit categorizations are misleading.
In addition, any spoken word ‘equivalent’ will be part of a range of constructions in
the spoken language, but not in the sign language. For example, on the semantic level,
an ASL lexical item that requires the multiword gloss take-advantage-of corresponds
to the meaning of the expression in an English utterance such as, ‘They took advantage
of poorly enforced regulations to make an illegal sale’. However, the ASL form cannot
be used in the equivalent of ‘I was delighted to take advantage of the extended library
hours to prepare for my exams’. There is definitely a sense of ‘exploit a loophole’ or
‘get one over on another’ to the ASL sign, whereas the English expression has a differ-
ent range of meanings.
On the grammatical level, a gloss can suggest an inappropriate analysis, because
words of the description language often fit into different construction types than words
of the sign language. Slobin has recently discussed this issue in detail (Slobin 2008, 124):

Consider the much-discussed ASL verb invite (open palm moving from recipient to
signer). This has been described as a “backwards” verb (Meir 1998; Padden 1988), but
what is backwards about it? The English verb “invite” has a subject (the inviter) and an
object (the invitee): “I invite you”, for example. But is this what ASL 1.SGinvite2.SG means?
If so, it does appear to be backwards since I am the actor (or subject ⫺ note the confusion
between the semantic role of actor and the syntactic role of subject) and you are the
affected person (or object). Therefore, it is backwards for my hand to move from you to
me because my action should go from me to you. The problem is that there is no justifica-
tion for glossing this verb as invite. If instead, for example, we treat the verb as meaning
something like “I offer that you come to me”, then the path of the hand is appropriate.
Note, too, that the open palm is a kind of offering or welcoming hand and that the same
verb could mean welcome or even hire. In addition to the context, my facial expression,
posture, and gaze direction are also relevant. In fact, this is probably a verb that indicates
that the actor is proposing that the addressee move towards the actor and that the ad-
dressee is encouraged to do so. We don’t have an English gloss for this concept, so we are
misled by whatever single verb we choose in English.

The problem is that signs with meanings such as ‘invite’ are polycomponential, not
reducible to single words in another language. What is needed, then, is a consistent
form of representation at the level of meaning components, comparable to morphemic
transcription of spoken languages. We use the term meaning component rather than
morpheme because we lack an accepted grammatical model of sign languages. What is
a gesture to one analyst might be a linguistic element to another; what is a directed
movement to a spatial locus in one model might be an agreement marker in another.
If we can cut loose from favorite models of spoken language we will be in a better
position to begin fashioning adequate notation systems for sign languages. Historically,
we are in a period that is analogous to the early Age of Exploration, when missionaries
and early linguists wrote grammars for colonial languages that were based on familiar
Latin grammatical models. Linguistics has broadened its conception of language struc-
tures over the course of several centuries. Sign language linguistics has had only a few
decades, but we can learn from the misguided attempts of early grammarians, as well
as the more recent successes of linguistic description of diverse languages.
To our knowledge, there is only one system that attempts to represent sign lan-
guages at the same level of granularity as has been established for morphological de-
scription of spoken languages. This is the Berkeley Transcription System (BTS), which
we describe briefly in the following section.
3.3. A first attempt: the Berkeley Transcription System (BTS)


The Berkeley Transcription System (BTS) was developed in the Berkeley Sign Lan-
guage Acquisition Project in the 1990s (headed by Hoiting and Slobin), in order to
deal with videotapes of child⫺caregiver interaction in ASL and Sign Language of the
Netherlands (NGT). The system was developed by teams of signers, Deaf and hearing,
in the US and the Netherlands, working with linguists and psycholinguists in both
countries. Glosses of these two sign languages in English and Dutch made comparisons
impossible, alerting the designers to the dangers of comparing two written languages
rather than two sign languages. Furthermore, glosses in either language did not reveal
the componential richness and multi-modal communication evident in the videos. In
addition, it was necessary to type transcriptions in standard ASCII characters in order
to carry out computer-based searches and summaries. The international child language
field had already provided a standard transcription format, CHAT, linked to a set of
search programs, CLAN. CHAT and CLAN are part of a constantly growing crosslin-
guistic archive of child language data, CHILDES (Child Language Data Exchange
System; see section 6 for website). One goal of BTS is to enable developmental sign
language researchers to contribute their data to the archive. BTS uses CHAT format
and it is now incorporated into the CHILDES system; the full manual can be down-
loaded from the URL mentioned in section 6. A full description and justification of
BTS, together with the 2001 Manual, can be found in Slobin et al. (2001); a concise
overview is presented by Hoiting and Slobin (2001). In addition to ASL and NGT,
BTS has been used in child sign language studies of DGS (unpublished) and BSL. In
addition, Gary Morgan has applied BTS to a developmental study of BSL (see section
6 for a link to his “End of Award Project Summary”, which also includes some sugges-
tions for improvements of BTS).
BTS aims at a sign language equivalent of the sort of morpheme-by-morpheme gloss
line established for spoken languages, as discussed at the beginning of this chapter. A
full BTS transcription can provide information on various tiers, including phonology,
translation, and notations of gesture and concurrent behavior. Our focus here is on the
level of meaning and the task of notating those components of complex signs that can
be productively used to create meaningful complex signs. These components are man-
ual and non-manual. Because we are far from consensus on the linguistic status of
all meaning components, BTS does not refer to them as “morphemes”; the eventual
theoretical aim, however, is to arrive at such consensus and provide a means of count-
ing morphemes for developmental and crosslinguistic analysis. Signs that are not made
up of recombinable elements of form, such as the juxtaposed open palms meaning
‘book’, are simply presented in traditional capital letter form, book. Although this sign
may have an iconic origin, its components are not recombined to create signs related
in meaning. Note, however, that when a sideward reciprocating movement is added,
NGT verbalizes the sign into ‘to read to’, and addition of a repeated up-and-down
movement yields a verbal sign meaning ‘to study’. It is on the plane of polycomponenti-
ality that analysis into meaning components is required.

3.3.1. Manual components of signs

BTS is especially directed at the internal structure of verbs of location, placement,
movement, transitive action, and the like ⫺ that is, verbs that are traditionally de-
scribed as ‘classifier predicates’. The ‘classifier’ is a handshape that identifies referents
by indicating a relevant property of that referent (see chapter 8 for discussion). The
function of such elements is not to classify or categorize, but to indicate reference. For
example, in a given sign language, a human being may be designated by several differ-
ent ‘classifiers’, which single out a discourse-relevant attribute of the person in question
(upright, walking, adversarial, etc.). Therefore, BTS treats such meaning components
as property markers, which are combined with other co-occurring components that
specify event dimensions such as source, goal, path, manner, aspect, modality, and
more (see Slobin et al. (2003) for justification of the replacement of ‘classifier’ by
‘property marker’).
Property markers are indicated in terms of meaning, rather than form, using a stand-
ardized capital-letter notation. Lower-case letters are used to indicate the (roughly)
morphological meaning of a category, with upper-case letters giving a specific meaning
within the category. For example, all property markers are indicated by pm’, all loca-
tions are indicated by loc’, and so forth. The inverted-V (W) handshape that indicates
an erect human being in some sign languages is notated as TL (two-legged animate
being), rather than “inverted-V” or other formal designations which are properly part
of phonological, rather than morphological transcription. The upper-case abbreviations
are intended to be transparent to users, similar to acc (‘accusative’) in standard mor-
phological codes. Underscores are used to create more complex semantic codes, such
as PL_VL (plane-vertical = ‘vertical plane’) or PL_VL_TOP (plane-vertical-top = ‘the
top of a vertical plane’).
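
The underscore convention makes such complex codes mechanically decomposable. A minimal sketch (ours, in Python; the meanings dictionary is illustrative rather than part of the BTS specification):

    def expand(code, meanings):
        # Split an underscore-built BTS code into its semantic parts.
        return [meanings[part] for part in code.split("_")]

    meanings = {"PL": "plane", "VL": "vertical", "TOP": "top"}
    expand("PL_VL_TOP", meanings)  # ['plane', 'vertical', 'top']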
The designations are abbreviations of English words, for convenience of English-
speaking users. But the language of presentation is independent of the rationale of the
transcription system. In the NGT version of BTS, these codes have Dutch equivalents,
in order to make the system accessible to Dutch users who may not be literate in
English. Similar accommodation is familiar in linguistic descriptions of spoken lan-
guages where, for example, past in English-language publications corresponds to vgh
(Vergangenheit = past) in German-language publications.
Polycomponential signs are made up of many meaning components. These are sepa-
rated by hyphens in order to allow for automatic counting of item complexity, on a
par with morpheme counts in spoken languages. The model comes from linguistics,
where a symbol such as cl (‘classifier’) is expanded by its specific lexical category.
Consider the following two examples, cited by Grinevald and Seifart (2004): water it-
cl(liquid)-fall (Gunwinggu, p. 263); dem.prox-cl(disc) ‘this one (coin, button, etc.)’
(Tswana, p. 269). The punctuation format of BTS is designed to be compatible with
CHAT, but maintaining the tradition of presenting meanings in capital letters. The
meanings, however, are generally more abstract than English lexical glosses, especially
with regard to verbs ⫺ the polycomponential heart of sign languages.
Consider our standard example, ‘grandfather give child ball’, as transcribed from
NGT in BTS format in (15). The three nouns do not require morphological analysis
and are given in small caps. The two human participants are indexed at different spatial
locations, (a) and (b), with the second location ⫺ that of the child ⫺ at a lower location
(loc’INF = location: inferior).

(15) grandfather ix_3(a) ball child ix_3(b)-loc’INF [NGT]
pm’SPHERE-src’3(a)-gol’3(b)-pth’D
In contrast to the nouns, the verb is polycomponential, consisting of a cupped hand
moving in a downward direction from one established locus in signing space to another.
The verb thus consists of a property marker (SPHERE), a starting point of motion
(src’3(a) = source: locus 3(a)), a goal of motion (gol’3(b) = goal: locus 3(b)), and a
downward path (pth’D = path: down). By convention, a combination of src-gol entails
directed motion from source toward goal. On this analysis, the NGT verb equivalent
to ‘give’ in this context has four meaning components: pm, src, gol, pth.
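
Because BTS is pure ASCII and separates meaning components with hyphens, complexity counts of the kind intended for developmental comparison reduce to simple string operations. A minimal sketch (ours, not part of the CLAN tools):

    def component_count(bts_item):
        # Count the hyphen-separated meaning components of one BTS item.
        return len(bts_item.split("-"))

    # The NGT verb in (15): pm, src, gol, pth
    component_count("pm'SPHERE-src'3(a)-gol'3(b)-pth'D")  # 4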

3.3.2. Non-manual components of signing

Facial cues, gaze, and body position provide crucial indications in sign languages,
roughly comparable to prosody in speech and punctuation in writing. Indeed non-
manual devices are organizers, structuring meaning in connected discourse. On the
utterance level, non-manuals distinguish topic, comment, and quotation; and speech
acts are also designated by such cues (declarative, negative, imperative, various inter-
rogatives, etc.). Non-manuals modulate verb meanings as well, adding conventionalized
expressions of affect and manner; and gaze can supplement pointing or carry out deic-
tic functions on its own. Critically, non-manuals are simultaneous with stretches of
manual signing, with scope over the meanings expressed on the hands. Generally, the
scope of non-manuals is represented by a line over a gloss, accompanied by abbrevia-
tions for functions, such as ‘neg’ or ‘wh-q’, as shown in example (8), above. Gaze
allocation, however, is hardly ever notated, although it can have decisive grammatical
implications. BTS has ASCII notational devices for indicating gaze and the scope of
non-manual components, including grammatical operators, semantic modification, af-
fect, discourse markers (e.g., agreement, confirmation check), and role shift. The fol-
lowing examples demonstrate types of non-manuals, with BTS transcription, expanding
the scenario of ‘grandfather give child ball’.
Grammatical operators ⫺ such as negation, interrogation, topicality ⫺ are tempo-
rally extended in sign languages, indicating scope over a phrase or clause. Modulations
of meaning, such as superlative degree or intensity of signing, can have scope over
individual items or series of signs. BTS indicates onset and offset of a non-manual by
means of a circumflex (^), in order to maintain linear ASCII notation for computer
analysis. For example, operators are indicated by ^opr’X …^, where X provides the
semantic/functional content of the particular operator, such as ^opr’NEG in the follow-
ing example. Here someone asserts that grandfather did not give the child a ball, negat-
ing the utterance represented above in example (15).

(16) grandfather ix_3(a) ball child ix_3(b)-loc’INF [ASL]
^opr’NEG pm’SPHERE-src’3(a)-gol’3(b)-pth’D^
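
The bracketing circumflexes make such scopes machine-recoverable. The sketch below (ours) assumes non-nested spans and the plain ASCII apostrophe used in actual BTS transcripts, and extracts each non-manual label together with the stretch of manual signing over which it has scope:

    import re

    def nonmanual_scopes(line):
        # Yield (label, scoped material) for every ^cat'VALUE ... ^ span.
        for match in re.finditer(r"\^(\w+'\w+)\s+(.*?)\^", line):
            yield match.group(1), match.group(2).strip()

    line = "^opr'NEG pm'SPHERE-src'3(a)-gol'3(b)-pth'D^"
    list(nonmanual_scopes(line))
    # [("opr'NEG", "pm'SPHERE-src'3(a)-gol'3(b)-pth'D")]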

Discourse markers regulate the flow of communication between participants, checking


for comprehension, indicating agreement, and so forth. In spoken languages, such
markers fall into linguistic analysis when they are realized phonologically, and are
problematic when expressed by intonation contours (and ignored when expressed by
modulations of face or body). In both speech and sign, the full range of discourse
markers deserves careful linguistic description and analysis. The BTS notation is
^dis’X … ^, following the standard notation convention of lower-case linguistic cat-
egory and upper-case content category, and bracketed circumflexes indicating onset
and offset of the non-manual. For example, in NGT a signer can check to see if a
message has been taken up by means of a brief downward head movement accompa-
nied by raised eyebrows and a held direct gaze at the addressee. This non-manual can
be called a “confirmation check”, indicated as ^dis’CONF^, generally executed while
the last manual sign is held. Discourse markers can be layered with other non-manuals,
requiring notation of one or more embeddings.
Gaze allocation serves a variety of communicative functions ⫺ indicating reference,
tracing a path, alternating between narrator and participant perspective, and others.
BTS takes gaze at addressee as default and uses a preposed asterisk to indicate shift
in gaze to an object or location, indicating the target of gaze in lower-case letters
(counting gaze as a meaning component is under discussion, cf. Hoiting 2009). For
example, a shift in gaze to the child would be notated as *child. In example (17),
grandfather looks at the child, points at himself and then tells the child that he (src’1)
will give her (gol’2) the ball (pm’SPHERE).

(17) *child pnt_1 pm’SPHERE-src’1-gol’2-pth’D [ASL]

Role shift is carried out by many aspects of signing that allow the signer to subtly and
quickly shift perspective from self to other participants in a narrative. The means of
role shift have not been explored in detail in the literature, and BTS provides only a
preliminary way of indicating that the signer has shifted to another role, using the
notation ‘RS …’. A simple example is presented in (18). The grandfather now shows
his grandchild how to catch the ball, pretending he himself is the ball-catching child.
He signs that he is a child and then role shifts into the child. This requires him to
dwarf himself, looking upward to the ‘pretend’ grandfather, indicated by a superior
location (loc’SUP), and lifting both his cup-shaped hands (pm’SPHERE) upward
(pth’U = upward path). The role-shifted episode is bracketed with single quotes.

(18) child ix_1 'RS *loc’SUP pm’SPHERE-pth’U ' [ASL]

3.4. Transcribing narrative discourse

Speech, unlike sign and gesture, is not capable of physically representing location and
movement. Instead, vast arrays of morphological and syntactic devices are used, across
spoken languages, to keep track of who is where and to shift attention from one protag-
onist or location to another. Problems of perspective and reference maintenance and
shift are severe in the rapidly fading acoustic medium. Therefore, grammars make use
of pronouns of various sorts, demonstratives, temporal deictics, intonation patterns,
and more. Some of these forms are represented in standard writing systems; some are
only hinted at by the use of punctuation; and many others simply are not written down,
either in everyday writing or in transcription. For example, role shift can be indicated
in speech by a layering of pitch, intonation, and rate, such as a rapid comment at lower
pitch and volume, often with a different voice quality. Sign languages, too, make use
of layered expressions, but with a wider range of options, including rate and magnitude,
but also many parts of the face and body. It is a challenge to systematically record and
notate such devices ⫺ a challenge that must be met by careful descriptive work before
designating a particular device as ‘linguistic’, ‘grammatical’, ‘expressive’, and so forth.
Accordingly, we include all such devices under the broad heading of the expression of
meaning in sign.
Narrative discourse makes use of dimensions that are not readily captured in linear
ASCII notation. A narrator sets up a spatial world and navigates between parts of it;
the world may contain “surrogates” (Liddell 2003) representing entities in a real-life
scale; the body can be partitioned to represent the signer and narrative participants
(Dudis 2004); part of the body can remain fixed across changes in other parts, serving
as a “buoy” (Liddell 2003) or a “referring expression” (Bergman/Wallin 2003), func-
tioning to maintain reference across clauses. Attempts have been made to notate such
complex aspects of discourse by the use of diagrams, pictures with arrows, and multilin-
ear formats. Recent attempts make use of multimedia data presentations, with multilin-
ear coding and real-time capture, as discussed in section 4. Here we only point out a
few of the very many representations that have been employed in attempts to solve
some of these problems of transcription.

3.4.1. Positioning and navigation in signing space

Traditional linear notations, as well as BTS, can notate directionality by abbreviations of words such as left, right, forward, etc., but they cannot lay out a full space. There
are many literal or near-literal depictions in the literature, including photographs with
superimposed arrows, line drawings with arrows, and computer-generated stick figures
with some kind of dynamic notation. We exclude these here from discussion of tran-
scription, though they are very useful devices for aiding the reader in visualizing dy-
namic signing. One less literal notational technique consists in a schematized over-
head view.
For example, Liddell (2003, 106) combines linear transcriptions with diagrams to
distinguish between multiple and exhaustive recipients of a directed sign such as ask-
question. The superscript on the linear transcription distinguishes the two forms with
explicit words in square brackets along with Liddell’s invented arrow turning back on
itself, which “indicates that the hand moves along a path, such that the extent of the
path points toward entities a, b, and c” (Liddell 2003, 365). The transcription lines are
accompanied by overhead diagrams with a schematic arrow indicating two types of
hand/arm movements, as illustrated in Figure 43.1.
Morgan, in studies of narratives produced by BSL-signing children, uses two sorts
of schematic diagram, accompanied by symbolic notation. In one example, Morgan
(2005, 125) notates a description of a storybook picture in which a dog makes a beehive
fall from a tree while a boy looks on in shock (Figure 43.2). The notation >< indicates
mutual gaze between signer and addressee, contrasting with notations such as >> ‘look
right’, ^< ‘look up and left’, and others. Time moves downward in this “dynamic
space diagram”.
Morgan (2006, 331) uses a different sort of diagram to map out what each of the
two hands is doing separately, showing an overlap between “fixed referential space”

Fig. 43.1: A combination of linear descriptions and diagrams (Liddell 2003, 106, Fig. 4.8). Copy-
right © 2003 by Cambridge University Press. Reprinted with permission.

Fig. 43.2: Transcribing a description of a storybook picture (Morgan 2005, 125, Fig. 4). Copyright
© 2005 by John Benjamins. Reprinted with permission.

Fig. 43.3: Separate transcriptions for right and left hand (Morgan 2006, 331, Fig. 13⫺7). Copyright
© 2006 by Oxford University Press. Reprinted with permission.

(FRS) and “shifted referential space” (SRS) in a BSL narrative. The caption to his
original figure, which is included in Figure 43.3, explains the notational devices.
Many examples of such diagrams can be found in the literature, accompanied by
the author’s guide to specialized notational devices. This seems to be a domain of
representation of signed communication that requires more than a standardized linear
notation, but the profession could benefit from a consensus on standardized diagram-
matic representation.

3.4.2. Representing surrogates

Liddell has introduced the notion of “surrogate” and “surrogate space” (1994; summa-
rized in Liddell 2003, 141⫺175) in which fictive entities and areas are treated as if they
were present in their natural scale, rather than as miniatures in signing space. A simple
example is the direction of a verb of communication or transfer directed to an absent
third person as if that person were present. If the person is a child, for example, the
gesture and gaze will be directed downward (as in our example of grandfather giving
a ball to a child). There is no standard way of notating surrogates; Liddell makes use
of diagrams drawn from mental space theory, in which “mental space elements” are
blended with “real space”. Taub (2001, 82) represents surrogates by superimposing an
imagined figure as a line drawing onto a space with a photograph of the signer. Both
signs in Figure 43.4 mean ‘I give to her’, but in A the surrogate is an adult and in B it
is a child. Again, a standardized notation is needed.

Fig. 43.4: Representation of surrogate space (Taub 2001, 82, Fig. 5.13). Copyright © 2001 by Cam-
bridge University Press. Reprinted with permission.

3.4.3. Body partitioning

Paul Dudis (2004; Wulf/Dudis 2005) has described how the signer can partition his or
her body to simultaneously present different viewpoints on a scene or different partici-
pants in an event. So far, there is no established means of notating this aspect of
signing. For example, Dudis (2004, 232) provides a picture of a signer demonstrating
that someone was struck in the face. The signer’s face and facial expression indicates
the victim, and the arm the assailant. Dudis follows Liddell’s (2003) use of the vertical
slash symbol to indicate roles of body partitions, captioning the picture: “The |victim|
and the |assailant’s forearm|”. Here, we have another challenge to sign language tran-
scription.

3.4.4. “Buoys” as reference maintenance devices

Liddell (2003) has introduced the term “buoy” to refer to a sequence of predications
in which one hand is held in a stationary configuration while the other continues pro-
ducing signs. He notes that buoys “help guide the discourse by serving as conceptual
landmarks as the discourse continues” (2003, 223). Consider the following rich example
from Janzen (2008), which includes facial expression of point of view (POV) along
with separate action of the two hands. The right hand (rh) represents the driver’s
vehicle, which remains in place as POV shifts from the driver to an approaching police
van, as represented by the left hand (lh). Janzen (2008, 137) presents several photo-
graphs with superimposed arrows, a lengthy narrative description, and the following
transcription, with a gloss of the second of three utterances (19b), presenting (19a) and
(19c) simply as translations for the purposes of this example.

(19)

Liddell simply presents his examples in a series of photographs from discourse, with no
notational device for indicating buoys. What is needed is a notation that indicates the
handshapes and positions of each of the hands in continuing discourse, often accompa-
nied by rapid shifts in gaze. Bergman and Wallin (2003) provide a multilinear transcrip-
tion format for examples from Swedish Sign Language, making a similar observation
but with different terminology. They offer a format with separate lines for head, brows,
face, eyes, left hand, right hand, and mouth. This format is only readable with reference
to a series of photographs and accompanying textual description.
In sum, we lack adequate notation systems for complex, simultaneous, and rapidly
shifting components of signing in discourse. Various multimedia formats, as discussed
in the following section, promise to provide convenient ways to access these many
types of information, linking transcriptions and notations to video. For purposes of
comparability across studies and databases and sign languages, however, standardized
notation systems are still lacking.

4. Multimedia tools
Thus far, we have made the case for a robust transcription system that can note in a
systematic and language-neutral way the morphological (in addition to a phonological)
level for discourse, whether a narrative from a single interlocutor, or dialogue among
two or more individuals. We have discussed both handwritten and computer-supported
symbol sets, sometimes for the same transcription tools. Let us make overt just a few
of the implied challenges and advantages of computer-supported tools:

⫺ Input: consistency of input; potential for automated correction insofar as legitimate sequences of characters can be characterized within the tools (cf. spell-checking);
note also that there is increasingly good capture from stylus input devices, which
might allow automated translation of manual coding into a standard representation
of symbols.
⫺ Searching, sorting, selecting: the potential for finding relative frequencies or simply
occurrence or co-occurrence of elements at various levels is much simplified when
the elements are machine readable, sortable, and searchable (see the sketch below).
⫺ Output: multiple varieties of output are possible, from screen views to printed and
dynamic media formats.
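As an illustration of the second point, here is a hedged sketch: once tiers are machine-readable, a co-occurrence query reduces to interval arithmetic. The tier contents and time values below are invented for illustration.

```python
# Invented example tiers: (label, start, end) in seconds.
glosses = [("GRANDFATHER", 0.0, 0.5), ("GIVE", 0.5, 1.1), ("BALL", 1.1, 1.5)]
nonmanuals = [("neg", 0.5, 1.2), ("conf-check", 1.4, 1.6)]

def co_occurrences(tier_a, tier_b):
    """Pairs of labels from two tiers whose time intervals overlap."""
    return [(a[0], b[0])
            for a in tier_a for b in tier_b
            if a[1] < b[2] and b[1] < a[2]]

print(co_occurrences(nonmanuals, glosses))
# [('neg', 'GIVE'), ('neg', 'BALL'), ('conf-check', 'BALL')]
```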

The catalog of the LDC (Linguistic Data Consortium of the University of Pennsylva-
nia) offers nearly 50 options of applications to use with linguistic data stored there,
including (to choose just a few) content-based retrieval from digital video, discourse
parsing, topic detecting and analysis, speaker identification, and part of speech tagging.
While the current catalog has corpora from over 60 languages (and at least one corpus of calls from a non-human species), it does not include data from ASL or other sign languages. However, one can easily imagine that the proper tools would make sign language data as easy to investigate as the LDC’s spoken language data are today.

There are, of course, all the disadvantages of computer-supported tools that are not specific to this domain, just a few of which are mentioned here:

⫺ These applications may be initially limited to one operating system, a small number
of fonts, or other criteria that make early prototyping and development possible on
a budget, but also may limit the audience of possible users to a small niche among
a specialized group.
⫺ As with all software serving a small audience, the costs of continuous updates and
improvements may prove prohibitive. Some tools which have been well-conceived
and well-executed may find themselves orphaned by economic factors.
⫺ The lack of standards in a new arena for software may cause a project to develop
an application or product which becomes obsolete because it does not conform to
a newly accepted standard. Some of the sign language transcription tools may fall
into this trap. One recent discussion on SLLING-L got into details of what the
consequences for SignWriting or HamNoSys would be in a world where UTF-8
becomes standard for email, web, and all other renderings of fonts.

A number of additional transcription features are desirable and either exist now or are about to be realized for individuals and laboratories devoted to sign language linguistic study.

4.1. Multitier coding capacity

At least two multi-modal annotation and analysis tools are available that serve the sign language linguistics community: SignStream and ELAN.

4.1.1. SignStream

The SignStream application was created largely at Boston University in collaboration with others, both computing specialists and sign language researchers, under funding
from the US National Science Foundation and other federal agencies. It is a database
tool specifically designed for managing sign language data in a multi-level transcription
system, keeping the relationships in time constant while allowing ever more granular
descriptions at each level of the coding. It displays time on a horizontal axis (items
occurring on the left prior to items occurring on the right). It permits viewing more
than one utterance at a time to allow side-by-side comparison of data. It has been
designed to handle time-based media other than audio and video.
The SignStream website (see section 6) gives May 2003 as the most recent release
of the product (version 2.2), with a promised version 3 on the way.

4.1.2. ELAN

Like SignStream, ELAN (formerly Eudico) is a linguistic annotation tool that creates
tiers for markup, can coordinate transcription at each tier for distinct attributes, and
can play back the video (or other) original recording along with the tiers. The tool is
being developed at the Max Planck Institute for Psycholinguistics (see Wittenburg et
al. 2006). ELAN is capable of coordinating up to four video sources, and of searching
based on temporal or structural constraints. It is being used both for sign language projects, as part of ECHO (European Cultural Heritage Online), and for other studies of linguistic behavior that need access to multi-modal phenomena. ELAN
also aims to deliver multimedia data over the internet with publicly available data

Fig. 43.5: Example of ELAN format (taken from: http://www.lat-mpi.eu/tools/elan/).

collections (see section 6 for the ECHO website and the website at which ELAN tools
are available). Figure 43.5 shows a screen shot from ELAN, showing part of an NGT
utterance. Note that the user is able to add and define tiers, delimit temporal spans,
and search at varying levels of specificity.
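Since ELAN stores its annotations in an XML format (.eaf), tier data can also be processed outside the tool itself. The following is a minimal sketch, assuming the EAF element names current at the time of writing; the file name and the tier name "RH-gloss" are hypothetical.

```python
import xml.etree.ElementTree as ET

def read_tier(eaf_path, tier_id):
    """Return (start_ms, end_ms, value) triples for one tier of an .eaf file."""
    root = ET.parse(eaf_path).getroot()
    # Time slots map symbolic ids to millisecond values.
    times = {slot.get("TIME_SLOT_ID"): int(slot.get("TIME_VALUE", "0"))
             for slot in root.iter("TIME_SLOT")}
    result = []
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            result.append((times[ann.get("TIME_SLOT_REF1")],
                           times[ann.get("TIME_SLOT_REF2")],
                           ann.findtext("ANNOTATION_VALUE", default="")))
    return result

# Hypothetical usage: read_tier("ngt_dialogue.eaf", "RH-gloss")
```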
A project comparing the sign languages of the Netherlands, Britain, and Sweden is
based at the Radboud University and the Max Planck Institute for Psycholinguistics,
both in Nijmegen: “Language as cultural heritage: a pilot project with sign languages”
(see section 6 for website). The project is part of the ECHO endeavor, and conventions
are being developed for transcription within the ELAN system for articulatory behav-
ior in manual stream and segmented areas of the face (brows, gaze, mouth movements,
as well as mouth movements on at least two levels). Again these datasets include
glossing and translation, but as yet there is no tier devoted to morphological structure.
(We are aware of several studies in progress using BTS with ELAN to provide a
morphological level for ASL and NGT data.)

4.2. Digital futures

Sign language data which have been collected and analyzed to date are recorded at a
constant frame rate. Tools specifically designed for sign languages may have an advan-
tage which has been hinted at in this chapter, and implied by this volume, namely that
the characteristics of human language in another modality may make us more able to
see and integrate analyses which have been ignored by spoken language researchers
working with tools developed in the context of written language. Sign languages, like
other non-written languages, bring our attention to the dynamic dimensions of commu-
nication phenomena in general.
Bigbee, Loehr, and Harper (2001) compare several existing tools (including at least
two targeted at the sign language analysis community). They comment on the ways that
SignStream can be adapted to track tiers of interest to spoken language researchers as
well (citing a study of intonation and gesture that reveals “complementary discourse
functions of the two modalities”). They conclude with a “tentative list of desired fea-
tures” for a next generation multi-modal annotation and analysis tool (reformatted
here from their Table 3: Desired Features):

⫺ video stream(s) time-aligned with annotation;
⫺ directly supports XML tagsets;
⫺ time-aligned audio waveform display;
⫺ acoustic analysis (e.g. pitch tracking) tools included;
⫺ direct annotation of video;
⫺ hide/view levels;
⫺ annotation of different levels;
⫺ API and/or modular open architecture;
⫺ music-score display;
⫺ automatic tagging facilities;
⫺ easy to navigate and mark start and stop frame of any video or audio segment;
⫺ user can select current audio track from multiple available audio tracks;
⫺ segment start and stop points include absolute time values (e.g. not just frames);
⫺ user can create explicit relationships or links across levels;
⫺ can specify levels and elements (attribute / values);
⫺ inclusion of graphics as an annotation level;
⫺ support for overlapping, embedding and hierarchical structures in annotation;
⫺ easy to annotate metadata (annotator, date, time, etc.) at any given level or segment;
⫺ some levels time-aligned, others are independent but aligned in terms of segment
start/stop times;
⫺ support for working with multiple synchronized video, audio, and vector ink me-
dia sources;
⫺ import/export all annotations;
⫺ cross platform execution;
⫺ query/search annotations.

Rohlfing et al. (2006) also provide a comparison of multi-modal annotation tools.


We can imagine that sign language data are being collected from blogs, from video
phone calls, and soon from mobile devices as well. Note that as data are increasingly
collected from digital originals, our transcriptions will need to account for algorithms
in the digital domain that are systematically enriching the signal or impoverishing it.
Consider the case of the Mobile-ASL development. This University of Washington
project is at present a proof of concept only, and not a product, but it is being devel-
oped with an eye to standards (in particular the H.264 video encoder). The researchers
are optimizing the signal to transmit with smart compression, showing fewer frames
from the person who is quiet, and more frames per second from the signer (Cavender
et al. 2006; Cherniavsky et al. 2007). They also work within the video standard to
recognize regions of the screen that transmit more information (hands, arms, face),
and ignore regions that are not contributing much (below the waist). Fingerspelling
requires more frames for intelligibility than most signs (especially on the small screen
of a mobile phone) and thus that region is given higher frame rate when fingerspelling
is detected. What other aspects of signing might need a richer signal?
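The allocation logic just described can be sketched as a simple policy; the thresholds and frame rates below are purely illustrative and not those of the MobileASL implementation.

```python
# Toy sketch of variable frame rate: spend frames where they carry information.
def choose_frame_rate(motion_level, fingerspelling_detected):
    """Pick a frame rate for the outgoing video of one conversant."""
    if fingerspelling_detected:
        return 15   # fingerspelling needs the most temporal detail
    if motion_level > 0.2:
        return 10   # active signing
    return 1        # conversant is currently 'quiet'

print(choose_frame_rate(0.6, False))  # 10
```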

5. Conclusion
In conclusion, the study of sign language linguistics has blossomed in recent years, in
a number of countries. Along with this growth have come tentative systems for repre-
senting sign languages, using a range of partial and mutually incompatible notational
and storage devices. International conferences and discussion lists only serve to empha-
size that the field is at a very early stage, compared with the long traditions in the linguistics, and specifically the transcription, of spoken languages. There are many good
minds in play, and much work to be done. It is fitting to return to Elinor Ochs’s seminal
1979 paper, “Transcription as Theory”, which provided the epigraph to our chapter.
Ochs was dealing with another sort of unwritten language ⫺ the communicative behav-
ior of children. She concluded her chapter, some 30 years ago, with the question, “Do
our data have a future?” (Ochs 1979, 72). We share her conclusion:

A greater awareness of transcription form can move the field in productive directions. Not
only will we be able to read much more off our own transcripts, we will be better equipped
to read the transcriptions of others. This, in turn, should better equip us to evaluate particu-
lar interpretations of data (i.e., transcribed behavior). Our data may have a future if we
give them the attention they deserve.

Acknowledgements: The authors acknowledge the kind assistance of Adam Frost, who
created the SignWriting transcription for this occasion, on the recommendation of Val-
erie Sutton, and of Rie Nishio, a graduate student at Hamburg University, who pro-
vided the HamNoSys transcription, on the recommendation of Thomas Hanke.

6. Literature and web resources


Baker, Anne/Bogaerde, Beppie van den/Woll, Bencie
2005 Methods and Procedures in Sign Language Acquisition Studies. In: Sign Language &
Linguistics 8, 7⫺58.
Baker-Shenk, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver
Spring, MD: TJ Publishers.
Bergman, Brita/Boyes-Braem, Penny/Hanke, Thomas/Pizzuto, Elena (eds.)
2001 Sign Transcription and Database Storage of Sign Information (Special Issue of Sign
Language & Linguistics 4(1/2)). Amsterdam: Benjamins.
Bergman, Brita/Wallin, Lars
2003 Noun and Verbal Classifiers in Swedish Sign Language. In: Emmorey, Karen (ed.),
Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erl-
baum, 35⫺52.
Bigbee, Tony/Loehr, Dan/Harper, Lisa
2001 Emerging Requirements for Multi-modal Annotation and Analysis Tools. In: Proceedings, Eurospeech 2001; Special Event: Existing and Future Corpora ⫺ Acoustic, Linguistic, and Multi-modal Requirements. [Available at: http://www.mitre.org/work/tech_papers/tech_papers_01/bigbee_emerging/index.html]
Cavender, Anna/Ladner, Richard E./Riskin, Eve A.
2006 Mobile ASL: Intelligibility of Sign Language Video as Constrained by Mobile Phone
Technology. In: Assets ’06: Proceedings of the 8th International ACM SIGACCESS Con-
ference on Computers and Accessibility.
Cherniavsky, Neva/Cavender, Anna C./Ladner, Richard E./Riskin, Eve A.
2007 Variable Frame Rate for Low Power Mobile Sign Language Communication. In: ACM
SIGACCESS Conference on Assistive Technologies. [Available at: http://dub.washington.edu/pubs/79]
Comrie, Bernard
1981 The Languages of the Soviet Union. Cambridge: Cambridge University Press.
Dudis, Paul
2004 Body Partitioning and Real-space Blends. In: Cognitive Linguistics 15, 223⫺238.
Emmorey, Karen (ed.)
2003 Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erl-
baum.
Engberg-Pedersen, Elisabeth
1993 Space in Danish Sign Language: The Semantics and Morphosyntax of the Use of Space
in a Visual Language. Hamburg: Signum.
Frishberg, Nancy
1975 Arbitrariness and Iconicity: Historical Change in American Sign Language. In: Lan-
guage 51, 696⫺719.
Frommer, Paul R./Finnegan, Edward
1994 Looking at Language: A Workbook in Elementary Linguistics. Fort Worth, TX: Har-
court Brace.
Grinevald, Colette/Seifart, Frank
2004 Noun Classes in African and Amazonian Languages: Towards a Comparison. In: Lin-
guistic Typology 8, 243⫺285.
Hoiting, Nini
2009 The Myth of Simplicity: Sign Language Acquisition by Dutch Deaf Toddlers. PhD Dis-
sertation, University of Groningen.
Hoiting, Nini/Slobin, Dan I.
2002 Transcription as a Tool for Understanding: The Berkeley Transcription System for Sign
Language Research (BTS). In: Morgan, Gary/Woll, Bencie (eds.), Directions in Sign
Language Acquisition. Amsterdam: Benjamins, 55⫺75.
Janzen, Terry
2008 Perspective Shifts in ASL Narratives: The Problem of Clause Structure. In: Tyler, An-
drea/Kim, Yiyoung/Takada, Mari (eds.), Language in the Context of Use. Berlin: Mou-
ton de Gruyter, 121⫺144.
Kim, Young-joo
1997 The Acquisition of Korean. In: Slobin, Dan I. (ed.), The Crosslinguistic Study of Lan-
guage Acquisition (Volume 4). Mahwah, NJ: Lawrence Erlbaum, 335⫺443.
Lehmann, Christian
1982 Directions for Interlinear Morphemic Translations. In: Folia Linguistica 16, 199⫺224.
Lehmann, Christian
2004 Interlinear Morphemic Glossing. In: Booij, Geert/Lehmann, Christian/Mugdan,
Joachim (eds.), Morphology/Morphologie: A Handbook on Inflection and Word Forma-
tion/Ein Handbuch zur Flexion und Wortbildung (Volume 2). Berlin: Mouton de Gruy-
ter, 1834⫺1857.
Liddell, Scott K.
1994 Tokens and Surrogates. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Per-
spectives on Sign Language Structure. Papers from the Fifth International Symposium
on Sign Language Research. Durham: ISLA, 105⫺119.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge
University Press.
Mandel, Mark A.
1993 ASCII-Stokoe Notation: A Computer-writeable Transliteration System for Stokoe
Notation of American Sign Language: http://www.speakeasy.org/~mamandel/ASCII-Stokoe.html.
Meir, Irit
1998 Syntactic-semantic Interaction in Israeli Sign Language Verbs: The Case of Backwards
Verbs. In: Sign Language & Linguistics 1, 1⫺33.
Morgan, Gary
2005 Transcription of Child Sign Language: A Focus on Narrative. In: Sign Language &
Linguistics 8, 117⫺128.
Morgan, Gary
2006 The Development of Narrative Skills in British Sign Language. In: Schick, Brenda/
Marschark, Marc/Spencer, Patricia E. (eds.), Advances in the Sign Language Develop-
ment of Deaf Children. Oxford: Oxford University Press, 314⫺343.
Newkirk, Don E.
1989a SignFont: Exercises. Bellevue, WA: Edmark Corporation.
Newkirk, Don E.
1989b SignFont: Handbook. Bellevue, WA: Edmark Corporation.
Newkirk, Don E.
1997 “Re: SignWriting and Computers”. Contribution to a Discussion on the Sign Language
Linguistics List (SLLING-L: SLLING-L@yalevm.ycc.yale.edu); Thu, 13 Feb 1997
09:13:05 -0800.
Ochs, Elinor
1979 Transcription as Theory. In: Ochs, Elinor/Schieffelin, Bambi B. (eds.), Developmental
Pragmatics. New York: Academic Press, 43⫺72.
Padden, Carol
1988 Interaction of Morphology and Syntax in American Sign Language. New York: Garland.
Padden, Carol
2004 Translating Veditz. In: Sign Language Studies 4, 244⫺260.
Pfau, Roland/Steinbach, Markus
2003 Optimal Reciprocals in German Sign Language. In: Sign Language & Linguistics 6,
3⫺42.
Raduzky, Elena
1992 Dizionario della Lingua Italiana dei Segni [Dictionary of Italian Sign Language]. Rome:
Edizioni Kappa.
Rathmann, Christian/Mathur, Gaurav
2002 Is Verb Agreement the Same Crossmodally? In: Meier, Richard P./Cormier, Kearsy/
Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages.
Cambridge: Cambridge University Press, 370⫺404.
Rathmann, Christian/Mathur, Gaurav
2005 Unexpressed Features of Verb Agreement in Signed Languages. In: Booij, Geert/Gue-
vara, Emiliano/Ralli, Angela(eds.), Morphology and Linguistic Typology, On-line Pro-
ceedings of the Fourth Mediterranean Morphology Meeting (MMM4). [Available at:
http://pubman.mpdl.mpg.de/pubman/item/escidoc:403906:5]
Rohlfing, Katharina et al.
2006 Comparison of Multimodal Annotation Tools ⫺ Workshop Report. In: Gesprächsforschung ⫺ Online-Zeitschrift zur Verbalen Interaktion 7, 99⫺123. [Available at: http://www.gespraechsforschung-ozs.de/heft2006/tb-rohlfing.pdf]
Schermer, Trude
2003 From Variant to Standard: An Overview of the Standardization Process of the Lexicon
of Sign Language of the Netherlands Over Two Decades. In: Sign Language Studies 3,
469⫺486.
Skant, Andrea/Okorn, Ingeborg/Bergmeister, Elisabeth/Dotter, Franz/Hilzensauer, Marlene/
Hobel, Manuela/Krammer, Klaudia/Orter, Reinhold/Unterberger, Natalie
2002 Negationsformen in der Österreichischen Gebärdensprache. In: Schulmeister, Rolf/
Reinitzer, Heimo (eds.), Progress in Sign Language Research: In Honor of Siegmund
Prillwitz. Hamburg: Signum, 163⫺185.
Slobin, Dan I.
2008 Breaking the Molds: Signed Languages and the Nature of Human Language. In: Sign
Language Studies 8, 114⫺130.
Slobin, Dan I./Hoiting, Nini/Anthony, Michelle/Biederman, Yael/Kuntze, Marlon/Lindert, Reyna/
Pyers, Jennie/Thumann, Helen/Weinberg, Amy
2001 Sign Language Transcription at the Level of Meaning Components: The Berkeley Tran-
scription System (BTS). In: Sign Language & Linguistics 4, 63⫺96.
Slobin, Dan I./Hoiting, Nini/Anthony, Michelle/Biederman, Yael/Kuntze, Marlon/Lindert, Reyna/
Pyers, Jennie/Thumann, Helen/Weinberg, Amy
2003 A Cognitive/Functional Perspective on the Acquisition of “Classifiers”. In: Emmorey,
Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ:
Lawrence Erlbaum, 271⫺296.
Stokoe, William
1978 Sign Language Structure. Silver Spring, MD: Linstok Press [reprinted from 1960].
Stokoe, William C./Casterline, Dorothy C./Croneberg, Carl G.
1965 A Dictionary of American Sign Language on Linguistic Principles. Washington, DC:
Gallaudet University Press. [revised edition, Silver Spring, MD: Linstok Press, 1976].
Supalla, Ted
2004 The Validity of the Gallaudet Lecture Films. In: Sign Language Studies 4, 261⫺292.
Taub, Sarah F.
2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cam-
bridge: Cambridge University Press.
Wittenburg, Peter/Brugman, Hennie/Russel, Albert/Klassmann, Alex/Sloetjes, Han
2006 ELAN: a Professional Framework for Multimodality Research. In: Proceedings of the
5th International Conference on Language Resources and Evaluation (LREC 2006), 1556⫺1559. [Available at: http://www.lat-mpi.eu/papers/papers-2006/elan-paper-final.pdf]
Wulf, Alyssa/Dudis, Paul
2005 Body Partitioning in ASL Metaphorical Blends. In: Sign Language Studies 5, 317⫺332.

Web resources:
BTS (Berkeley Transcription System): http://ihd.berkeley.edu/Slobin-Sign%20Language/%282001%29%20Slobin,%20Hoiting%20et%20al%20-%20Berkeley%20Transcription%20System%20%28BTS%29.pdf
CHILDES (Child Language Data Exchange System): http://childes.psy.cmu.edu
ECHO: http://echo.mpiwg-berlin.mpg.de/home
ELAN tools: http://www.lat-mpi.eu/tools/elan/
HamNoSys: http://www.sign-lang.uni-hamburg.de/Projects/HamNoSys.html.
“Language as cultural heritage: a pilot project with sign languages”; Radboud University and
Max Planck Institute for Psycholinguistics, Nijmegen: http://www.let.ru.nl/sign-lang/echo/.
Leipzig Glossing Rules (LGR): http://www.eva.mpg.de/lingua/resources/glossing-rules.php.
MobileASL: http://dub.washington.edu/projects/mobileasl
Morgan, Gary: Award Report “Exchanging Child Sign Language Data through Transcription”:
http://www.esrcsocietytoday.ac.uk/my-esrc/grants/RES-000-22-0446/read
SignStream: http://www.bu.edu/asllrp/SignStream/
SignWriting
teaching course: http://signwriting.org/lessons/lessonsw/lessonsweb.html
examples from various sign languages: www.signbank.org
ASL symbol cheat sheet by Cherie Wren: http://www.signwriting.org/archive/docs5/sw0498-ASLSymbolCheetSheet.pdf

Nancy Frishberg, San Carlos, California (USA)
Nini Hoiting, Groningen (The Netherlands)
Dan I. Slobin, Berkeley, California (USA)

44. Computer modelling


1. Introduction
2. Computational lexicography
3. Computer corpora for sign linguistics research and grammar modelling
4. Sign capture and recognition
5. Automated signing (synthesis)
6. Machine translation and animation
7. Social challenges of automated signing
8. Conclusion
9. Literature and web resources

Abstract
The development of computational technologies in sign language research is motivated by the goal of providing more information and services to deaf people. However, sign languages contain phenomena not seen in written/spoken languages; they therefore pose particular challenges to traditional computational approaches. In this chapter, we give
an overview of the different areas of computer-based technologies in this field. We briefly
describe some current systems, also addressing their limitations and pointing out further
motivation for the development of new systems.

1. Introduction
In this chapter, we will focus on the presentation of fundamental research and develop-
ment in computer-based technology, which open up new potential applications for sign
language communication and human-computer interaction. The possibilities have grown in recent years with rapid hardware development, more active linguistic re-
search, and exploitation of 3D graphics technologies, resulting in many different appli-
cations for sign languages such as multimedia dictionaries (VCom3D 2004 (see section 9
for website); Buttussi/Chittaro/Coppo 2007), teaching materials (Sagawa/Takeuchi
2002; Karpouzis et al. 2007), and machine translation systems in the ViSiCAST project
(Elliott et al. 2000, 2008) and from VCom3D (Sims 2000).
The structure of the chapter reflects the main areas of the field. Section 2 explains
differences between machine-readable dictionaries and lexicons and also mentions
tools and methodologies within computational lexicography. Section 3 explains the role
of electronic corpora for research and gives a short overview of existing corpus collec-
tions and their shortcomings and potential. Within section 3, we also describe annota-
tion tools and standards for the use of an electronic corpus. Section 4 introduces the
reader to sign language recognition techniques after a brief historical overview of the
field. Section 5 is about automated signing or synthesis. In this section, we also give an
example of how the lexicon and grammar can be modelled for computational purposes.
Section 6 briefly describes some machine translation systems for sign languages with
some aspects of generating animation. Last but not least, in section 7, we also mention
some social challenges of automated signing, which involve misunderstandings about
such research in both the hearing and deaf communities.
Some sections might seem rather technical for the general reader; however, our aim
is to give an overview of the complexity that is involved in this field and to motivate
the interested reader to further study the topic.

2. Computational lexicography

Computational lexicography is a discipline that is interconnected with linguistics and computer science. Lexicography ⫺ as the branch of applied linguistics concerned with
the design and construction of lexicons ⫺ can benefit from linguistic research by giving
an increasingly detailed description of the lexicon paradigmatically and syntagmatically
(refinement of valences, syntactic and semantic classes, collocational restrictions, etc.).
On the other hand, computational methods and tools are needed to automate diction-
ary construction and maintenance.
However, there is a considerable difference between dictionaries/lexicons for hu-
man consumption or for machine-use. Handke (1995) distinguishes machine-readable
dictionaries and machine-readable lexicons: the former are electronic forms of book
dictionaries, that is, machine-readable versions of published dictionaries for referencing
by a human (e.g. CD-ROM), whereas components of a natural language processing
system of a machine are called lexicons. The former rely heavily on the linguistic and
world knowledge of the user, which may not be suitable to computational processing
of the language. A lexicon for machine-use has to be explicit, which means it has to
contain a formal description of data, and has to be systematic and flexible (see also
following sections). Therefore, the term ‘Computational Lexicography’ mostly refers
to gathering lexical information for use by automated natural language processing sys-
tems, that is, developing lexicons for machine-use, but the term can also be extended
to the computational techniques in the development of dictionary databases for human use (Boguraev/Briscoe 1989).
Paper dictionaries provide static images and/or descriptions of signs. However, they
are not the best solutions, as it is not easy to represent movements on paper. Therefore,
several researchers have proposed multimedia dictionaries for sign languages of spe-
cific countries (Wilcox et al. 1994; Sims 2000; BritishSignLanguage.com 2000 (see sec-
tion 9 for website); amongst others), but there are only a few proposals for multi-
language dictionaries. Moreover, current multimedia dictionaries suffer from serious
limitations. Most of them allow only for a word-to-sign search, while only a few of
them exploit sign parameters (i.e., the basic units of signs: handshape, orientation,
location, and movement). Therefore, Buttussi, Chittaro, and Coppo (2007) propose a
sign-to-word and sign-to-sign search in an online international sign language dictionary,
which exploits Web3D technologies. The user chooses the parameters (or ‘cheremes’;
cf. Stokoe/Casterline/Croneberg 1976), and the H-Anim humanoid’s posture or movement is updated to preview the resulting sign. With improving sign recognition
techniques (see section 4), the sign search in dictionaries might become even more
user-friendly.
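To make concrete what an 'explicit' machine-usable entry might look like, here is a minimal sketch of a lexicon keyed by the four manual parameters, supporting the kind of sign-to-word lookup just described. The parameter labels and entries are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignForm:
    """Explicit formal description of a sign's manual parameters."""
    handshape: str
    orientation: str
    location: str
    movement: str

# Invented entries; a real lexicon would use a controlled vocabulary
# (e.g. HamNoSys symbols) for each parameter.
LEXICON = {
    SignForm("flat-B", "palm-up", "neutral-space", "arc-forward"): "give",
    SignForm("index-G", "palm-down", "temple", "short-contact"): "know",
}

def sign_to_word(form):
    return LEXICON.get(form, "<not in lexicon>")

print(sign_to_word(SignForm("flat-B", "palm-up", "neutral-space", "arc-forward")))
# prints: give
```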
On the other hand, the function and use of a machine readable lexicon and its
structure depends on its general application area, that is, whether it is interfaced with
other modules such as a parser or a morphological component, and whether it is used
interactively. Therefore, it is often the case that such purpose built lexicons cannot be
generalised to other systems (Boguraev/Briscoe 1989). An example of a lexicon for
generation purposes will be discussed in section 5.3.
Computational methods have given rise to new tools and methodologies for build-
ing computational lexicons (iLex, as an example of such a tool, is described in section 3.2).
In order to avoid judgements based on intuitions of linguists, evidence of the lexical
behaviour of signs has to be found by analysis of corpora/unrestricted text (Ooi 1998).
Statistical analysis can be used for checking consistency, detecting categories, word and
collocation frequencies, links between grammar and the lexicon, etc. In order to gain
a greater linguistic understanding, many researchers advocate annotation (a structural
mark-up or tagging) of the corpus. In section 3, we discuss related issues of corpus
annotation further.
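As a minimal sketch of such corpus-driven evidence, collocation frequencies can be computed directly over gloss-annotated utterances; the toy corpus below is invented for illustration.

```python
from collections import Counter

# Invented gloss-annotated utterances.
corpus = [
    ["INDEX-1", "BOOK", "GIVE", "INDEX-2"],
    ["BOOK", "GIVE", "INDEX-3"],
]

# Count adjacent gloss pairs (bigrams) as simple collocation evidence.
bigrams = Counter((a, b)
                  for utterance in corpus
                  for a, b in zip(utterance, utterance[1:]))

print(bigrams.most_common(1))  # [(('BOOK', 'GIVE'), 2)]
```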

3. Computer corpora for sign linguistics research and grammar modelling

A definition of corpus provided by Sinclair (1996) in the framework of the EAGLES
project (see section 9 for website) runs as follows: “A corpus is a collection of pieces
of language that are selected and ordered according to explicit linguistic criteria in
order to be used as a sample of the language”. Atkins et al. (1991) differentiate a
corpus from a generic library of electronic texts as a well-defined subset that is de-
signed according to specific requirements to serve specific purposes. Furthermore, the
definition of a computer corpus crucially states that “[a] computer corpus is a corpus
which is encoded in a standardised and homogenous way for open-ended retrieval
tasks […]” (Sinclair 1996).
An electronic corpus is of the utmost importance for the creation of electronic resources (grammars and dictionaries) for any natural language. Some methodological
and technical challenges are inherent to the nature of sign languages themselves. Lan-
guages without a written form (especially sign languages) lack even the most basic
textual input for morphological and phrasal level analysis. So even at these levels, any
leverage that corpora and statistical techniques may give is unavailable.
The significance of sign language features has been characterised informally within
sign language linguistics; however, more precise definitions and formulations of such
phenomena are required in order to build computational models that can lead to com-
puter-based facilities for deaf people. For that purpose, research needs to employ a
number of data collection and evaluation techniques. Further, a substantial corpus is
needed to drive automatic recognition and generation, serving as a target form to which
synthetic signing should aspire. The synthetically generated signing can also be re-
viewed with the help of the corpus to determine if inadequacies result from grammati-
cal formulation or from graphical realisation.
Several groups have worked on digital sign language corpora (see section 9 for
a list of websites), but most of them have focused on linguistic aspects rather than
computational processing (see also section 3.1 on coding conventions and re-usability).
These corpora are also either too small or too general for natural language processing
tasks, and are therefore unsuitable for training a statistical system or fail to provide
sufficiently fine-grained details for driving an avatar. While linguists try to obtain un-
derstanding of how signing is used (coarticulation, sentence boundaries, role shift, etc.),
computer scientists are interested in data for driving an avatar or for automatic recog-
nition of signing (i.e. data from tracking movements, facial expressions, timing, etc.).
Examples of such linguistic or recognition-focussed corpora are Neidle (2002, 2007)
for the former, and Bowden (2004) for the latter.
For multi-lingual research and applications, parallel corpora are basic elements, as
in the case of translation-memory applications and pattern-matching approaches to
machine translation. However, parallel corpus collection for sign languages has so far
been undertaken only on a small scale or for interpreters, and not for semi-spontaneous
signing by native signers. Most available sign language corpora contain simple stories
performed by a single signer. The non-written nature of sign language, as well as the
risk of influence from written majority languages, complicates the collection of a paral-
lel corpus.
The Dicta-Sign project (Efthimiou et al. 2009) intends to construct the first parallel
corpus to support future sign language research. The corpus will inevitably cover a
limited domain but will allow for linguistic comparison across sign languages, support
for multilingual recognition and generation, and research into (shallow) translation
between sign languages. The establishment of substantial electronic corpora from
which the required information can be derived could significantly improve the produc-
tivity of sign language researchers as well.
The standard approach for parallel corpora is to translate a source text available in
one language into all the other languages and then align the resulting texts. For sign
languages, however, this approach would lead to language use not considered natural
by most signers. Instead, Dicta-Sign works with native or near-native signers interacting
in pairs in different communication settings, thus coming as close as possible to natural
conversation (given the necessity to operate in a studio and domain restrictions). The
approach taken is to design elicitation tasks that result in semantically close answers
without predetermining the choice of vocabulary and grammar.

3.1. Annotation coding standards, conventions, and re-usability

In 1998⫺2000, a survey of sign language resources worldwide conducted by the Intersign network (funded by the European Science Foundation) showed that most sign
language corpora in existence at that time were small-scale and had been collected
with a single purpose in mind. In many cases, the only property that sign language
transcriptions had in common was that they used some sort of glosses. But even there,
glossing conventions differed from research group to research group and over time
also within groups, and coding of form was often limited to a bare minimum. Re-use
of corpora was an exception, as was sharing the data with other institutions. No com-
mon coding standards were followed, nor had coding conventions been documented
in every case.
This situation made it impossible to exchange data between projects, thus preventing researchers from building on others’ experiences, which meant that similar work had to be repeated for new projects, slowing down progress. Another problem
emerging with the lack of standards is that consistency was not guaranteed even within
corpus projects. Standardization would improve methodology in research and would
ease collaboration between institutes and projects. Consistency in use of ID-glosses,
tiers, and field values (see section 3.2) would make the use of a corpus more produc-
tive: the corpus would become machine-readable, that is, it would be searchable and
automatically analysable.

3.2. Annotation tools and technical issues

Annotation tools used for sign language corpora, such as AnCoLin (Braffort et al.
2004), Anvil (Kipp 2001), ELAN (Wittenburg et al. 2006), iLex (Hanke 2002), Sign-
Stream (Neidle 2001), and syncWRITER (Hanke 2001), define a temporal segmenta-
tion of a video and annotate time intervals in a multitude of tiers for transcription.
Tiers usually hold text values, and linguistic modelling often only consists in restricting
tags to take values from a user-defined list. Some tools go slightly beyond this basic
structure by allowing complex values and database reference tags in addition to text
tags (iLex for type/token matching), image tags and selection of poster frames from
the video instead of text tags (syncWRITER), or relations between tiers (e.g. to ex-
clude co-occurrence of a one-handed and a two-handed token in ELAN and others).
Some tools support special input methods for notation system fonts (e.g. iLex for Ham-
NoSys (Hanke 2004)). However, an equivalent to the different graphical representa-
tions of sound as found in spoken language tools is not available. Also, since current
tools do not feature any kind of automation, the tagging process is completely manual.
Manual annotation for long video sequences becomes error-prone and time-consuming,
with the quality depending on the annotator’s knowledge and skills. The Dicta-Sign
project therefore proposes a way to integrate automatic video processing together with
the annotator’s knowledge.
Moreover, technological limitations of the annotation tools have often made it diffi-
cult to use data synchronised with video independent of the tools originally used
(Hanke 2001). Where standard tools have been used, synchronisation with video was
missing, making verification of the transcription very difficult. This situation has
changed somewhat in the last years as sign language researchers have started to use
more open tools with the greater availability of corpus tools for multimodal data.
Some projects such as the EC-funded ECHO (2002⫺2004) and the US National
Center for Sign Language and Gesture Resources at Boston University (1999⫺2002)
have established corpora each with a common set of conventions. Tools such as iLex
(Hanke 2002) specifically address data consistency issues caused by the lack of a writ-
ing system with a generally accepted orthography. The Nijmegen Metadata Workshop
2003 (Crasborn/Hanke 2003) defined common metadata standards for sign language
corpora but to date few studies adhere to these. For most of the tools currently in use
for sign language corpus collection, data exchange on a textual level is no longer a
problem. The problem of missing coding conventions, however, is still a real one.
One of the most widely used annotation tools is ELAN, which was originally created
to annotate text for audio and video. Playing video files on a time line is typical in
such programmes: the user assigns values to time segments. Annotations of various
grammatical levels are linked to the time tokens. Annotations are grouped in tiers
created by the user, which are layers of statistically analysable information represented
in a hierarchical fashion. However, glosses are text strings just like any other annota-
tion or commentary (see also chapter 43, Transcription).
iLex is a transcription database for sign language combined with a lexical database.
At the heart of transcribing with iLex is the type-token matching approach. The user
identifies candidates from the type to be related to a token by (partial) glosses, form
descriptions in HamNoSys, or meaning attributions. This method allows automatic pro-
duction of a dictionary (by lemmatisation) within a reasonable time. It also supports
consistent glossing by being linked to a lexical database that handles glosses as names
of database entities.
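A hedged sketch of the type-token idea: tokens point at lexical types stored once as database entities, so every token's gloss stays consistent with the lexicon, and token counts per type fall out of the same structure. The names and data below are invented; iLex's actual schema differs.

```python
# Invented lexical database: types store the gloss (and form) once.
types = {101: {"gloss": "GIVE", "hamnosys": "..."},
         102: {"gloss": "BALL", "hamnosys": "..."}}
# Tokens in the transcript only reference a type id plus timing.
tokens = [{"type_id": 101, "start_ms": 500, "end_ms": 1100},
          {"type_id": 102, "start_ms": 1100, "end_ms": 1500}]

def lemmatised_dictionary(tokens, types):
    """Count tokens per type: raw material for an automatic dictionary."""
    counts = {}
    for tok in tokens:
        gloss = types[tok["type_id"]]["gloss"]
        counts[gloss] = counts.get(gloss, 0) + 1
    return counts

print(lemmatised_dictionary(tokens, types))  # {'GIVE': 1, 'BALL': 1}
```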
SignStream (see also chapter 43, Transcription) maintains a database consisting of
a collection of utterances, each of which associates a segment of video with a fine-
grained multi-level transcription of that video. A database may incorporate utterances
pointing to one or more movie files. SignStream allows the user to enter data in a
variety of fields, such that the start and end points of each data item are aligned to
specific frames in the associated video. A large set of fields and values is provided;
however, the user may create new fields or values or edit the existing set. Data may
be entered in one of several intuitive ways, including typing text, drawing lines, and
selecting values from menus. It is possible to display up to four different synchronised
video files, in separate windows, for each utterance. It is also possible to view distinct
utterances (from one or more SignStream databases) on screen simultaneously.
Anvil is a tool for the annotation of audio-visual material containing multimodal
dialogue. The multiple layers are freely definable by inserting time-anchored elements
with typed attribute-value pairs. Anvil is highly generic, platform-independent, XML-
based, and fitted with an intuitive graphical interface. For project integration, Anvil
offers the import of speech transcription and export of text and table data for further
statistical processing. While not designed specifically to handle sign language, the capa-
bilities for handling multimodal media makes it a suitable tool for some signing applica-
tions.

4. Sign capture and recognition


Computer graphics research on sign language began in the 1980s. Sign capture and
recognition work focussed initially on motion capture of manual signing (section 4.1).
Later approaches analyse signing by using much less intrusive, video-based techniques.
Current research addresses sign capture of non-manual aspects and the use of manual
and non-manual information in combination (section 4.2).

4.1. Motion capture of manual signs

Following a period of more active sign language research, Loomis, Poizner, and Bellugi
(1983) introduced an interactive computer graphic system for analysis and modelling
of sign language movement, which was able to extract grammatical information from
changes in the movement and spatial contouring of the hands and arms. The recognised
signs were presented by animating a ‘skeleton’ (see Section 5.1). The first multimedia
sign language dictionary for American Sign Language (ASL) was proposed by Wilcox
et al. (1994), using videos for sign language animations. Since a 2D image may be
ambiguous, a preliminary 3D arm model for sign language animation was proposed by
Gibet (1994), but her model did not have enough joints to be suitable for signing. In
2002, Ryan Patterson developed a simple glove which sensed hand movements and
transmitted the data to a device that displayed the fingerspelled text on a screen.
CyberGloves have a larger repertoire of sensors and are more practical for capturing
the full range of signs (see section 9 for Patterson glove and CyberGlove websites).
There is a range of motion capturing systems that have been applied to capture sign
language, including complex systems with body suits, data-gloves, and headgear that
allow for the collection of data on body movements, hand movements, and facial ex-
pressions. These systems can be intrusive and cumbersome to use but, after some post-
processing, provide reliable and accurate data on signing. The TESSA project (Cox et
al. 2002) was based on this technology: the signer’s hand, mouth, and body movements
were captured and stored, and the data were then used to animate the avatar when
needed (see section 9 for website).

4.2. Video-based recognition

4.2.1. Manual aspects

A less intrusive, but computationally much more challenging approach is to process images from video cameras to identify signs. Starner (1996) developed a camera-based
system that required the signer to wear two different coloured gloves, but in later
versions no gloves were required. The image data processed by computer vision sys-
tems can take many forms, such as video sequences or views from multiple cameras.
There has been extensive work on the recognition of one-handed fingerspelling (e.g.
Bowden/Sahardi 2002; Lockton/Fitzgibbon 2002), although this is a small subset of the
overall problem. For word-level sign recognition, the most successful methods to date
have used devices such as data-gloves and electromagnetic/optical tracking, rather than
monocular image sequences, and have achieved lexical sizes as high as 250 base signs.
However, vision approaches to recognition have typically been limited to around
50 signs and even this has required a heavily constrained artificial grammar on the
structure of the sentences (Starner/Pentland 1995; Vogler/Metaxas 1998).
The application of statistical machine learning approaches based on Hidden Markov
Models (HMMs) has been very successful in speech recognition research. Adopting a
similar approach, much sign language recognition research is based on extracting vec-
tors of relevant visual features from the image and attempting to fit HMMs (Starner/
Pentland 1995; Vogler/Metaxas 1998; Kraiss 2006). To cover the natural variation in
events and effects of co-articulation, large amounts of data are required. These HMM
approaches working on 20⫺50 signs typically required 40⫺100 individual training ex-
amples of each sign.
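A toy sketch of the approach: one small discrete HMM per sign, with the forward algorithm scoring an observed feature sequence and the highest-scoring model winning. Real systems use continuous visual features and trained parameters; the two-state models and the symbol inventory below are invented for illustration.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward algorithm: log P(obs | HMM with initial pi, transitions A, emissions B)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

# Observation symbols: 0 = hand low, 1 = hand mid, 2 = hand high.
models = {
    "GIVE": (np.array([1.0, 0.0]),                # start in 'low' state
             np.array([[0.6, 0.4],                # transitions: low -> high
                       [0.0, 1.0]]),
             np.array([[0.7, 0.3, 0.0],           # emissions of 'low' state
                       [0.0, 0.3, 0.7]])),        # emissions of 'high' state
    "WAIT": (np.array([1.0, 0.0]),
             np.array([[0.9, 0.1],
                       [0.1, 0.9]]),
             np.array([[0.4, 0.5, 0.1],
                       [0.4, 0.5, 0.1]])),
}

obs = [0, 1, 2, 2]  # hand rises and stays high
best = max(models, key=lambda sign: log_likelihood(obs, *models[sign]))
print(best)  # GIVE
```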
An alternative approach based on classification using morphological features has
achieved very high recognition rates on a 164 sign lexicon with as little as a single
training example per sign. No artificial grammar was used in this approach, which has
been applied to two European sign languages, British Sign Language (BSL) and Sign
Language of the Netherlands (NGT) (Bowden et al. 2004). The classification architec-
ture is centred around a linguistic model of a sign rather than a HMM. A symbolic
description is based upon linguistically significant parameters for handshape, move-
ment, orientation, and location, similar to components used in a HamNoSys descrip-
tion.
In the Dicta-Sign project (Efthimiou et al. 2009), these techniques are extended to
larger lexicon recognition, from isolated sign recognition to continuous sign recognition
for four national sign languages, the aim being to improve the accuracy further through
the addition of natural sign language grammar and linguistic knowledge. Crucially, the
project will also take into account non-manual aspects of signing which have largely
been ignored in earlier approaches to sign language recognition (see section 4.2.2).
Although it is acceptable to use intrusive motion capture equipment where highly
accurate sign capture is needed, video-based techniques are more appropriate for cap-
ture of signing by general users. Accurate analysis of signs depends on information
about the 3D position of the arms and hands (depth information). While it is difficult
to extract 3D information from monocular video input, the Kinect peripheral for the
Microsoft Xbox 360 is a low-cost device that provides accurate real-time 3D information
on the position of a user’s arms, though less information on handshape. Experimental
sign recognition systems have been developed for a limited range of gestures, and it is
to be expected that more comprehensive systems using Kinect will develop rapidly.
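
To illustrate how even such sparse 3D data can support gesture classification, the sketch below matches a query trajectory of 3D hand positions (for instance, per-frame wrist coordinates from the Kinect skeleton) against stored templates using dynamic time warping; this is a generic nearest-template method given for exposition, not the algorithm of any particular system discussed here.

import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories,
    each an array of shape (n_frames, 3) holding x, y, z positions."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_gesture(query, templates):
    """templates maps a gesture label to one recorded trajectory."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

DTW absorbs differences in signing speed, which is one reason such template methods work at all for small gesture sets; handshape, as noted above, would require additional features.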

4.2.2. Non-manual aspects

Early work on automatic facial expression recognition by Ekman, Friesen, and Hager
(1978) introduced the Facial Action Coding System (FACS). FACS provided a prototype of the basic human expressions and allowed researchers to study facial expression
based on an anatomical analysis of facial movements. A movement of one or more
muscles in the face is called an action unit (AU), and all facial expressions can then
be described by a combination of one or more of 44 AUs. Viola and Jones (2004) built
a fast and reliable face detector using a ‘boosting’ technique that improves accuracy by successively training classifiers to concentrate on difficult cases. Wang et al. (2004) extended this
technique to facial expression recognition by building separate classifiers of features
for each expression.
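
The one-classifier-per-expression idea can be sketched as follows, using scikit-learn's AdaBoost implementation as a stand-in for the boosting described above; the feature vectors (e.g. AU activations or image descriptors) and the labels are assumed to come from an upstream face detector and are placeholders here.

from sklearn.ensemble import AdaBoostClassifier
import numpy as np

def train_expression_detectors(X, labels):
    """Train one boosted binary detector per expression (one-vs-rest).
    X: (n_samples, n_features) feature matrix; labels: expression names."""
    detectors = {}
    for expression in set(labels):
        y = np.array([1 if l == expression else 0 for l in labels])
        detectors[expression] = AdaBoostClassifier(n_estimators=100).fit(X, y)
    return detectors

def detect_expression(detectors, features):
    """Return the expression whose detector is most confident."""
    return max(detectors,
               key=lambda e: detectors[e].predict_proba([features])[0][1])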
Sign language is inherently multi-modal since information is conveyed through
many articulators acting concurrently. In Dicta-Sign (Efthimiou et al. 2009), the com-
bined use of manual aspects of signs (e.g. handshapes, movement), non-manual aspects
(e.g. facial expressions, eye gaze, body motion), and possibly lip-reading is treated as
a problem in fusion of multiple sign modalities. Extraction of 3D information is simpli-
fied by the use of binocular video cameras for data recording. In other pattern recogni-
tion applications, combination of multiple information sources has been shown to be
beneficial, e.g. sign recognition (Windridge/Bowden 2004) and audio-visual speech rec-
ognition (Potamianos et al. 2003). The key observation is that combining complemen-
tary data sources leads to better recognition performance than is possible using the
component sources alone (Kittler et al. 1998).
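
A minimal late-fusion sketch in the spirit of the combination rules analysed by Kittler et al. (1998): each modality-specific classifier outputs a posterior distribution over the candidate signs, and the distributions are combined by the product or sum rule; the two-modality example is invented for illustration.

import numpy as np

def fuse_posteriors(posteriors, rule="product"):
    """Combine per-modality class posteriors (each row sums to 1).
    posteriors: array-like of shape (n_modalities, n_classes)."""
    P = np.asarray(posteriors, dtype=float)
    fused = P.prod(axis=0) if rule == "product" else P.sum(axis=0)
    return fused / fused.sum()  # renormalise to a distribution

# e.g. a manual-features classifier and a facial-expression classifier:
manual = [0.60, 0.30, 0.10]
facial = [0.40, 0.15, 0.45]
best_sign = fuse_posteriors([manual, facial]).argmax()

The product rule rewards hypotheses that are plausible in every modality at once, which is the intuition behind the observation cited above.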

5. Automated signing (synthesis)


With high-bandwidth networks becoming widely available, it is practical to
use video technology to display fixed sign language content. However, where sign se-
quences are prepared automatically by computer-based techniques, an alternative is to
use 3D computer graphics technology and present signing through a virtual human
character or ‘avatar’. The displayed signing can be based on the smoothed concatena-
tion of motion-captured data, as with TESSA (Cox et al. 2002), or can be synthesised
from a representation in a sign language gesture notation (Kennaway 2002).

5.1. Virtual signing using 3D avatars

The standard approach to avatar animation involves defining a ‘skeleton’ that closely
copies the structure of the human skeleton, as in the H-Anim standard. A 3D ‘mesh’
encloses the skeleton and a ‘texture’ applied to the mesh gives the appearance of the
skin and clothing of the character. Points on the mesh are associated with segments of
the skeleton so that when the bones of the skeleton are moved and rotated, the mesh
is distorted appropriately, giving the appearance of a naturally moving character. Ex-
pressions on the face are handled specially, using ‘morph targets’ which relocate points
on the facial mesh so that the face takes on a target expression. By varying the offsets
of the points from their location on a neutral face towards the location in the morph
target, an expression can be made to appear and then fade away.
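
The morph-target scheme amounts to simple vertex arithmetic, as the following sketch shows; the numpy arrays stand in for the facial mesh, and real avatar engines would perform the same blend per frame, typically on the GPU.

import numpy as np

def apply_morphs(neutral, targets, weights):
    """Blend a neutral face mesh towards one or more morph targets.
    neutral: (n_vertices, 3) array; targets and weights: parallel lists."""
    face = neutral.astype(float).copy()
    for target, weight in zip(targets, weights):
        face += weight * (target - neutral)  # offset towards the target pose
    return face

Ramping a weight from 0 up to 1 and back down again is exactly the ‘appear and then fade away’ behaviour described above.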
Animation data for an avatar therefore takes the form of sets of parameters for the
bones and facial morphs, for each frame or animation time-step. Animation data can be derived from motion capture or from conventional animation techniques involving posing the avatar skeleton by hand. For synthetic animation, the location of the hands is
calculated relative to the body and the technique of inverse kinematics is used to
compute the positions of the arms and elbows. For signing applications, a more detailed
skeleton may be required, paying attention to the scope for articulating the hands, and
good quality facial animation must be supported. The specification of an avatar will
include information about key locations on the body used in signing (Jennings et al.
2010).
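
In the simplest planar case, inverse kinematics for an arm reduces to solving a two-link triangle: given the shoulder-to-target distance and the upper-arm and forearm lengths, the elbow bend follows from the law of cosines. The sketch below is this textbook two-bone solver, not the solver of any particular avatar system.

import math

def two_bone_ik(target_dist, upper_len, fore_len):
    """Return (shoulder_offset, elbow_bend) in radians placing the wrist
    at target_dist from the shoulder, within the plane of the arm."""
    d = min(target_dist, upper_len + fore_len - 1e-6)  # clamp to reachable
    cos_gamma = (upper_len**2 + fore_len**2 - d**2) / (2 * upper_len * fore_len)
    elbow_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_gamma)))
    cos_alpha = (upper_len**2 + d**2 - fore_len**2) / (2 * upper_len * d)
    shoulder_offset = math.acos(max(-1.0, min(1.0, cos_alpha)))
    return shoulder_offset, elbow_bend

A production solver must additionally choose the elbow's swivel angle around the shoulder-to-wrist axis, which for signing is constrained by naturalness rather than by the target position alone.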
A number of synthetic sign language animation systems have been developed over
the past decade or so:

⫺ in the ViSiCAST and eSIGN projects at the University of East Anglia (Elliott et
al. 2000, 2008; see section 9 for website);
⫺ at the Chinese Academy of Sciences, whose system also includes recognition tech-
nology (Chen et al. 2002, 2003);
⫺ in the DePaul University ASL Project (Sedgwick et al. 2001);
⫺ in the South African SASL-MT Project (Van Zijl/Combrink 2006);
⫺ for Japanese Sign Language (Morimoto et al. 2004);
⫺ the Thetos translation system for Polish Sign Language (Francik/Fabian 2002;
Suszczanska et al. 2002);
⫺ VCom3D (Sims 2000) has for some years marketed Sign Smith Studio, a signing
animation system, which was originally implemented in VRML (Virtual Reality
Modelling Language) but now uses proprietary software.

Applications include sign language instruction, educational materials, communication tools, and presentation of sign language on websites.

5.2. Computational techniques to generate signing with avatars

Since natural sign language requires extensive parameterisation of base signs for location, direction of movement, and classifier handshapes, it is too restrictive to base synthesis on a fixed database of signs. One approach is to create a linguistic resource of signs
via motion captured data collection and to use machine learning and computational
techniques to model the movement and to produce natural looking sign language (Lu
2010). This approach echoes the use of sampled natural speech in the most successful
speech synthesis systems for hearing people.
The alternative is to develop a sign language grammar to support synthesis and
visual realisation by a virtual human avatar given a phonetic-level description of the
required sign sequence. Speech technology exploits phonological properties of spoken
words to develop speech synthesis tools for unrestricted text input. In the case of sign
languages, a similar approach is being explored in order to generate signs not from fixed video recordings, but rather by composing the phonological components of signs.
During the production of synthesised sign phrases, morphemes with grammatical
information may be generated in a cumulative way to parameterise a base sign (e.g.
three-place predicate constructions) and/or simultaneously with base morphemes. In
the latter case, they are articulated by means of non-manual signals, in parallel with
the structural head sign performed by the manual articulatory devices, resulting in a
non-linear construction that conveys the intended linguistic message.
Sign language synthesis is heavily dependent on (i) the natural language knowledge
that is coded in a lexicon of annotated signs, and (ii) a set of rules that allows
structuring of core grammar phenomena, making extensive use of feature properties
and structuring options. This is necessary in order to guarantee the linguistic adequacy
of the signing performed. Computer models require precise formulation of language
characteristics, which current sign language linguistics often does not provide. One of
the main objectives is a model that can be used to analyse and generate natural signing.
But with signing, it is difficult to verify that our notations and descriptions are ad-
equate ⫺ hence the value of an animation system to verify transcriptions and synthe-
sised signing, confirming (or not) that they capture the essence of the signing.
In the following sections, we briefly discuss some examples of modelling the sign language lexicon and grammar which support synthetic generation and visual realisation by avatars. We describe one model in more detail, and in section 5.4.2 we provide an example of the generation process.

5.3. Modelling the lexicon

In section 2, we mentioned that sign search in dictionaries might become even more
user-friendly with the help of computer technologies. However, such dictionaries are
not fine-grained enough for synthetic generation of signing by an avatar; therefore, a more formal description of the data is required for building machine-readable lexicons.
Modelling of the lexicon is influenced by the choice of the phonetic description
model. Filhol (2008) challenges traditional, so-called parametric approaches as they
cannot address underspecification, overspecification, and iconicity of the sign. In con-
trast to traditional systems, like Stokoe’s (1976) system and HamNoSys (Prillwitz et al.
1989), he suggests a temporal representation based on Liddell and Johnson’s (1989)
descriptions, which uses the three traditional manual parameters (handshape, move-
ment, location) but defines timing units in which those parameters hold. Speers’ (2001)
work is also based on that theory. The ongoing Dicta-Sign project (Efthimiou et al.
2009) is looking to extend the HamNoSys/SiGML system with such temporal units.
In the following, we will discuss with the help of an example how a lexicon for
machine use can be constructed for synthetic sign generation purposes. The ViSiCAST
HPSG (Head-driven Phrase Structure Grammar; Pollard/Sag 1994) feature structure,
which is based on types and type hierarchies (Marshall/Safar 2004), is an example of
an approach using such a (parametric) lexicon (see section 5.4 for details on HPSG).
The type word is the feature structure for an individual sign, and is subclassified as
verb, noun, or adjective. Verb is further subclassified to distinguish fixed, directional
(parameterised by start/end positions), and manipulative (parameterised by a proform
classifier handshape) verbs. Combinations of these types are permitted, for example
‘take’ is a directional manipulative verb (see example (2) below). Such a lexicon is
aimed at a fine-grained level of detail, containing all the information about entries that is needed for generation above word level (and could possibly contribute to the analysis of continuous signing as well).
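
Outside the actual ALE/Prolog implementation, such a typed entry can be rendered loosely in Python as follows; the attribute names echo the description above, but the encoding itself, including the placeholder handshape value, is invented for illustration.

from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class PhonFeatures:
    handshape: Optional[str] = None        # filled from the object's classifier
    palm_orientation: Optional[str] = None
    efd: Optional[str] = None              # extended finger direction
    start_location: Optional[str] = None
    end_location: Optional[str] = None
    movement: Optional[str] = None

@dataclass
class SignEntry:
    gloss: str
    sign_type: Tuple[str, ...]             # e.g. ("directional", "manipulative")
    phon: PhonFeatures = field(default_factory=PhonFeatures)

take = SignEntry("take", ("directional", "manipulative"))   # slots left open
have = SignEntry("have", ("fixed",),
                 PhonFeatures(handshape="hamfist"))          # fully specified

The None slots of take correspond to the uninstantiated symbols in example (2) below.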
The left-hand side (LHS) of an HPSG entry (to the left of the arrow in examples (1) and (2)) is either a list of HamNoSys symbol names (in the (a)-examples) or of HamNoSys transcription symbols (in the (b)-examples) for manuals and non-manuals, instead of a word. Part of the entry is the phonetic transcription of the mouthing, for which SAMPA, a machine-readable phonetic alphabet, was applied (for instance, in (1a), the SAMPA symbol “{” corresponds to the IPA symbol “æ”). On the right-hand side of the arrow (RHS), the grammatical information, which will be described in more detail below, is found.

(1) a. The entry have with HamNoSys symbol names of SiGML (Signing Ges-
ture Markup Language; Elliott et al. 2010), which is used in the lexicon:

b. The same with HamNoSys transcription symbols:

(2) a. The entry take with SiGML names:

b. The same with HamNoSys symbols:

In example (2), the type of the sign (i.e. directional and capable of incorporating classifiers) is reflected by the fact that many symbols are left uninstantiated; these are represented as placeholders for SiGML symbol names or HamNoSys symbols beginning with capital letters in (2ab). This contrasts with the representation of a fixed sign,
as, for instance, the sign have in example (1), where no placeholders for SiGML symbol
names or HamNoSys symbols are used.
On the right hand side (RHS), the uninstantiated values of the phonetic (PHON)
features in the HPSG feature structure are instantiated and propagated to the LHS
(for example, the handshape (Hsh) symbol in (2)) via unification and principles. In this
way, a dynamic lexicon has been created. The HPSG feature structure starts with the
standard PHON (phonetic; Figure 44.1), SYN (syntactic; Figure 44.3 in section 5.4),
and SEM (semantic; Figure 44.2) components common to HPSG. In the following, we discuss these three components for the verb take given in (2).

Fig. 44.1: The PHON features of the verb take

Fig. 44.2: The SEM features of the verb take
The PHON component describes how the signs are formed by handshape, palm
orientation, extended finger direction (Efd), location, and movement using the Ham-
NoSys conventions. As for the non-manuals, the eye-brow movement and mouth-pic-
ture were implemented (PHON:FACE:BROW and PHON:MOUTH:PICT); see Fig-
ure 44.1.
The SYN component determines the argument structure and the conditions for
unification (see Figure 44.3). It contains information on what classifiers the word can
take (the classifier features are associated with the complements (SYN:HEAD:AGR)
and their values are propagated to the PHON structure of the verb in the unification
process). It also contains information on how pluralisation can be realised, and on
mode, which is associated with sentence type and pro(noun) drop. The context feature
is used to locate entities in the three-dimensional signing space. These positions are
used for referencing and for directional verbs, where such positions are obligatory
morphemes. This feature is propagated through derivation. Movement of objects in
signing space and thus the maintenance of the CONTEXT feature is achieved by asso-
ciating an ADD_LIST and a DELETE_LIST with directional verbs (Sáfár/Marshall
2002). For more details on these lists, see also section 5.4.2.
The SEM structure includes semantic roles with WordNet definitions for sense to
avoid potential ambiguity in the English gloss (Figure 44.2).

5.4. Modelling the grammar

Computer models of grammar often favour lexicalist approaches; these are appropriate for sign languages, which display less variation in their grammars than in their
lexicons. Efthimiou et al. (2006) and Fotinea et al. (2008) use HamNoSys (Prillwitz et
al. 1989) input to produce representations of natural signing. The adopted theoretical
analysis follows a lexicalist approach where development of the grammar module in-
volves a set of rules which can handle sign phrase generation as regards the basic verb
categories and their complements, as well as extended nominal formations.
The generation system in Speers (2001) is implemented as a Lexical-Functional
Grammar (LFG) correspondence architecture, as in Kaplan et al. (1989), and uses
empty features in Move-Hold notations of lexical forms (Liddell/Johnson 1989), which
are instantiated with spatial data during generation.
The framework chosen by ViSiCAST for sign language modelling was HPSG, a
unification-based grammar. Differences in HPSG are encoded in the lexicon, while
grammar rules are usually shared with occasional variation in semantic principles. A
further consideration in favouring HPSG is that the feature structures can incorporate
modality-specific aspects (e.g. non-manual features) of signs appropriately (Sáfár/Mar-
shall 2002). Results of translation were expressed in HamNoSys. The back-end of the
system was further enhanced during the eSIGN project with significant improvements
to the quality and precision of the manual signing, near-complete coverage of the man-
ual features of HamNoSys 4, an extensible framework for non-manual features, and a
framework for the support for multiple avatars (Elliott et al. 2004, 2005, 2008).
In sign language research, HPSG has not been greatly used (Cormier et al. 1999).
In fact, many sign languages display certain characteristics that are problematic for
HPSG, for example, use of pro-drop and verb-final word order. It is therefore not
surprising that many of the rules found in the HPSG literature do not apply to sign
languages, and need to be extended or replaced. The principles behind these rules,
however, remain intact.
Using HPSG as an example, we will show how parameterisation works to generate
signing from phonetic level descriptions. The rules in the grammar deal with sign order
of (pre-/post-)modifiers (adjuncts) and (pre-/post-)complements. In the following, we
will first introduce the HPSG principles of grammar, before providing an example of
parameterisation.
5.4.1. Principles of grammar

HPSG grammar rules define (i) what lexical items can be combined to form larger
phrases and (ii) in what order they can be combined. Grammaticality, however, is
determined by the interaction between the lexicon and principles. This interaction
specifies general well-formedness. The principles can be stated as constraints on the
types in the lexicon. Below, we list the principles which have been implemented in the
grammar so far.

⫺ Mode
The principle of MODE propagates the non-manual value for eye-brow movement
(neutral, furrowed, raised), which is associated with the sentence type in the input
(declarative, yes-no question, or wh-question).
⫺ Pro-drop
The second type of principle deals with pro-drop, that is, the non-overt realisation
of pronouns. For handling pro-drop, an empty lexical entry was introduced. The princi-
ple checks the semantic head for the values of subject and object pro-drop features.
Figure 44.3 shows SYN:HEAD:PRODRP_OBJ and SYN:HEAD:PRODRP_SUBJ
features for all three persons where in each case, three values are possible: can,
can’t, and must. We then extract the syntactic information for the empty lexical
item, which has to be unified with the complement information of the verb. If the
value is can’t, then pro-drop is not possible, in case of can, we generate both solu-
tions.
⫺ Plurals
The third type of principle controls the generation of plurals, although still in a
somewhat overgeneralised way. The principle handles repeatable nouns, non-re-
peatable nouns with external quantifiers, and plural verbs (for details on pluralisa-
tion, see chapter 6). The input contains the semantic information which is needed
to generate plurals and which results from the analysis of the spoken language
sentence. Across sign languages, distributive and collective meanings of plurals are
often expressed differently, so the semantic input also has to specify that informa-
tion. English, for example, is often underspecified in this respect; therefore, in some
cases, human intervention is required in the analysis stage. The lexical item deter-
mines whether it allows repetition (reduplication) or sweeping movement. The SYN
feature thus contains the ALLOW_PL_REPEAT and the ALLOW_PL_SWEEP
features (according to this model, a sweeping movement indicates the collective
involvement of a whole group, while repetition adds a distributive meaning). When
the feature’s value is yes in either case, then the MOV (movement) feature in
PHON is instantiated to the appropriate HamNoSys symbol expressing repetition
or sweeping motion in agreement with the SEM:COUNT:COLLORDIST feature
value. Pluralisation of verbs is handled similarly. For more on plurality, its issues,
and relation to signing space, see Marshall and Sáfár (2005).
⫺ Signing Space
The fourth type of principle concerns the management of the signing space. Due to
the visual nature of sign languages, referents can be located and moved in the 3D
signing space (see chapter 19, Use of Sign Space, for details). Once a location in space is established, it can be targeted by a pointing sign (anaphoric relationship), and it can define the starting or end point of a directional verb; such positions are obtained by propagating a map of sign space positions through derivation. The missing location phonemes are available in the SYN:HEAD:CONTEXT feature. Verb arguments are distributed over different positions in the signing space. If the verb involves the movement of a referent, then it will be deleted from the ‘old’ position and added to a ‘new’ position. Figure 44.3A, which is part of the SYN structure depicted in Figure 44.3, shows the CONTEXT feature with an ADD_LIST and a DELETE_LIST. These lists control the changes to the map. The CONTEXT_IN and CONTEXT_OUT features are the initial input and the changed output lists of the map. The map is threaded through the generation process; the final CONTEXT_OUT will be the input for the next sentence. A minimal sketch of such a map follows this list.

Fig. 44.3: The SYN features of the verb take

Fig. 44.3A: The CONTEXT feature within SYN
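
As a loose illustration of the CONTEXT threading described in the last principle (not the ALE/HPSG implementation itself), the following sketch maintains a map of referents to positions in signing space and applies the ADD_LIST and DELETE_LIST updates contributed by a directional verb; all names are invented for exposition.

class SigningSpace:
    """Threaded map of referents to positions in 3D signing space."""

    def __init__(self):
        self.context = {}  # referent -> position label, e.g. 'mug' -> 'loc_right'

    def apply(self, add_list, delete_list):
        """Apply one sign's updates: CONTEXT_IN becomes CONTEXT_OUT."""
        for referent in delete_list:          # a verb like 'take' frees a place
            self.context.pop(referent, None)
        for referent, position in add_list:
            self.context[referent] = position
        return dict(self.context)             # passed on to the next sentence

space = SigningSpace()
space.apply(add_list=[("mug", "loc_right")], delete_list=[])        # locate MUG
space.apply(add_list=[("mug", "loc_centre")], delete_list=["mug"])  # TAKE moves it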

5.4.2. An example of parameterisation

We now discuss an example of a lexical entry that has uninstantiated values on the
RHS in the PHON structure. Consequently, the LHS HamNoSys representation needs
to be parameterised as well (for details, see Marshall/Sáfár 2004, 2005).
In the above example (2) for the entry take, the LHS contains only the HamNoSys
structure that specifies take as a directional classifier verb. The handshape (Hsh), the
extended finger direction (Efd), and the palm orientation (Plm) are initially uninstanti-
ated and are resolved when the object complement is processed.
The object complement, a noun, has the SYN:HEAD:AGR:CL feature, which contains
information on the different classifier possibilities associated with that noun. Example
(3) is a macro, that is, a named pattern that serves as a shorthand for a more complex feature sequence.

(3) The nmanip macro for a noun like mug:


upright_cylinder macro
    cl_ndh:hns_string,
    cl_const:hns_string,
    cl_hsh:[hamceeall],
    cl_ori:(plm:[hampalml],
    efd:Efd).

In the unification process, this information is available for the verb and therefore, its
PHON features can be instantiated and propagated to the LHS. Example (4) shows
the SYN:PRECOMPS feature with a macro as it is used in the lexical entry of take
(‘@’ marks a macro; in this example, the expansion of the nmanip macro is given in (3)). Figure 44.4 represents the same information as an attribute value matrix
(AVM) which is part of Figure 44.3:

(4) syn:precomps:
[(@nmanip(Ph, Gloss, Index2, Precomp1, Hsh, Efd, Plm, Sg)),
(@np2(W, Glosssubj, Plm2, EfdT, Index1, Precomp2, Num, PLdistr))]

Fig. 44.4: The PRECOMPS feature of the verb take

Therefore, if the complement is mug, as in example (3), Hsh and Plm are instanti-
ated to [hamceeall] and [hampalml], respectively. The complements are also added to
the allocation map (signing space). The allocation map is also available to the verb, which governs the allocation and deletion of places in the map (see SYN:HEAD:CON-
TEXT feature in Figure 44.3). CONTEXT_IN has all the available and occupied places
in signing space. CONTEXT_OUT will be the modified list with new referents and
also with the newly available positions as a result of movement (the starting point of
movement is the original position of mug, which becomes free after moving it). There-
fore, the locations for the start and end position (and potentially the Efd) can be
instantiated in PHON of the verb and propagated to the LHS. Heightobj and Distobj
stand for the location of the object, which, in the case of take, is the starting point of
the sign. Heightsubj and Distsubj stand for the end point of the movement, which is
the location of the subject in signing space. The Brow value is associated with the
sentence type in the input and is propagated throughout.
R1 (see examples (1) and (2) above) is the placeholder for the sweeping motion of
the plural collective reading. R2 stands for the repetition of the movement for a distrib-
utive meaning. The verb’s SYN:HEAD:AGR:NUM:COLLORDIST feature is unified
with the SEM:COUNT feature values. If the SYN:ALLOW_PL_SWEEP or the
SYN:ALLOW_PL_REPEAT features permit, then R1 or R2 can be instantiated according to the semantics. If the semantic input specifies the singular, R1 and R2 remain
uninstantiated and are ignored in the HamNoSys output.
This linguistic analysis can then be linked with the animation technology by encod-
ing the result in XML as SiGML. This is then sent to the JASigning animation system
(Elliott et al. 2010).
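
Continuing the toy Python encoding sketched in section 5.3, the instantiation flow of this example can be mimicked in a few lines; the function is a stand-in for ALE unification, with symbol names taken from the examples above.

def parameterise_take(verb, noun_macro, number="sg", coll_or_dist=None):
    """Fill the open slots of a directional manipulative verb from its
    object's classifier macro and from the semantic number input."""
    verb.phon.handshape = noun_macro["cl_hsh"]             # e.g. [hamceeall]
    verb.phon.palm_orientation = noun_macro["cl_ori_plm"]  # e.g. [hampalml]
    if number == "pl":
        # R1: sweeping movement (collective); R2: repetition (distributive)
        verb.phon.movement = "sweep" if coll_or_dist == "coll" else "repeat"
    return verb  # with number="sg", R1/R2 stay uninstantiated (movement=None)

mug_macro = {"cl_hsh": "[hamceeall]", "cl_ori_plm": "[hampalml]"}
take = parameterise_take(take, mug_macro)  # reuses the entry from section 5.3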

6. Machine translation and animation

Within sign language machine translation (MT), we differentiate two categories: script-
ing and MT software (Huenerfauth/Lu 2012). In scripting, the user chooses signs from
an animation dictionary and places them on a timeline. This is then synthesized with
an avatar. An example is the eSIGN project, which allows the user to build sign data-
bases and scripts of sentences and to view the resulting animations. Another example
is Sign Smith Studio from VCom3D (Sims/Silverglate 2002), a commercial software
system for scripting ASL animations with a fingerspelling generator and some non-
manual components (see section 9 for eSIGN and Sign Smith Studio websites). The
scripting software requires a user who knows the sign language in use. Signs can be
created through motion capture or by using standard computer graphics tools to pro-
duce fixed gestures. These can then be combined into sequences and presented via
avatars, using computer graphics techniques to blend smoothly between signs.
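
Smooth concatenation between two pre-recorded signs is typically a short cross-blend from the end pose of one clip into the start pose of the next. The sketch below is the standard linear version of that idea, with pose frames as arrays of joint parameters; no particular product's blending method is implied.

import numpy as np

def concatenate_with_blend(clip_a, clip_b, blend_frames=8):
    """Join two pose sequences, inserting blend_frames interpolated frames.
    Each clip: array of shape (n_frames, n_joint_params)."""
    blend = []
    for k in range(blend_frames):
        t = (k + 1) / (blend_frames + 1)            # ramps from 0 towards 1
        blend.append((1 - t) * clip_a[-1] + t * clip_b[0])
    blend = np.array(blend).reshape(-1, clip_a.shape[1])
    return np.vstack([clip_a, blend, clip_b])

Production systems would interpolate joint rotations as quaternions rather than linearly, but the structure of the transition is the same.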
Simple MT systems have been built based on such a fixed database of signs. Text-
to-sign-language translation systems like VCom3D (Sims/Silverglate 2002) and Simon
(Elliott et al. 2000) present textual information as Signed English (SE) or Sign Sup-
ported English (SSE): SE uses signs in English word order and follows English gram-
mar, while in SSE, only key words of a sentence are signed. The Tessa system (Cox et
al. 2002) translates from speech to BSL by recognising whole phrases and mapping
them to natural BSL using a domain-specific template-based grammar.
True MT for sign language involves a higher level translation, where a sign language
sentence is automatically produced from a spoken language sentence (usually a written
sentence). The translation is decomposed into two major stages. First, the English text
is transformed into an intermediate (transfer or interlingua) representation (for expla-
nation see below). For the second stage, sign generation, a language model (for gram-
mar and lexicon; see section 5) is used to construct the sign sequence, including non-
manual components, from the intermediate representation. The resulting symbols are
then animated by an avatar. MT systems have been designed for several sign languages;
however, we only mention the different types of approaches here. Some MT systems
only translate a few sample input phrases (Zhao et al. 2000), others are more devel-
oped rule-based systems (Marshall/Sáfár 2005), and there are some statistical systems
(Stein/Bungeroth/Ney 2006).
The above-mentioned rule-based (ViSiCAST) system is a multilingual sign transla-
tion system designed to translate from English text into a variety of national sign
languages (e.g. NGT, BSL, and German Sign Language (DGS)). English written text
is first analysed by CMU’s (Carnegie Mellon University) link grammar parser (Sleator/
Temperley 1991) and a pronoun resolution module based on the Kennedy and Bogur-
aev (1996) algorithm. The output of the parser is then processed using λ-calculus,
β-reduction, and Discourse Representation Structure (DRS) merging (Blackburn/Bos
2005). The result is a DRS (Kamp/Reyle 1993) modified to achieve a more sign lan-
guage oriented representation that subsequently supports an easier mapping into a
sign language grammar. This is called the interlingual approach: the intermediate result is a semantic representation of the sentence (specifying how the concepts in a sentence relate to
each other). In transfer systems, by contrast, the intermediate representation of a syn-
tactic structure is usually the same as the analysed structure of input sentences. In
order to be translated, this structure of the source language has to be transferred to
the target language structure, which can be computationally expensive if several lan-
guages are involved. Since the aim in ViSiCAST was that the system be adaptable for
several language pairs, a relatively language-independent meaning representation was
required. The chosen interlingua system had an advantage over the interlingua ap-
proach of the Zardoz system (Veale et al. 1998), since the DRS-based semantic ap-
proach is highly modular, allowing the development of a grammar for the target sign
language which is independent of the source language. By building this intermediate
semantic representation, the DRSs, the first major stage of the translation was finished.
The second major stage was the translation from the DRS representation into
graphically oriented representations which can drive a virtual avatar. This sequence,
which is generated via HPSG, consists of HamNoSys for manual features and codes
for non-manual features. This linguistic analysis encoded as SiGML can then be linked
with the JASigning animation system (as mentioned in section 5.4.2). Because the
JASigning system supports almost the full range of HamNoSys, the MT system can
animate an arbitrary number of specialised versions of signs, rather than relying on a
predefined animation library.
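
The two-stage architecture can be caricatured end to end in a few lines; the sketch below is deliberately a toy (a fixed subject-verb-object pattern standing in for the full parser, DRS construction, and HPSG generation described above), intended only to make the division of labour visible.

def build_drs(sentence):
    """Toy analysis stage: a DRS-like structure for a fixed SVO pattern."""
    subj, verb, obj = sentence.lower().rstrip(".").split()
    return {"pred": verb, "args": {"agent": subj, "patient": obj}}

def generate_signs(drs):
    """Toy generation stage: gloss sequence with a parameterised verb slot."""
    agent = drs["args"]["agent"]
    patient = drs["args"]["patient"]
    # object and subject are placed first; the verb then moves between loci
    return [patient.upper(), agent.upper(),
            "%s[%s->%s]" % (drs["pred"].upper(), patient, agent)]

print(generate_signs(build_drs("anna takes the-mug.")))
# ['THE-MUG', 'ANNA', 'TAKES[the-mug->anna]']

In the real pipeline the generation stage outputs HamNoSys plus non-manual codes rather than glosses, and the SiGML encoding of that output drives the avatar.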
Speers (2001) (see also section 5.4) is also a translation system, which is imple-
mented as an LFG correspondence architecture (Kaplan/Bresnan 1982; Kaplan et al.
1989). As for the conversion, three types of structural representations are assumed:

(i) f-structure (grammatical relations in the sentence = functional structure);
(ii) c-structure (phrase-structure tree = constituent structure);
(iii) p-structure (phonetic representation level, where spatial and non-manual variations are revealed).

Correspondence functions are defined that first convert an English f-structure into an
ASL f-structure, subsequently build an ASL c-structure from the f-structure, and finally
build the p-structure from the c-structure. However, the current output file created by
the ASL generation system is only viewable within this system. “Because of this, the
data is only useful to someone who understands ASL syntax in the manner presented
here, and the phonetic notation of the Move-Hold model. In order to be more gener-
ally useful several different software applications could be developed to render the
data in a variety of formats.” (Speers 2001, 83).

7. Social challenges of automated signing


The social challenges involve misunderstandings, in both the hearing and deaf communi-
ties, about research on automated signing. Hearing people often have misconceptions and
little social awareness about sign languages. The recognition and acceptance of a sign lan-
guage as an official minority language is vital to deaf people, but recognition alone will
not help users if there are insufficient interpreters and interpretation and communication
services available. Clearly, there is a need to increase the number of qualified interpreters
but, conversely, there is also a need to seek alternative opportunities to improve everyday
communication between deaf and hearing people and to apply modern technology to
serve the needs of deaf people. Computer animated ‘virtual human’ technology, the
graphic quality of which is improving while its costs are decreasing, has the potential to
help. However, in the deaf community, there is often a fear that the hard-won recognition
of their sign language will result in moves to make machines take over the role of human
interpreters. The image of translation systems has been that they offer a ‘solution’ to
translation needs. In fact, however, they can only be regarded as ‘useful aids’. Generated
signing and MT cannot achieve the quality of human translation or natural human
signing. Yet, the quality of animated signing available at present is not the end point of
development of this approach. Hutchins (1999) stresses the importance of educating con-
sumers about the achievable quality of MT. It is important that consumers have realistic expectations of automated systems. The limitations are broadly
understood in the hearing community where MT and text-to-speech systems have a place
for certain applications such as ad-hoc translation of Web information and automated
travel announcements, but do not approach the capabilities of human language users.
Automated signing systems involve an even bigger challenge to natural language
processing than systems for oral languages, because of the different nature of sign
language and, in addition, the lack of an accepted written form. Therefore, it is impor-
tant that deaf communities are informed about what can be expected and that in the
medium term, these techniques do not challenge the provision of human interpreters.

8. Conclusion

Sign languages as the main communication means of the deaf were not always accepted
as true languages. Their recognition began only in the past 50 years, once it was
realised that sign languages have complex and distinctive phonological and syntactic
structures. Advances in sign language linguistics were slowly followed by research in
computational sign language processing. In the last few years, computational research
has become increasingly active. Numerous applications have been developed, most of which,
unfortunately, are not fully mature systems for analysis, recognition, or synthesis. This is
because signing includes a high level of simultaneous action which increases the complex-
ity of modelling grammatical processes. There are two barriers to overcome in order to
address these difficulties. On the one hand, sign language linguists still need to find an-
swers to questions concerning grammatical phenomena in order to build computational
models which require a high level of detail (to drive avatars, for example). On the other
hand, additional problems result from the fact that computers have difficulties, for exam-
ple, in extracting reliable information on the hands and the face from video images.
Today’s automatic sign language recognition has reached the stage where speech
recognition was 20 years ago. Given increased activity in recent years, the future looks
bright for sign language processing. Once anyone with a camera (or Kinect device)
and an internet connection could use natural signing to interact with a computer appli-
cation or other (hearing or deaf) users, the possibilities would be endless.
Until the above-mentioned goals are achieved, research would benefit from the
automation of aspects of the transcription process, which would provide greater efficiency
and accuracy. The use of machine vision algorithms could assist linguists in many as-
pects of transcription. In particular, such algorithms could increase the speed of the
fine-grained transcription of visual language data, thus further accelerating linguistic
and computer science research on sign language and gesture.
Although there is still a distance to go in this field, sign language users can already
benefit from the intermediate results of the research, which have produced useful ap-
plications such as multilanguage-multimedia dictionaries and teaching materials.

9. Literature and web resources

Atkins, Sue/Clear, Jeremy/Ostler, Nicholas
1991 Corpus Design Criteria. In: Literary and Linguistic Computing 7, 1⫺16.
Blackburn, Patrick/Bos, Johan
2005 Representation and Inference for Natural Language. A First Course in Computational
Semantics. Stanford, CA: CSLI Publications.
Boguraev, Bran/Briscoe, Ted (eds.)
1989 Computational Lexicography for Natural Language Processing. London: Longman.
Bowden, Richard/Sarhadi, Mansoor
2002 A Non-linear Model of Shape and Motion for Tracking Fingerspelt American Sign
Language. In: Image and Vision Computing 20(9⫺10), 597⫺607.
Bowden, Richard/Windridge, David/Kadir, Timor/Zisserman, Andrew/Brady, Michael
2004 A Linguistic Feature Vector for the Visual Interpretation of Sign Language. In: Euro-
pean Conference on Computer Vision 1, 390⫺401.
Braffort, A./Choisier, A./Collet, C./Dalle, P./Gianni, F./Lenseigne, B./Segouat, J.
2004 Toward an Annotation Software for Video of Sign Language, Including Image Process-
ing Tools and Signing Space Modelling. In: Actes de LREC 2004, 201⫺203.
Buttussi, Fabio/Chittaro, Luca/Coppo, Marco
2007 Using Web3D Technologies for Visualization and Search of Signs in an International
Sign Language Dictionary. In: Proceedings of Web3D 2007: 12 th International Confer-
ence on 3D Web Technology. New York, NY: ACM Press, 61⫺70.
Carpenter, Bob/Penn, Gerald
1999 The Attribute Logic Engine. User’s Guide (Version 3.2 Beta). Bell Labs.
Chen, Yiqiang/Gao, Wen/Fang, Gaolin/Wang, Zhaoqi/Yang, Changshui/Jiang, Dalong
2002 Text to Avatar in Multi-modal Human Computer Interface. In: Proceedings of Asia-
Pacific CHI (APCHI2002), 636⫺643.
Chen, Yiqiang/Gao, Wen/Fang, Gaolin/Wang, Zhaoqi
2003 CSLDS: Chinese Sign Language Dialog System. In: Proceedings of IEEE International
Workshop on Analysis and Modeling of Faces and Gestures (AMFG ’03). Nice, France,
236⫺238.
Cormier, Kearsy/Wechsler, Stephen/Meier, Richard P.
1999 Locus Agreement in American Sign Language. In: Webelhuth, Gert/Koenig, Jean-
Pierre/Kathol, Andreas (eds.), Lexical and Constructional Aspects of Linguistic Expla-
nation. Chicago, IL: University of Chicago Press, 215⫺229.
Cox, Stephen J./Lincoln, Michael/Tryggvason, Judy/Nakisa, Melanie/Wells, Mark/Tutt, Marcus/
Abbott, Sanja
2002 TESSA, a System to Aid Communication with Deaf People. In: ASSETS 2002: Proceed-
ings of the 5 th International ACM SIGCAPH Conference on Assistive Technologies, Ed-
inburgh, 205⫺212.
Crasborn, Onno/Hanke, Thomas
2003 Metadata for Sign Language Corpora. [Available on-line at: www.let.ru.nl/sign-lang/
echo/docs/ECHO_Metadata_SL.pdf]
Cuxac, Christian
2003 Une Langue moins Marquée comme Analyseur Langagier: l’Exemple de la LSF. In:
Nouvelle Revue de l’AIS (Adaptation et Intégration Scolaires) 23, 19⫺30.
Efthimiou, Eleni/Fotinea, Stavroula-Evita/Sapountzaki, Galini
2006 E-accessibility to Educational Content for the Deaf. In: EURODL, 2006/II. [Electroni-
cally available since 15. 12. 06 at: http://www.eurodl.org/materials/contrib/2006/Eleni_
Efthimiou.htm]
Efthimiou, Eleni/Fotinea, Stavroula-Evita/Vogler, Christian/Hanke, Thomas/Glauert, John R.W./
Bowden, Richard/Braffort, Annelies/Collet, Christophe/Maragos, Petros/Segouat, Jérémie
2009 Sign Language Recognition, Generation and Modelling: A Research Effort with Appli-
cations in Deaf Communication. In: Stephanidis, Constantine (ed.), Universal Access in
HCI, Part I, HCII 2009 (LNCS 5614). Berlin: Springer, 21⫺30.
Ekman, Paul/Friesen, Wallace/Hager, Joseph
1978 Facial Action Coding System. Palo Alto, CA: Consulting Psychologist Press.
Elliott, Ralph/Glauert, John R.W./Kennaway, Richard/Marshall, Ian
2000 The Development of Language Processing Support for the ViSiCAST Project. In: AS-
SETS 2000: Proceedings of the 4 th International ACM SIGCAPH Conference on Assist-
ive Technologies, New York, 101⫺108.
Elliott, Ralph/Glauert, John R. W./Jennings, Vince/Kennaway, Richard
2004 An Overview of the SiGML Notation and SiGMLSigning Software System. In: Streiter,
Oliver/Vettori, Chiara (eds.), Workshop on Representing and Processing of Sign Lan-
guages, LREC 2004, Lisbon, Portugal. Paris: ELRA, 98⫺104.
Elliott, Ralph/Glauert, John R. W./Kennaway, Richard
2005 Developing Techniques to Support Scripted Sign Language Performance by a Virtual
Human. In: Proceedings HCII 2005, 11th International Conference on Human Computer
Interaction (CD-ROM), Las Vegas.
Elliott, Ralph/Glauert, John R. W./Kennaway, Richard/Marshall, Ian/Sáfár, Eva
2008 Linguistic Modelling and Language-processing Technologies for Avatar-based Sign
Language Presentation. In: Efthimiou, Eleni/Fotinea, Stavroula-Evita/Glauert, John
(eds.), Emerging Technologies for Deaf Accessibility in the Information Society (Special
Issue of Universal Access in the Information Society 6(4)), 375⫺391.
Filhol, Michael
2008 Modèle Descriptif des Signes pour un Traitement Automatique des Langues des Signes.
PhD Dissertation, Universite Paris-11 (Paris Sud), Orsay.
Fotinea, Stavroula-Evita/Efthimiou, Eleni/Karpouzis, Kostas/Caridakis, George
2008 A Knowledge-based Sign Synthesis Architecture. In: Efthimiou, Eleni/Fotinea, Stav-
roula-Evita/Glauert, John (eds.), Emerging Technologies for Deaf Accessibility in the
Information Society (Special Issue of Universal Access in the Information Society 6(4)),
405⫺418.
Francik, Jarosław/Fabian, Piotr
2002 Animating Sign Language in the Real Time. In: Proceedings of the 20 th IASTED Inter-
national Multi-Conference Applied Informatics, Innsbruck, Austria, 276⫺281.
Gibet, Sylvie
1994 Synthesis of Sign Language Gestures. In: CHI ’94: Conference Companion on Human
Factors in Computing Systems. New York, NY: ACM Press, 311⫺312.
Handke, Jürgen
1995 The Structure of the Lexicon. Human Versus Machine. Berlin: Mouton de Gruyter.
Hanke, Thomas
2001 Sign Language Transcription with syncWRITER. In: Sign Language & Linguistics 4(1/2),
275⫺283.
Hanke, Thomas
2002 iLex ⫺ A Tool for Sign Language Lexicography and Corpus Analysis. In: Proceedings
of the 3rd International Conference on Language Resources and Evaluation, Las Palmas
de Gran Canaria, Spain. Paris: ELRA, 923⫺926.
Hanke, Thomas
2004 HamNoSys ⫺ Representing Sign Language Data in Language Resources and Language
Processing Contexts. In: Streiter, Oliver/Vettori, Chiara (eds.), Workshop on Represent-
ing and Processing of Sign Languages, LREC 2004, Lisbon, Portugal. Paris: ELRA, 1⫺6.
Huenerfauth, Matt/Lu, Pengfei
2012 Effect of Spatial Reference and Verb Inflection on the Usability of American Sign
Language Animations. In: Universal Access in the Information Society. Berlin: Springer.
[online: http://www.springerlink.com/content/y4v31162t4341462/]
Hutchins, John
1999 Retrospect and Prospect in Computer-based Translation. In: Proceedings of Machine
Translation Summit VII, September 1999, Kent Ridge Digital Labs, Singapore. Tokyo:
Asia-Pacific Association for Machine Translation, 30⫺34.
Jennings, Vince/Kennaway, J. Richard/Glauert, John R. W./Elliott, Ralph
2010 Requirements for a Signing Avatar. In: Hanke, Thomas (ed.), 4 th Workshop on the
Representation and Processing of Sign Languages: Corpora and Sign Language Technol-
ogies. Valletta, Malta, 22⫺23 May 2010, 133⫺136.
Kamp, Hans/Reyle, Uwe
1993 From Discourse to Logic. Introduction to Model-theoretic Semantics of Natural Lan-
guage, Formal Logic, and Discourse Representation Theory. Dordrecht: Kluwer.
Kaplan, Ronald M.
1989 The Formal Architecture of Lexical-Functional Grammar. In: Journal of Information
Science and Engineering 5, 305⫺322.
Kaplan, Ronald M./Bresnan, Joan
1982 Lexical-Functional Grammar: A Formal System for Grammatical Representation. In:
Bresnan, Joan (ed.), The Mental Representation of Grammatical Relations. Cambridge,
MA: MIT Press, 173⫺281.
Karpouzis, Kostas/Caridakis, George/Fotinea, Stavroula-Evita/Efthimiou, Eleni
2007 Educational Resources and Implementation of a Greek Sign Language Synthesis Archi-
tecture. In: Computers & Education 49(1), 54⫺74.
Kennaway, J. Richard
2002 Synthetic Animation of Deaf Signing Gestures. In: Wachsmuth, Ipke/Sowa, Timo (eds.),
Revised Papers from the International Gesture Workshop on Gesture and Sign Lan-
guages in Human-Computer Interaction. London: Springer, 146⫺157.
Kennedy, Christopher/Boguraev, Branimir
1996 Anaphora for Everyone: Pronominal Anaphora Resolution Without a Parser. In: Pro-
ceedings of the 16 th International Conference on Computational Linguistics CO-
LING ’96, Copenhagen, 113⫺118.
Kipp, Michael
2001 Anvil ⫺ A Generic Annotation Tool for Multimodal Dialogue. In: Proceedings of the
7 th European Conference on Speech Communication and Technology (Eurospeech),
1367⫺1370.
Kittler, Josef/Hatef, Mohamad/Duin, Robert P. W./Matas, Jiri
1998 On Combining Classifiers. In: IEEE Transactions on Pattern Analysis and Machine
Intelligence 20(3), 226⫺239.
Kraiss, Karl-Friedrich (ed.)
2006 Advanced Man-Machine Interaction. Fundamentals and Implementation. Berlin:
Springer.
Liddell, Scott K./Johnson, Robert E.
1989 American Sign Language: The Phonological Base. In: Sign Language Studies 64, 195⫺
277.
Lockton, Raymond/Fitzgibbon, Andrew W.
2002 Real-time Gesture Recognition Using Deterministic Boosting. In: Proceedings of Brit-
ish Machine Vision Conference.
Loomis, Jeffrey/Poizner, Howard/Bellugi, Ursula
1983 Computer Graphics Modeling of American Sign Language. In: Computer Graphics
17(3), 105⫺114.
Lu, Pengfei
2010 Modeling Animations of American Sign Language Verbs through Motion-Capture of
Native ASL Signers. In: SIGACCESS Newsletter 96, 41⫺45.
Marshall, Ian/Sáfár, Eva
2004 Sign Language Generation in an ALE HPSG. In: Müller, Stefan (ed.), Proceedings of
the 11th International Conference on Head-driven Phrase Structure Grammar (HPSG-
2004), 189⫺201.
Marshall, Ian/Sáfár, Eva
2005 Grammar Development for Sign Language Avatar-based Synthesis. In: Stephanidis,
Constantine (ed.), Universal Access in HCI: Exploring New Dimensions of Diversity
(Vol. 8 of the Proceedings of the 11th International Conference on Human-Computer
Interaction). CD-ROM. Mahwah, NJ: Lawrence Erlbaum.
Morimoto, Kazunari/Kurokawa, Takao/Isobe, Norifumi/Miyashita, Junichi
2004 Design of Computer Animation of Japanese Sign Language for Hearing-Impaired Peo-
ple in Stomach X-Ray Inspection. In: Miesenberger, Klaus/Klaus, Joachim/Zagler,
Wolfgang/Burger, Dominique (eds.), Computers Helping People with Special Needs
(Lecture Notes in Computer Science 3118). Berlin: Springer, 1114⫺1120.
Neidle, Carol
2001 SignStreamTM: A Database Tool for Research on Visual-gestural Language. In: Sign
Language & Linguistics 4(1/2), 203⫺214.
Neidle, Carol
2002, 2007 SignStream Annotation: Conventions Used for the American Sign Language Lin-
guistic Research Project and Addendum. Technical Report 11 & 13. American Sign
Language Linguistic Research Project, Boston University.
Ooi, Vincent B. Y.
1998 Computer Corpus Lexicography. Edinburgh: Edinburgh University Press.
Pollard, Carl/Sag, Ivan A.
1994 Head-driven Phrase Structure Grammar. Chicago, IL: The University of Chicago Press.
Potamianos, Gerasimos/Neti, Chalapathy/Gravier, Guillaume/Garg, Ashutosh/Senior, Andrew W.
2003 Recent Advances in the Automatic Recognition of Audio-visual Speech. In: Proceed-
ings of IEEE 91(9), 1306⫺1326.
Prillwitz, Siegmund/Leven, Regina/Zienert, Heiko/Hanke, Thomas/Henning, Jan
1989 HamNoSys Version 2.0: Hamburg Notation System for Sign Languages ⫺ an Introduc-
tory Guide. Hamburg: Signum.
Sáfár, Eva/Marshall, Ian
2002 Sign Language Translation Using DRT and HPSG. In: Gelbukh, Alexander (ed.), Pro-
ceedings of the 3rd International Conference on Intelligent Text Processing and Computa-
tional Linguistics (CICLing), Mexico, February 2002 (Lecture Notes in Computer Sci-
ence 2276). Berlin: Springer, 58⫺68.
Sagawa, Hirohiko/Takeuchi, Masaru
2002 A Teaching System of Japanese Sign Language Using Sign Language Recognition and
Generation. In: ACM Multimedia 2002, 137⫺145.
Sedgwick, Eric/Alkoby, Karen/Davidson, Mary Jo/Carter, Roymieco/Christopher, Juliet/Craft,
Brock/Furst, Jacob/Hinkle, Damien/Konie, Brian/Lancaster, Glenn/Luecking, Steve/Morris, Ash-
ley/McDonald, John/Tomuro, Noriko/Toro, Jorge/Wolfe, Rosalee
2001 Toward the Effective Animation of American Sign Language. In: Proceedings of the
9 th International Conference in Central Europe on Computer Graphics, Visualization
and Interactive Digital Media. Plzen, Czech Republic, February 2001, 375⫺378.
Sims, Ed
2000 Virtual Communicator Characters. In: ACM SIGGRAPH Computer Graphics Newslet-
ter 34(2), 44.
Sims, Ed/Silverglate, Dan
2002 Interactive 3D Characters for Web-based Learning and Accessibility. In: ACM SIG-
GRAPH, San Antonio.
Sinclair, John
1996 Preliminary Recommendations on Corpus Typology. EAGLES Document EAG-TCWG-
CTYP/P. [Available at: http://www.ilc.cnr.it/EAGLES/corpustyp/corpustyp.html]
Sleator, Daniel/Temperley, Davy
1991 Parsing English with a Link Grammar. In: Carnegie Mellon University Computer Sci-
ence Technical Report CMU-CS-91-196.
Speers, D’Armond L.
2001 Representation of American Sign Language for Machine Translation. PhD Dissertation,
Georgetown University. [Available at: http://higbee.cots.net/holtej/dspeers-diss.pdf]
Starner, Thad/Pentland, Alex
1995 Visual Recognition of American Sign Language Using Hidden Markov Models. In:
International Workshop on Automatic Face and Gesture Recognition, 189⫺194.
Stein, Daniel/Bungeroth, Jan/Ney, Hermann
2006 Morpho-syntax Based Statistical Methods for Sign Language Translation. In: Proceed-
ings of the 11th Annual Conference of the European Association for Machine Transla-
tion, 169⫺177.
Stokoe, William/Casterline, Dorothy/Croneberg, Carl
1976 A Dictionary of American Sign Language on Linguistic Principles. Silver Spring, MD:
Linstok Press.
Suszczanska, Nina/Szmal, Przemysław/Francik, Jarosław
2002 Translating Polish Texts Into Sign Language in the TGT System. In: Proceedings of the
20 th IASTED International Multi-Conference Applied Informatics (AI 2002), Innsbruck,
Austria, 282⫺287.
Veale, Tony/Conway, Alan/Collins, Bróna
1998 The Challenges of Cross-Modal Translation: English-to-Sign-Language Translation in
the Zardoz System. In: Machine Translation 13(1), 81⫺106.
Viola, Paul/Jones, Michael
2004 Robust Real-time Face Detection. In: International Journal of Computer Vision 57(2),
137⫺154.
Vogler, Christian/Metaxas, Dimitris
1998 ASL Recognition Based on a Coupling Between HMMs and 3D Motion. In: Proceed-
ings of ICCV, 363⫺369.
Wang, Yubo/Ai, Haizhou/Wu, Bo/Huang, Chang
2004 Real Time Facial Expression Recognition with Adaboost. In: Proceedings of the
17 th International Conference on Pattern Recognition (ICPR ’04), Vol. 3, 926⫺929.
Wilcox, Sherman/Scheibman, Joanne/Wood, Doug/Cokely, Dennis/Stokoe, William C.
1994 Multimedia Dictionary of American Sign Language. In: ASSETS ’94: Proceedings of
the First Annual ACM Conference on Assistive Technologies. New York, NY: ACM
Press, 9⫺16.
Windridge, David/Bowden, Richard
2004 Induced Decision Fusion In Automatic Sign Language Interpretation: Using ICA to
Isolate the Underlying Components of Sign. In: Roli, Fabio/Kittler, Josef/Windeatt,
Terry (eds.), 5 th International Workshop on Multiple Classifier Systems (MCS04), Cagli-
ari, Italy (Lecture Notes in Computer Science 3077). Berlin: Springer, 303⫺313.
Wittenburg, Peter/Brugman, Hennie/Russel, Albert/Klassmann, Alex/Sloetjes, Han
2006 ELAN: A Professional Framework for Multimodality Research. In: Proceedings of the
5 th International Conference on Language Resources and Evaluation (LREC 2006),
1556⫺1559.
Zhao, Liwei/Kipper, Karin/Schuler, William/Vogler, Christian/Badler, Norman/Palmer, Martha
2000 A Machine Translation System from English to American Sign Language. In: Proceed-
ings of the 4 th Conference of the Association for Machine Translation in the Americas on
Envisioning Machine Translation in the Information Future (Lecture Notes in Computer
Science 1934). Berlin: Springer, 54⫺67.
Zijl, Lynette van/Combrink, Andries
2006 The South African Sign Language Machine Translation Project: Issues on Nonmanual
Sign Generation. In: Proceedings of SAICSIT06, October 2006, Somerset West, South
Africa, 127⫺134.

Web resources
BritishSignLanguage.com: http://web.ukonline.co.uk/p.mortlock/
CyberGlove: http://www.cyberglovesystems.com
EAGLES project: http://www.ilc.cnr.it/EAGLES
eSIGN project: http://www.visicast.cmp.uea.ac.uk/eSIGN/Public.htm
Patterson glove: http://www.wired.com/gadgets/miscellaneous/news/2002/01/49716
Sign language corpora:
American Sign Language (ASL): http://www.bu.edu/asllrp/cslgr/
Australian Sign Language (Auslan): http://www.auslan.org.au/about/corpus
British Sign Language (BSL): http://www.bslcorpusproject.org/
German Sign Language (DGS):
http://www.sign-lang.uni-hamburg.de/dgs-korpus/index.php/welcome.html
Sign Language of the Netherlands (NGT): http://www.ru.nl/corpusngtuk/
Swedish Sign Language (SSL): http://www.ling.su.se/pub/jsp/polopoly.jsp?d=14252
Sign Smith Studio: http://www.vcom3d.com/signsmith.php
SignSpeak project: http://signspeak.eu
TESSA project: http://www.visicast.cmp.uea.ac.uk/Tessa.htm
ViSiCAST project: http://www.visicast.cmp.uea.ac.uk/eSIGN/Public.htm

Eva Sáfár, Norwich (United Kingdom)


John Glauert, Norwich (United Kingdom)
Indexes

Index of subjects
Note: in order to avoid a proliferation of page numbers following an index entry, chapters that
address a specific topic were not included in the search for the respective entry; e.g. the chapter
on acquisition was not included in the search for the term “acquisition”.

A

Aboriginal sign languages also see the Index of sign languages, 517, 535–539, 543, 544, 930
acquisition 40, 515, 557, 561, 566–567, 576, 580, 588–590, 735, 770–775, 777, 779–780, 844, 848, 863, 874–879, 880, 949, 950–951, 959, 963–967, 969, 1025, 1037
– bilingual see bilingual, bilingualism
– classifiers 172–174, 594
– handshape 13
– iconicity 38, 405, 408, 584, 592–594, 705
– non-manuals 63, 324
– planning 891, 903–904
– pronoun 593
– word order 257
adequacy 502–506
adverbial 105, 187, 188–189, 192, 228, 269–270, 273, 323, 357–359, 379–380, 719, 775
– non-manual 64, 69, 95–96, 201, 504, 526, 671, 991
affix, affixation 32, 44, 81, 82–85, 90, 91–96, 102–105, 107, 128, 130–131, 146, 165, 168–169, 172, 176–177, 287, 322, 326, 332–333, 335, 519–520, 521, 579, 586, 827–828
agreeing verbs see verb, agreeing
agreement 44–46, 91, 96, 165–168, 177, 229, 237, 250, 256, 266–267, 273, 285, 328, 348, 354, 371–372, 379, 382–383, 447–448, 453–458, 471, 521–522, 543–544, 564–565, 567, 569, 586–588, 633, 638, 642, 661–662, 718–719, 744, 771, 807, 853, 868, 873–874, 929, 1058
– acquisition 593–594, 666–669, 674
– auxiliary 146, 150–151, 229, 522, 538, 588
– double 139, 141, 145, 147, 206, 213, 217, 223
– non-manual 70, 268, 300, 376, 707, 1061
– number 119–120, 124–127, 129, 131–132, 279–280, 283–284, 336
alphabet, manual 101–102, 501, 524, 532, 659, 764, 800–801, 848–849, 913–914, 916, 951, 991, 1000, 1007
alternate sign language see secondary sign language
alternating movement see movement, alternating
analogy 395, 830, 1009, 1011
annotation also see notation, 937, 990–991, 1033–1034, 1041, 1068, 1070, 1077, 1079–1080
anthropomorphism 1007, 1012
aperture change 6, 12, 16, 24–26, 29, 35, 36, 826
aphasia 717, 740–741, 743, 745–746, 747, 763–765, 767–768, 776, 780
apraxia 764
arbitrariness, arbitrary
– form-meaning relation 22, 39, 79, 83, 392, 438, 441, 447, 532, 584–585, 594, 628, 659, 671, 717, 825, 875, 920, 936
– location 229, 413–414, 416–417, 456, 527, 543, 768, 823
articulation also see coarticulation, 4–7, 9–14, 15–17, 46, 106, 118, 178, 189, 222, 253, 271–272, 503–504, 526, 576–579, 594, 637, 651–653, 661, 697–698, 730–732, 742, 769, 777–779, 822–823, 830, 835, 1047
– non-manual 56–57, 63–65, 69–70, 326, 330, 344, 750, 847
articulatory suppression 694–696
aspect, aspectual marker/inflection 96, 106, 132, 139, 170, 206, 213, 216–217, 256–257, 280, 285, 301, 318, 320, 403, 586, 663, 718–719, 827, 867, 869, 870–872, 937, 1060
– completive 91, 96, 186, 191–193, 195, 200, 542, 820, 828–829, 867, 871
– continuative, continuous 82, 91, 96, 193–194, 196, 719, 871–872
– durative, durational 82, 90–91, 96, 106, 194, 196
– habitual 91, 96, 105, 193, 195, 719, 871–872
– intensive 86
– iterative 82, 91, 96, 194–195, 451, 871
– lexical 191, 442–447, 448–451, 453, 458
– perfective 191–193, 196, 200, 322, 349, 542, 820, 828
– protractive 91, 96, 194
– situation 191, 193, 434, 442–448, 451, 453–454, 456
assessment 771, 773, 777, 961, 988–989
assimilation, phonological 14–15, 59, 128, 214, 231, 321, 503, 533, 537, 544, 578, 654, 789, 794, 809, 831, 1049
attention 164, 463–464, 473–474, 494–495, 576–577, 590–591, 612, 696, 747–748, 766, 789, 1062
attrition 842, 855
auditory perception see perception
autism 592, 774–775
automated signing 1075–1076, 1083, 1094–1095
auxiliary also see agreement, auxiliary, 88, 187, 196–197, 229, 336, 818
avatar 1078, 1083–1085, 1088, 1093

B

babbling 27–28, 589, 648–650, 653, 925
back-channeling 469, 505, 527, 804
backward reduplication see reduplication, backwards
backwards verbs see verb, backwards
beat see gesture, beat
bilingual, bilingualism 495, 507, 560, 660, 676, 698, 747, 789, 841–845, 846–847, 849, 855, 982, 984–986
– education 897, 899, 903, 922, 957–963
bimodal bilingualism/bilinguals 635, 676, 789, 845–847, 950, 953–955, 963, 971, 986
birdsong 514–515
blend also see code, blending and error, blend
– of (mental) space 142, 144–145, 147, 373–375, 390, 394, 405, 417, 425, 638, 1065
– of signs 99, 101, 171, 819, 825–826, 1000, 1013
body part 505, 562, 585–586, 934, 1012, 1049
body partitioning 375–376, 822, 1066
borrowing 97, 193, 221, 435–437, 537, 575, 612, 806, 825, 827, 966–967, 986, 1012
Broca’s area 516, 630, 740–743, 746, 748, 763, 765

C

case marker/marking, case inflection/assignment 84, 149–152, 234, 341, 350, 447, 457, 538, 817–818
categorization 160, 163, 176, 178, 434, 670, 1060
causative, causativity 105, 207, 211, 220
cerebellum 753–754, 769
change
– aperture 25–26, 29, 35, 826
– demographic 560, 960, 972
– diachronic/historical 103, 277, 406–407, 792, 795, 803, 923, 1001–1002, 1036
– handshape 12, 37, 82, 234, 452, 525, 733, 769, 809
– language 39, 198, 215, 277, 388–389, 395, 406–407, 433, 505, 639, 789, 791, 795–796, 801–802, 816–819, 821, 826–827, 830, 834–836, 841–843, 855–866, 865–866, 879–881, 891, 924, 953, 1001, 1023, 1036
– phonological 99–100, 198, 791–792, 802, 821
– semantic 821
– sociolinguistic 953
– stem(-internal) 118, 128, 130–132, 873
channel see perception
chereme, cheremic model also see phonology, 30, 38, 689, 1077
child-directed signing 590–591, 653, 655, 668, 672
classifier (construction) 32, 41–44, 95, 101, 119, 124–127, 131, 217, 234, 248–249, 256–257, 278, 285, 347–348, 360, 374, 392–393, 396–397, 401–403, 405–407, 415–418, 420–426, 448–449, 470, 499, 564–565, 567, 587, 594, 639, 659, 669–671, 674–675, 718, 745, 749, 765–767, 773, 776, 780, 821–823, 835, 929–931, 1003–1004, 1012–1013, 1026, 1030, 1060, 1084–1087
– body (part) 42–43, 161–162
– (whole) entity 42–43, 161–164, 166–169, 172–173, 177, 235–236, 418, 420–426, 449, 636, 639, 670, 675–676, 807, 1011
– handling/handle 41, 43, 161–164, 166–168, 172, 177–178, 257, 418, 420–424, 426, 594, 636, 639–640, 669–670, 727–728, 822
– instrument 160–161, 171, 399, 663
– numeral 125, 129, 131, 175, 178
– semantic also see classifier, entity, 160–161, 670
– size and shape specifier (SASS) 94, 96, 104, 160–162, 173, 398, 639, 669–670, 728
– verbal/predicate 160, 175, 176–180
classifier predicate/verb see verb, classifier
clause 57, 246–247, 252, 255, 273, 294, 328, 330, 343, 454, 468, 471, 474, 480–482, 538, 615, 673, 808, 831–832, 872, 1063
– complement, complementation 188, 309, 340, 350–357, 376–377, 380, 534, 542
communicative interaction see interaction, communicative
community see deaf community
complement also see clause, complement, 188, 252, 273, 309, 376, 533, 1087–1088, 1091–1092
complement clause see clause, complement
complementizer 252, 297–299, 304, 341–342, 350–351, 360, 465
completive see aspect, completive
completive focus see focus, completive
complexity
– grammatical/structural 146, 514, 517–519, 543, 552, 567, 740, 774, 826, 853, 1007, 1060
– morphological 7, 33, 81, 159, 161, 163,
– conditional 61, 63, 65 – 66, 246, 295, 300, 165 – 166, 169 – 170, 415, 433, 467, 519, 533,
671, 673 593, 670, 710, 817
– embedded 278, 354 – 357, 366, 376 – 377, – phonetic/articulatory 651, 778
381, 611 – phonological 41 – 42, 81, 659
– interrogative see question complex movement see movement, complex
– relative, relativization 56, 61, 63, 65, 238,
complex sentence see sentence, complex
278 – 279, 295, 300, 308 – 309, 350, 357 – 361,
compound, compounding 29, 35, 59 – 61,
470, 476, 522, 542, 671, 1032
81 – 82, 96 – 104, 107, 171 – 172, 179, 277,
– subordinate 255, 340 – 341, 350 – 357, 575,
322, 407, 433, 437, 440, 443, 530, 532 – 533,
872
537, 542 – 544, 575, 793, 796, 818 – 819,
clause type/typing also see sentence, type,
824 – 826, 848, 851, 869
56, 304 – 305
comprehension 173 – 174, 406, 416, 469, 516,
clitic, cliticization 59 – 60, 94, 96, 196, 219,
667 – 668, 687 – 688, 699, 703, 705, 707, 730,
271, 274, 321, 333 – 334, 371, 480, 538, 580
740 – 741, 743 – 745, 748 – 749, 752 – 753,
coarticulation 9, 14 – 15, 302, 317, 325, 332,
765 – 768, 773 – 779, 967 – 968, 991
847, 986, 1078, 1082
code computer corpus see corpus, computer
– blending 842, 845 – 847, 986 conditional see clause, conditional
– mixing 676, 842, 844 – 847, 848, 852, conceptual blending 396, 417
965 – 966, 970 conjunction 340 – 344, 349 – 350, 809,
– switching 842, 844 – 847, 851 – 852, 856, 828 – 829
966, 969 – 970, 986, 1035 constituent
codification 896 – 898, 905, 1025 – order also see syntax, word order,
cognition, cognitive 83, 210, 220 – 221, 251, 248 – 249, 251, 254 – 256, 286, 520, 533, 807,
259, 516, 590, 630 – 632, 638, 641, 712, 763, 1030 – 1031, 1038
770 – 775, 777, 779 – 781, 878, 922, 937, 989 – prosodic 56 – 61, 62 – 64, 67 – 70, 341
– deficit/impairment 741, 745, 768, 770, 772, – syntactic 57 – 58, 62, 69, 246, 248, 252,
777 258, 294, 303 – 305, 325, 330 – 331, 340 – 342,
– development 771, 952, 957, 967 344, 353, 356, 358 – 359, 464, 466 – 468,
– visual-spatial 83, 772 – 774, 873 471 – 474, 478 – 479, 611
cognitive linguistics 251, 255, 374 constructed action/dialogue also see point of
coherence 58, 422, 499 – 500 view and role shift, 162, 230, 499, 637, 674,
cohesion 417 – 418, 429, 499 – 500, 766 991
collective plural see plural, collective contact see language contact and eye contact
color term/sign 102, 433, 436 – 439, 441, 562, content question see question, content
591 – 592, 790, 797 – 798, 800 contrastive focus see focus, contrastive
1106 Indexes

conventional, conventionalization 78, dementia 769 – 770


80 – 81, 170, 316, 390, 393 – 396, 398, demonstrative 112, 175, 228, 238 – 239,
405 – 406, 433, 448, 455 – 456, 543, 584 – 585, 270 – 271, 273 – 274, 277, 284, 286, 309, 358,
627 – 628, 634, 637, 651, 689, 705, 803, 819, 360 – 361, 475, 533, 1062
851, 869, 1026 derivation
– language model 602, 610, 863, 875 – 877, – morphological 81, 83, 89, 91 – 92, 103 –
879 – 880 107, 170, 322, 335, 407, 533, 538, 575, 718,
– (sign) language 588, 602 – 604, 607 – 609, 826, 864, 924
614, 620, 651, 914 – syntactic 252, 305, 332, 342, 348, 381
coordination 63, 341 – 350, 354, 359, 575 determiner 96, 119, 129, 175, 178, 209, 228,
corpus 267, 269 – 275, 279 – 280, 283, 287, 301, 323,
– computer 1077 – 1081 358, 360 – 361
– linguistics 937 – 938, 1033 – 1034 development see acquisition and cognition,
– planning 891, 896 – 898, 904 development
– sign language 259, 798, 800, 802, 848, 902, diglossia 843, 856
937, 1024, 1034 – 1036, 1080 directional verb see verb, directional
co-speech gesture see gesture, co-speech dislocation
creole, creolization 40, 85, 219, 317, 561, – left 295, 471 – 472, 481
566 – 567, 577, 586, 842 – 844, 852, 935 – 936 – right 252, 275 – 276, 480 – 481
cross-linguistic 84, 87 – 88, 145 – 146, 151, discourse 144 – 145, 166, 177, 229, 251, 271,
180, 195, 209 – 216, 222 – 223, 250, 253 – 254, 309, 366, 371 – 373, 375, 377 – 379, 381 – 382,
256, 259, 305, 347, 357, 426, 433, 436, 613, 413 – 414, 417 – 418, 424, 426, 455, 463 – 464,
656, 796, 826, 871, 876, 929, 937 467 – 470, 493, 497 – 501, 527, 664, 674 – 675,
cross-modal/modality 47, 87, 128, 137, 153, 705, 745, 747, 766, 769, 773, 795, 808 – 809,
215, 218, 521, 746, 774, 842 – 843, 950, 962, 989, 1061 – 1063, 1066 – 1067
966 – 967, 970 – 971, 986 – marker 342, 500, 502, 641, 809, 820,
culture see deaf culture 1061 – 1062
disjunction 62, 343 – 344, 349
distalization of movement see movement,
D distalization
distributive plural see plural, distributive
deaf activism/movement 950, 953, 954, 956 double, doubling 106, 118, 121, 257,
deafblind, deafblindness also see tactile sign 297 – 299, 307, 317, 329 – 330, 474, 478,
language, 499, 523 – 525, 527, 576, 808 482 – 483, 526, 664 – 666, 790, 870
deaf community 40, 439, 494, 502, 504, double agreement see agreement, double
554 – 555, 559 – 560, 564 – 569, 604, 798 – 799,
803, 806 – 807, 810, 842 – 844, 852, 854 – 855,
866 – 868, 892 – 894, 897 – 899, 905, 910 – 911,
914 – 919, 920, 922, 926, 935, 937, 950, E
952 – 955, 957, 971, 981 – 982, 984 – 987,
1000, 1002 – 1007, 1009, 1037, 1094 – 1095 education
deaf culture 439, 501, 505 – 506, 528, 892, – deaf 554 – 555, 560, 566, 568, 803, 805,
918, 953 – 954, 961, 1006 – 1007, 1038 854, 868, 891 – 894, 903, 911 – 914, 916,
deaf education see education, deaf 918 – 920, 934, 936, 981, 1002
deaf identity 565, 892, 954, 985, 1002, 1004, – mainstreaming, mainstream school 876,
1009 956, 958 – 960, 962 – 963
deaf school see education, school for the – oral also see oralism, 604, 619, 866 – 867,
deaf 892 – 893, 909 – 911, 913, 915 – 916, 918 – 920,
definite, definiteness 80, 236, 252, 267, 922, 925, 950, 952 – 953, 955 – 956, 962 – 963,
269 – 274, 280, 283, 360, 463, 471 972, 985, 1005, 1032, 1037
deictic, deixis also see gesture, deictic, 228, – deaf school, school for the deaf 40, 506,
274, 403, 527, 587, 593, 667, 1061 541, 566, 568, 799, 803, 853, 899, 901 – 902,
Index of subjects 1107

910 – 911, 914, 919, 950 – 952, 956, 960, 983, 501 – 502, 527, 577, 666, 668, 674, 750,
986, 1011, 1037 1003, 1011, 1018, 1061 – 1063
EEG 712, 734, 777 eye tracking 7, 70, 139, 231, 268
elicitation 172, 253 – 254, 256, 258, 670, 775,
792, 1024, 1026 – 1031, 1038 – 1039, 1041,
1079 F
ellipsis 213 – 214, 277, 336, 342, 346 – 347
embedding see clause, embedded facial expression/articulation/signal also see
emblems see gesture, emblems gesture, facial, 5, 12, 56, 61 – 67, 70 – 71, 94 –
emergence 40, 150, 234, 513, 545, 594, 641, 96, 106, 268, 272, 298, 310, 324, 327, 341,
743, 805, 817 – 818, 834, 910 – 911, 934 – 935, 368, 372 – 374, 381, 397, 425, 500, 503, 526,
950 534, 579, 583, 640, 651, 689, 707, 726, 728,
emphatic, emphasis 55, 114, 207, 217, 739, 748, 750, 765 – 766, 769, 775, 781, 827,
234 – 235, 293, 297, 319, 321, 326 – 327, 833, 851, 1003, 1008, 1011, 1061, 1066,
329 – 330, 334, 403, 474, 482 – 483, 792, 847 1078, 1081 – 1083
entity classifier see classifier, (whole) entity feature
ERP see EEG – agreement 206, 218, 266, 268, 273, 328,
error 173, 583, 704, 712 – 713, 716, 719, 587
721 – 722, 724 – 733, 773, 775, 855 – grammatical 119, 206, 356, 358, 713, 868
– in acquisition 589 – 590, 592 – 594, – inherent 24, 26 – 27
651 – 659, 662, 667 – 670, 672, 675 – linguistic 540, 543 – 544, 557, 561, 634,
641, 842, 986, 1035
– anticipation 722, 724, 726, 728, 733, 809
– non-manual 94 – 95, 106, 190, 218 – 219,
– aphasic 741 – 743, 747, 766, 768 – 770, 780
239, 247, 259 – 260, 293 – 294, 330, 520, 666,
– blend 716, 719 – 720, 724 – 725, 728 – 729
673, 707, 809, 843, 937, 1003, 1094
– fusion 719, 724 – 725, 728 – 729
– number 119, 138, 140 – 141, 143, 146,
– morphological 727, 729
151 – 153, 279 – 280
– perseveration 300, 326, 330, 724,
– person see person
726 – 728, 809
– phi 44, 141, 266 – 268, 273 – 275, 278, 283,
– phonological 592, 651, 712, 721 – 722,
440, 713
728 – 729, 770, 775
– phonological 23 – 24, 26 – 27, 29 – 31, 35,
– phrasal 728 – 729
37, 43, 45, 80, 82, 91, 97, 102, 104, 106,
– substitution 590, 650, 652, 656 – 658, 716, 114 – 116, 121, 128, 132, 138 – 139, 144 – 145,
719 – 721, 724 – 725, 741 – 742, 768 151, 168, 171, 178, 237, 336, 438, 658,
– syntagmatic 726 717 – 718, 720, 728 – 729, 790, 799, 826
event 84, 87, 96, 166, 188, 191, 343, 370, – plural see plural
375 – 376, 392, 395 – 396, 418 – 426, 442 – 447, – prosodic 25 – 26, 28, 37, 83, 467
450 – 453, 456 – 457, 612, 635 – 636, 744, 822 – referential 266 – 267, 270, 274 – 276, 280
– schema 222, 835 – semantic 87, 214, 219, 360
– structure 442 – 445, 450 – 452, 454 – syntactic (e.g. wh, focus) 298, 300, 310,
event visibility hypothesis 39, 444 329 – 330, 358, 716, 990
evolution 38, 205, 207, 221, 514 – 517, 552, feature geometry 25, 30 – 31
565 – 567, 735, 817, 820, 823, 835, 847, 919, feedback 498, 527
980 – auditory 583
exclusive see pronoun, exclusive – in language production 713, 715, 730 – 731
exhaustive, exhaustivity 91, 125, 140, 143, – proprioceptive 583
465 – 467, 474, 483 – visual 17, 583, 650, 659, 732, 755, 779
eye contact, visual contact 294, 361, 494, figurative, figurative language 105, 999,
505, 523, 674 1008 – 1001
eye gaze, gaze 6 – 7, 45, 70, 139, 216, 231, fingerspelling also see alphabet, manual,
268, 273, 275, 293, 341, 356, 368, 370, 15, 28 – 29, 102, 453, 499, 501, 518, 527,
373 – 374, 377, 397, 470, 495 – 496, 499, 533 – 534, 717, 763 – 764, 776, 778, 800 – 801,
1108 Indexes

804, 826 – 827, 847 – 849, 959, 969 – 971, 986, – non-manual 221, 268, 324 – 327, 639 – 640,
991, 1082, 1093 672, 831 – 833, 851, 1052
fMRI 712, 734, 750 – 752 – pointing also see gesture, deictic, 70,
focus 68, 114, 119, 163, 175, 246, 256, 268, 141 – 142, 198, 208 – 209, 217, 227 – 234,267,
282, 295, 297 – 298, 300, 306, 310, 329, 416, 269, 274, 277, 373, 414, 418, 424, 505, 530,
462 – 468, 471 – 473, 478 – 483, 663 – 666, 870 584, 588, 592 – 594, 604 – 605, 607, 611 – 614,
– completive 467, 474 – 476, 479, 484 627, 629 – 629, 637, 658, 667, 771 – 773, 809,
– contrastive 68, 351, 418, 464 – 467, 470, 832, 835, 851, 969, 1062 – 1063
472 – 473, 475 – 479, 483 – 484, 665 – 666 – representational 628 – 630, 634 – 642
– emphatic 330, 482, 665 – 666 goal see thematic role, goal
– information 474, 476, 482, 665 – 666 grammatical category also see part of speech
– marker/particle 268, 369, 467, 473, 475 and word class, 91, 112, 171 – 172, 186 – 187,
– narrow 466 196, 200, 220, 231, 342, 434, 613, 699, 790,
folklore 501, 1000, 1004, 1007, 1014, 1017 818 – 819, 827, 834 – 836, 929
function words, functional element 84, 88, grammaticalization, grammaticization 103,
94 – 95, 166 – 167, 191 – 192, 210, 214, 217, 146, 170, 187, 192, 198, 200, 204 – 205,
219, 223, 269, 327, 579, 844, 850, 957 207 – 211, 215 – 216, 219 – 224, 337, 360, 500,
fusion also see error, fusion, 103, 230, 306, 634, 639 – 641, 671, 719, 789
423, 519 – 520, 537, 729
future (tense) 188 – 191, 222, 270, 320,
611 – 612, 820, 829 – 831
H

haiku 998, 1005, 1007 – 1008, 1010,


G 1012 – 1013, 1017
handling classifier see classifier, handling
gapping also see ellipsis, 277, 341, handshape 6 – 8, 12 – 16, 22 – 25, 27, 29 – 31,
346 – 349, 361 33, 35 – 37, 39, 79 – 80, 82, 99, 102, 107,
gating 107, 700, 717 – 719 122 – 124, 137, 146, 158, 160, 168, 170 – 171,
gaze see eye gaze 173, 178, 215 – 217, 221 – 222, 230 – 236, 239,
generative 38, 252 – 253, 328, 350, 664, 732, 254, 318, 321 – 322, 335 – 336, 390 – 396,
876 401 – 402, 415, 420, 437 – 438, 444, 448 – 449,
gestural source/basis/origin 123, 198, 451 – 453, 455 – 457, 492, 501, 503 – 505, 519,
200 – 201, 221, 224, 638, 820, 823 – 824, 525, 532, 561 – 562, 567, 575, 579 – 580, 586,
827 – 829, 832 – 833, 836 606 – 607, 615 – 616, 629, 635, 638, 640, 649,
gestural theory of language origin 654 – 656, 658, 688 – 690, 697, 700 – 704,
514 – 516 706 – 707, 712, 717 – 718, 720 – 722, 727 – 729,
gesture 5, 39 – 41, 70, 142 – 145, 198, 733, 742 – 744, 768 – 769, 772, 776, 788,
220 – 221, 251, 366, 368 – 369, 373 – 376, 790 – 791, 794 – 795, 799, 804, 807, 809,
381 – 382, 393 – 396, 398, 405 – 406, 419, 500, 821 – 824, 826, 831, 835, 849, 878, 913, 931,
505, 514, 516, 542, 556, 593 – 594, 602 – 607, 1000 – 1001, 1007 – 1008, 1010 – 1011,
611 – 612, 614 – 619, 629, 631, 636, 639 – 642, 1013 – 1014, 1016 – 1018, 1026, 1047,
649 – 651, 661, 668, 752 – 753, 766 – 767, 1049 – 1052, 1055, 1060, 1066, 1077,
823 – 824, 827 – 831, 833, 851, 853, 871, 1082 – 1087, 1091
875 – 876, 878 – 879, 912 – 914, 1001, 1048, handshape, classifier 41 – 43, 101, 119,
1058 – 1059 125 – 126, 174, 360, 397 – 398, 403, 406,
– beat 5, 629 564 – 565, 569, 669, 671, 807, 821, 823, 835,
– deictic also see gesture, pointing, 143, 231, 1026, 1084 – 1085
267, 851 headshake also see negative head movement,
– emblems 374, 393, 533, 628, 634 – 635, 70, 316, 318, 325 – 327, 330 – 332, 342, 349,
637, 851 355 – 357, 492, 521, 526, 611 – 612, 641,
– facial 516, 827, 831 – 833, 1052 671 – 672, 733, 831 – 832, 851
Index of subjects 1109

hearing signer 5, 439, 507, 517 – 518, 528, index, indexical 38, 60, 88, 94, 140 – 141,
536, 540 – 541, 552, 554, 556, 144 – 145, 188, 192 – 193, 196 – 197, 205,
559 – 560, 565 – 569, 578, 676, 698, 746, 748, 207 – 222, 229, 233, 236, 267, 269 – 270, 273,
741, 749 – 750, 752, 754, 774, 776 – 777, 899, 275, 280, 294, 304 – 305, 352, 360 – 361,
986, 1005 365 – 366, 377 – 383, 435, 471 – 472, 482, 494,
hemisphere 577, 739 – 742, 745 – 753, 755, 503, 526, 534, 538, 584, 587, 593, 663, 766,
763 – 768, 780 – 781, 876 794, 809, 873, 1030, 1057, 1092
– left 739 – 742, 745 – 749, 755, 763 – 765, index finger 13, 121, 123 – 124, 137, 163, 197,
767 – 768, 780 – 781 232, 269, 272, 318, 390 – 391, 395, 398 – 299,
– right 577, 739 – 740, 745, 748 – 750, 401 – 402, 415, 420, 506, 533 – 534, 593, 612,
752 – 753, 755, 765 – 767, 780 – 781 618, 628, 634, 658, 700, 742, 764, 773
historical, historical relation 34, 40, 150, 198, indicating verb see verb, indicating
205, 209 – 210, 215 – 216, 220, 245, 251, 259, indirect report 365 – 366, 371, 380, 493
350, 399, 405 – 406, 438 – 439, 456, 540, 566, inflection 13, 77, 80, 81, 83 – 86, 88, 90 – 91,
586, 743, 791 – 792, 800, 805, 827, 830, 95 – 96, 104 – 107, 113, 119 – 120, 126,
854 – 855, 864 – 867, 891, 896, 1001, 1047, 128 – 132, 139, 145, 166, 172, 186 – 188,
1076 190 – 191, 193, 200 – 201, 205 – 206, 210 – 213,
hold-movement model 30, 38, 733 215 – 217, 219 – 220, 250, 256 – 257, 270 – 271,
homesign 40 – 41, 407, 517, 543, 545, 565, 274, 279, 283 – 285, 287, 320, 328, 336,
577, 594, 651, 863, 867 – 868, 875 – 877, 403 – 404, 406, 542, 564, 587, 609 – 610, 613,
879 – 880, 910 – 911, 913 – 914, 918, 925, 1028
662, 667 – 668, 670, 713, 716, 718 – 719, 771,
homonym, homophone 128, 533 – 534, 537,
828, 847 – 848, 864, 871, 873
1012
informant, informant selection 530, 990,
HPSG 141, 147, 1085 – 1089, 1094
1023 – 1034, 1036 – 1042
human action 580, 749, 752 – 753
information structure 56, 64, 246, 520, 664,
humor 502, 505 – 506, 1000, 1003, 1048
870
hunter, hunting also see evolution, 100, 517,
inherent feature see feature, inherent
528, 535 – 536, 540, 545
initialize, initialization 101 – 102, 438, 444,
449, 453, 586, 847, 849, 969
interaction, communicative 5, 40, 55, 68,
I
457, 468, 524 – 525, 528, 544, 565, 628, 790,
icon, iconic, iconicity 21 – 22, 38 – 42, 44, 46, 804, 810, 823, 832 – 833, 843, 845, 853 – 854,
78 – 79, 85, 88 – 90, 102, 105, 107, 150, 164, 868, 893, 910, 936, 961 – 962, 965, 970 – 971,
170, 173 – 174, 194, 248, 250 – 251, 260, 980 – 982, 984, 986, 989 – 990, 1027, 1030,
269 – 270, 414, 417, 419, 421, 426, 433 – 435, 1034, 1054, 1059, 1076
441, 444, 458 – 459, 503, 517, 530, 536, 542, interface 132, 143 – 145, 310, 341, 630, 688,
545, 562, 575 – 576, 584 – 588, 592 – 594, 711, 713, 715, 732, 734, 780, 1077, 1080
604 – 605, 611, 614 – 615, 628, 632, 636 – 637, internal feedback see feedback, internal
639 – 641, 647 – 648, 650 – 651, 655, 659, International Sign see the Index of sign
667 – 671, 673 – 674, 688 – 689, 705, 717 – 719, languages
743 – 744, 774, 822, 824, 828, 835 – 836, 851, interpret, interpreter, interpreting 498 – 499,
853, 873, 918, 920 – 921, 924, 934, 936 – 937, 525, 527, 589, 703, 847, 853 – 854, 895,
990 – 992, 1051, 1053, 1059, 1085 902 – 904, 953, 955, 958 – 959, 962 – 963,
imperative 292 – 293, 311, 324, 478, 561, 1038, 1078, 1095
1061 – language brokering 980 – 985
inclusive see pronoun, inclusive interrogative see question
incorporation also see numeral incorpo- interrogative non-manual marking see
ration 101 – 102, 112 – 113, 121 – 123, 171, question, non-manual marking
232 – 235, 256, 271, 284 – 285, 320, 519, 847 intonation also see prosody, 55 – 71,
indefinite, indefiniteness 227 – 228, 234 – 239, 295 – 296, 310, 326, 341, 481, 502, 1048,
269 – 274, 276, 287 1061 – 1062, 1070
1110 Indexes

introspection 991, 1023 – 1024, 1026, 818 – 819, 821, 823, 825 – 826, 835 – 836,
1033 – 1034 1023, 1026
IS see the Index of sign languages lexical access 687 – 688, 690, 699, 701 – 704,
iterative, iteration 59, 67, 82, 91, 96, 706, 713 – 714, 716, 721, 724, 747
105 – 106, 193 – 195, 453, 871 lexical development see acquisition, lexical
lexicalization, lexicalized 29, 59, 81, 98 – 99,
107, 122, 143, 146, 151, 170, 172, 190, 198,
J 209, 221, 327, 336, 397 – 398, 402, 505, 610,
640 – 641, 671, 706, 719, 789, 851
joint, articulatory 9 – 13, 15 – 16, 24, 26, 28, lexical modernization 898 – 891, 896, 903,
45 – 46, 190, 578, 580 – 581, 590, 652 – 653, 905
656, 1081 lexical negation see negation, lexical
lexical variation see variation, lexical
lexicography 798, 895, 898, 1023, 1030,
K 1075 – 1076
lexicon 7, 9, 11 – 14, 16, 38 – 39, 69, 78 – 81,
kinship, kinship terms 80, 102, 276, 84 – 86, 88, 97, 140, 142 – 145, 147, 152, 170,
432 – 433, 436, 438 – 441, 458, 667, 929, 1027 172, 198, 326, 401, 406, 426, 432, 434 – 435,
442, 515, 518, 530, 532 – 533, 536, 541, 543,
545, 556, 575, 585, 602 – 605, 632, 648, 655,
L 659, 688, 696, 703, 705, 711, 713, 716 – 719,
721, 724, 735, 774, 777, 789, 797, 800, 803,
language acquisition see acquisition 806, 817 – 821, 823 – 826, 836, 847 – 849,
language brokering see interpretation, 853 – 854, 864, 875, 889, 896 – 903, 905, 927,
language brokering 930, 957, 991, 1010, 1012, 1017, 1038, 1049,
language change see change, language 1055, 1076 – 1077, 1082, 1085 – 1086,
language choice 506 – 507, 796, 807, 845, 1088 – 1089, 1093
894, 953, 958 – 959, 964, 968, 970 – 972, 989, – frozen 101, 169 – 172, 179, 216, 269 – 270,
991, 1079 398, 587, 718 – 719, 836
language contact 215, 518, 528, 540, – productive 38, 40, 81, 100, 164, 170 – 172,
557 – 558, 560 – 561, 789, 801, 806, 863, 180, 403, 459, 688 – 689, 705, 718, 819,
868 – 869, 911, 934, 936, 949, 953, 963, 965, 822 – 823, 825, 835, 1011 – 1013, 1015, 1059
968 – 971, 980. 986, 990, 1035 linguistic minority 789, 806, 841 – 842,
language development see acquisition 892 – 895, 911, 938, 949 – 950, 953 – 954,
language evolution see evolution 956 – 957, 960 – 961, 967, 971, 980, 984, 986,
language family 80, 107, 148, 221, 233, 1034, 1039, 1095
933 – 934, 936 little finger 13, 15, 123 – 124, 322, 391, 440,
language planning 713, 715, 951, 953, 955, 792
957, 961 – 962, 971 location also see place of articulation, 4,
language policy 889 – 890, 894, 920, 6 – 16, 24, 42, 44, 60, 78 – 80, 82, 86, 91, 95,
949 – 950, 952, 954 – 955, 957, 960 – 961 99 – 102, 105, 107, 117 – 119, 121 – 122,
language politics 889 – 890, 895 124 – 125, 130, 141, 143, 148, 151, 160 – 161,
language processing see processing 164 – 166, 168 – 171, 173 – 174, 177 – 180, 194,
language production see production 213, 219, 228 – 234, 238, 255, 266 – 267, 280,
lateralization see hemisphere 320, 358, 396, 401 – 403, 406 – 407, 412 – 424,
left hemisphere see hemisphere, left 426 – 427, 435, 438, 448, 454 – 456, 459, 465,
leftwards movement see movement, 470, 495, 499, 503, 519, 525, 527, 537, 543,
leftwards 563 – 565, 569, 578, 584, 586 – 588, 593 – 594,
legal recognition 889 – 890, 891 – 896, 899, 637 – 639, 649 – 650, 652, 654 – 656, 659,
903 – 904, 926, 950, 953 – 955, 1095 661 – 662, 666 – 668, 670, 687 – 690, 692, 697,
lexeme 22 – 24, 26, 29 – 31, 80 – 81, 85, 107, 700 – 704, 706, 728, 739, 742, 768 – 769, 773,
433 – 434, 459, 638, 640, 716 – 717, 719, 775 – 776, 781, 788, 790 – 791, 794 – 796, 799,
Index of subjects 1111

804, 821 – 822, 831, 874, 924, 991, 1001, 400, 404 – 405, 412 – 414, 418, 426 – 427, 442,
1011 – 1014, 1016, 1018, 1026, 1029 – 1030, 490, 494, 499, 502, 513, 520, 522, 527, 564,
1049, 1051 – 1053, 1059 – 1060, 1062, 1077, 569, 604, 607, 616, 618, 620, 626 – 627,
1082 – 1085, 1087, 1091 – 1092 632 – 633, 636 – 642, 647 – 650, 676 – 677, 687,
705, 707, 711 – 713, 715 – 716, 719, 730,
732 – 734, 744, 746 – 747, 754 – 755, 762 – 764,
M 767 – 770, 772, 774, 776 – 780, 789, 806, 817,
836, 841, 843, 847, 851, 854, 856, 863 – 864,
machine-readable 1033 – 1034, 1050, 1067, 869, 880 – 881, 910, 924, 936 – 937, 950, 961,
1076, 1079, 1085, 1086 967 – 968, 986, 1023, 1059 – 1060,
machine translation 751, 1075 – 1076, 1078, 1069 – 1070, 1083, 1088
1085, 1093 – 1095 – grammatical category 94, 187 – 188,
mainstreaming see education, mainstreaming 196 – 200, 269 – 297, 301, 306, 320, 323, 329,
manual alphabet see alphabet, manual 332, 336, 478 – 479, 482, 483, 494, 502, 513,
manual code 517, 545, 911 820, 833, 929, 1060
manual communication system 499, 915, 956 – speaker attitude 11, 369, 371 – 372, 417
manual dominant see negation, manual modal verb see verb, modal
dominant modulation 55, 71, 86 – 87, 90 – 91, 95 – 96,
manual negation see negation, manual 106 – 107, 186 – 187, 189, 191, 193 – 195, 413,
memory 405, 415, 463, 469, 668, 698, 705, 522, 587, 662, 1061
739, 753, 781, 878, 879, 1078 monitoring 583, 711, 713, 715, 730 – 732, 735,
– short-term 690, 693 – 694, 698 – 699, 753 747, 779, 968
– span 694, 698 – 699 morpheme 7, 13, 32 – 33, 45, 78 – 79, 91 – 92,
– working 687 – 688, 693 – 694, 696, 699, 704, 101, 103, 105, 117 – 120, 128, 132, 142 – 146,
1031 149 – 152, 158, 163, 165 – 166, 168, 171,
mental lexicon 432, 434, 436, 703, 711, 713, 175 – 176, 178, 186 – 187, 193 – 195, 200, 223,
716 – 719, 721, 724, 735, 821 230, 249, 306, 321 – 322, 340, 348 – 349, 354,
mental image, mental representation 142, 358, 361, 392, 405, 424, 433, 442 – 443,
147, 373, 390, 394, 396, 405, 406, 638, 452 – 453, 491, 518 – 520, 526, 575, 594, 615,
687 – 688, 690, 693, 699, 779, 835 – 836 670, 706, 713, 718, 727 – 730, 774, 816,
mental space 142, 144 – 145, 373, 395, 412, 819 – 821, 827, 831, 848, 867, 877, 986, 1046,
416 – 417, 835, 1065 1056 – 1060, 1084, 1088
metadata 1035 – 1036, 1042, 1070, 1080 morphological operation 77, 81 – 82, 87, 91,
metalinguistic 958, 968, 970 112, 115 – 116, 128, 131 – 132, 143, 170 – 171,
metaphor 38, 105, 179, 189 – 190, 217, 221, 234, 520
433 – 435, 437 – 438, 441 – 442, 454, 458, 532, morphological realization 32, 84, 113, 115,
648, 717, 800, 820, 825, 854, 991, 998, 1000, 138, 144, 146, 175, 872
1003 – 1004, 1007 – 1010, 1014, 1018 morphology 13, 30 – 33, 38 – 40, 42 – 43, 45,
middle finger 13, 121, 123 – 124, 390, 420, 247, 256 – 257, 266 – 267, 278 – 279, 281 – 284,
440, 524, 537, 576, 634, 700, 764, 1049 287, 296, 306, 309, 316 – 317, 321 – 322,
Milan Congress 866, 920, 952 – 953 335 – 337, 341, 360, 380, 389, 392, 403, 405,
minority see linguistic minority 407, 415, 447 – 448, 453, 455, 457, 517,
mirror neurons 516, 735 519 – 521, 526, 533, 537, 539, 564, 574 – 576,
modality 579, 586, 593, 595, 602 – 603, 606 – 607, 613,
– communication channel 4 – 7, 17, 21 – 22, 616, 633, 635, 647 – 648, 667, 669 – 670, 711,
31 – 34, 36 – 39, 46, 68 – 70, 77 – 78, 80, 715 – 716, 718 – 719, 721, 727 – 732, 734 – 735,
82 – 83, 85 – 88, 90, 95 – 97, 101, 105, 754, 770 – 772, 774, 777, 807, 809, 817, 819,
112 – 113, 118, 122, 127 – 128, 131 – 132, 824, 835, 845, 849, 852 – 853, 864, 868 – 869,
137 – 138, 150, 153, 177, 188, 205, 210, 216, 873 – 874, 877 – 878, 924, 928 – 930, 937 – 938,
219, 221 – 222, 238, 240, 245 – 246, 248, 250, 986, 1014, 1023, 1026, 1035, 1045 – 1047,
252 – 254, 257, 259, 265 – 267, 293, 302, 316, 1049, 1052 – 1055, 1058, 1060, 1062, 1067,
337, 340, 348, 352, 354, 361, 368, 395, 398, 1069, 1077 – 1078, 1082
1112 Indexes

– sequential 81 – 83, 85, 89, 91, 92, 189 – 191, 193 – 196, 199 – 200, 232, 235, 271,
95 – 97, 102 – 103, 107, 128, 131, 321, 322, 276, 281, 285, 322, 404, 406, 444, 453,
335 – 336, 873 455 – 456, 519 – 520, 586, 717 – 719, 831,
– simultaneous 23, 30 – 33, 59, 77, 81 – 83, 1059, 1089, 1092
86, 91, 96 – 97, 101 – 107, 168, 171, 195, 249, – non-manual 69, 121, 195 – 196, 317, 325,
254, 257, 321 – 322, 335, 873 327, 396, 452, 472, 500, 520, 582, 583, 640,
morphophonology 33, 38, 744 666, 750, 752
morphosyntax 58 – 59, 65, 84, 112, 114, – path 12, 16, 26 – 28, 37, 44 – 45, 106,
130 – 131, 143, 205 – 206, 211, 256 – 257, 118 – 119, 121, 128, 137, 139, 149, 151,
340 – 341, 349 – 350, 361, 413, 418, 443, 445, 173 – 174, 189 – 190, 194 – 195, 205 – 207, 211,
565, 569, 663, 718, 770, 774, 780774, 780, 217, 222, 269 – 270, 322, 348, 396, 398,
807, 874, 923, 968, 1023, 1031, 1047 420 – 421, 438, 445, 447, 452, 454 – 456, 458,
motion capture 1081 – 1084, 1093 525, 589, 591, 617 – 619, 631, 641, 655, 657,
mouth 7, 69, 361, 451, 562, 639, 641, 651, 670, 692, 696, 700, 722 – 723, 739, 744, 781
656, 748, 751, 846, 849, 1012, 1052, 1055, – phonological 4, 8, 10, 11, 16, 22 – 31, 33,
1067, 1069, 1081, 1087 35 – 39, 59, 78, 104, 107, 114, 122 – 125, 131,
mouth gesture 327, 525, 728, 751, 132, 168, 172, 230, 232, 448, 575, 579 – 580,
849 – 850, 991 688 – 690, 692 – 693, 697, 700 – 704, 706,
mouthing 69, 94 – 96, 114, 211, 214 – 215, 717 – 719, 721 – 723, 728 – 729, 733 – 734, 742,
218 – 219, 319, 327, 358, 437, 440, 525, 768 – 769, 775 – 776, 799, 804, 821, 915, 921,
530 – 531, 539, 544, 562, 751, 789, 800 – 801, 1001, 1016, 1049 – 1053, 1055, 1077,
806, 841, 847, 849 – 850, 873, 898, 969 – 971, 1081 – 1085, 1087
986, 991, 1086 MT see machine translation
movement
– alternating 105 – 106, 118 – 119, 121, 437,
722 N
– complex 114 – 119, 516, 753, 764, 689, 769
– formal operations 252, 257, 296 – 309, narrative 166, 179 – 180, 228, 368, 373, 375,
328 – 329, 333, 344 – 346, 349, 353, 358, 459, 418, 421, 425, 443, 448, 456, 483, 489,
466, 471, 478 – 479, 499, 664, 665, 677 501 – 502, 527, 626, 630, 635, 667, 674 – 675,
– iconic 79 – 80, 90, 105, 171, 186, 189, 198, 705, 747, 754, 790, 793, 806 – 807, 871,
270, 336, 390, 394, 396 – 399, 401 – 402, 966 – 968, 998, 1000, 1005, 1007, 1010, 1012,
437 – 438, 449, 500, 530, 532, 537, 628, 648, 1015 – 1016, 1027, 1029, 1032, 1048 – 1049,
668, 822, 824, 1000, 1010 – 1011 1062 – 1063, 1065 – 1067
– in child language acquisition 576, 578, negation 64, 68, 70, 94, 130, 188, 192, 196,
580, 589 – 591, 649 – 659, 668 – 670 223, 246, 268, 294, 300 – 301, 344, 348 – 350,
– in classifier constructions 42 – 43, 82, 125, 354 – 355, 357, 361, 478 – 479, 482, 519 – 521,
160, 162, 165, 166, 168, 172 – 173, 398, 526 – 527, 534, 538, 541, 603, 611 – 612, 641,
406 – 407, 415, 420, 422, 424, 448, 455 – 456, 671 – 673, 720, 766, 771, 773, 790, 820, 828,
458 – 459, 584, 638 – 639, 706, 718, 765, 823, 832, 853, 872, 929, 937, 1031 – 1033, 1047,
1088, 1092 1057, 1061
– in compounds and word formation – adverbial 323
98 – 103, 105, 826, 849 – concord 316 – 317, 319, 332 – 335
– in discourse and poetry 500, 503, – head movement also see headshake, 70,
808 – 809, 1014 – 1016, 1018, 1063 349, 357, 521, 526
– in verb agreement 13, 44 – 45, 82, – manual 223, 316 – 319, 324, 330, 766
137 – 139, 145, 149, 205 – 206, 208, 210 – 213, – manual dominant 318, 521
215, 217 – 218, 221 – 222, 280, 453, 456, 499, – non-manual also see headshake, 330, 333,
521, 537, 543, 593, 749, 772 349 – 350, 354 – 355, 357, 766
– local see aperture change – non-manual dominant 318, 333, 521
– morphological 82 – 83, 89, 91, 96, 105, 107, negative particle 92, 94, 96, 103 – 104, 192,
114 – 119, 121 – 122, 128, 130, 140, 186, 293, 318 – 319, 324, 521, 832
Index of subjects 1113

negator 94, 96, 317 – 319, 323 – 324, 326, 328, 820 – 823, 826, 828, 830 – 831, 836, 844, 854,
330 – 331, 333 – 335, 340, 348 – 349, 355, 521, 863, 866 – 868, 874, 895 – 896, 901, 903 – 904,
527, 851 910, 912, 914, 918, 923 – 924, 927, 929,
neologism 219, 705, 806, 1011 – 1012, 933 – 935, 937 – 938, 960, 962 – 964, 967, 970,
1014 – 1016, 1019 972, 981, 987 – 992, 1000, 1006 – 1007, 1014,
nominal 29, 86, 88, 90, 95, 113 – 114, 119 – 1016 – 1017, 1025 – 1026, 1028, 1030, 1037,
120, 126, 128, 132, 148, 151, 205, 233, 271, 1045, 1047 – 1049, 1051, 1067 – 1068, 1071,
278, 323, 354, 360 – 361, 380, 469, 471, 476, 1078, 1084, 1094 – 1095
527, 537 – 538, 611, 727, 793, 825, 872, 1088 number agreement see agreement, number
non-manual also see feature, non-manual, 7, number feature see feature, number
12, 22, 24, 55 – 57, 62 – 64, 68 – 70, 94, 106, number sign 14, 102, 112 – 113, 121,
114, 117 – 120, 132, 139, 171, 187, 190 – 191, 123 – 124, 530, 585, 866
194, 196 – 197, 199 – 201, 209, 216, 218, 221, numeral 28, 84, 113, 119 – 122, 125 – 126,
239, 245 – 247, 252, 259 – 260, 266, 268, 273, 129 – 132, 160, 175 – 176, 178, 232, 235 – 236,
275, 278 – 279, 292 – 295, 297, 302, 309, 272, 283 – 286, 482, 541, 795, 802, 803
316 – 319, 322 – 327, 330 – 331, 333 – 335, numeral incorporation also see incorpo-
340 – 341, 344, 349 – 350, 354 – 361, 376 – 377, ration, 101 – 102, 112 – 113, 121 – 123, 284 –
379 – 380, 424, 440, 450 – 452, 462, 472, 285
477 – 478, 483 – 484, 492 – 493, 501, 504 – 505,
518, 520 – 521, 525 – 527, 530, 539, 544, 562,
579, 626, 633 – 634, 637, 639, 641, 648, 661, O
664, 666, 670 – 675, 707, 726, 729, 733 – 734,
765 – 766, 808 – 809, 829, 833, 843, 851, 924, object, grammatical 13, 44 – 45, 94, 96, 125,
931, 937, 991, 1003, 1011 – 1012, 1015, 1031, 138 – 139, 142 – 143, 148 – 151, 176 – 177,
1040 – 1041, 1045, 1053, 1055, 1057, 1059, 205 – 206, 208, 211 – 212, 215 – 219, 221, 234,
1061 – 1062, 1081 – 1084, 1086 – 1089, 1094 246, 248, 251 – 252, 254, 267 – 268, 273, 280,
– adverbial see adverbial, non-manual 297 – 298, 301 – 302, 304, 308, 331, 345 – 348,
– agreement see agreement, non-manual 350, 354, 356, 359, 372, 376, 401 – 402, 416,
– dominant see negation, non-manual 443, 448, 454, 467, 469, 472, 480, 520 – 522,
dominant 526, 542, 587 – 588, 603, 610, 662 – 666, 673,
– negation see negation, non-manual 744, 832, 877, 1058, 1089, 1091 – 1092
– simultaneity 245 – 247, 260, 501, 520 – direct 139, 148, 212, 248, 297, 301, 467,
notation also see annotation, 8, 12, 62, 143, 469, 832, 1047, 1056
370 – 371, 381 – 382, 586, 895 – 896, 915 – 916, – indirect 45, 139, 148 – 149, 1047
921, 926 – 927, 1079, 1083, 1085, 1088, 1094 onomatopoeia 395, 400, 441 – 442, 586
noun phrase 44, 119 – 120, 129 – 132, operator 295, 317, 346, 348 – 349, 358 – 359,
140 – 141, 144, 171, 175, 227, 239, 265 – 269, 376 – 380, 383, 465 – 466, 478 – 479, 1061
271, 273 – 279, 283 – 287, 293, 331, 342, 345, oral education see education, oral and
347, 358, 360, 371, 382, 466 – 467, 471, 476, oralism
480, 613, 675, 766, 807 – 808, 832, 835 oralism also see education, oral, 911, 913,
noun-verb pair 83, 88 – 90, 95, 106, 807, 826 916, 919 – 920, 922, 952 – 953, 955 – 956
number, grammatical 84, 95, 101 – 102, orientation
112 – 113, 119 – 125, 129 – 132, 136, 138, – morphological process 42 – 43, 45 – 46, 80,
140 – 141, 143 – 146, 151 – 153, 212, 216, 137 – 139, 145, 150, 165, 168, 171, 176,
231 – 234, 265 – 268, 279 – 285, 287, 398, 413, 205 – 206, 215, 322, 336, 416, 420, 444, 451,
440, 501, 518 – 523, 525, 530 – 532, 538, 453, 456, 457, 593, 720, 744, 765, 767, 780,
540 – 541, 544, 552 – 553, 555, 557, 559 – 562, 781, 821
566 – 568, 577 – 579, 585, 590, 604, 608, 610, – phonological 6 – 8, 10, 13, 17, 22, 24,
615, 634, 638, 647 – 648, 653, 656, 658, 26 – 27, 39, 42 – 43, 80, 99, 118, 171, 176,
661 – 662, 668, 694 – 696, 698, 703 – 704, 713, 180, 197 – 198, 231, 235 – 236, 277, 394, 505,
718, 728 – 730, 746, 754, 770, 775, 790, 521, 525, 537, 575, 592, 650, 688 – 690, 728,
792 – 793, 795, 797 – 805, 808 – 810, 817, 739, 767, 769, 775, 780 – 781, 788, 821, 949,
1114 Indexes

952, 957, 960, 971, 1013, 1049, 1051 – 1052, 820, 824, 852, 926, 929, 932, 1023, 1040,
1055, 1077, 1082, 1087, 1091 1046, 1049, 1054, 1056, 1084 – 1086, 1088,
1094
– notation 8, 12, 929, 1094
P – transcription 5, 926, 1046, 1086
– variation 4 – 5, 9, 14, 17
pantomime 392, 627 – 630, 634 – 635, 637 phonology 4 – 5, 7 – 9, 11 – 17, 57, 59 – 60, 62,
parameter/parameterize 8, 22, 27, 30 – 31, 69, 71, 78, 80 – 83, 91, 97 – 98, 100 – 102,
36, 45, 101 – 102, 104, 107, 165, 169, 171, 105 – 107, 112, 114 – 117, 119 – 121, 127 – 128,
172, 230, 413, 648, 650, 652, 655, 658, 661, 131 – 132, 138 – 141, 144, 146, 150, 168 – 171,
688 – 692, 694, 696, 699 – 707, 768, 788, 795, 177 – 178, 193, 195, 198, 212 – 215, 219,
849, 855, 1001, 1003, 1010, 1013, 1016 – 222 – 223, 230 – 231, 257, 310, 341, 392, 395,
1017, 1077, 1082 – 1085, 1088, 1091 413, 438, 444 – 445, 452 – 453, 456 – 457, 459,
paraphasia 741, 765, 768, 780 515, 521, 525, 530, 533, 537, 544, 561,
part of speech also see word class and 575 – 576, 579, 580, 585 – 587, 592, 606,
grammatical category, 91 – 92, 95, 741, 750, 633 – 635, 647 – 651, 655, 659, 676 – 677,
1054, 1067 711 – 722, 724, 726 – 735, 743, 747, 765, 768,
passive (voice) 251, 259, 542, 867, 874, 877 770 – 771, 774 – 776, 794, 848 – 849, 852,
past tense 33, 92, 188 – 192, 196 – 197, 495, 915 – 916, 921, 923 – 925, 928 – 930, 932, 935,
536, 557, 611 – 613, 669, 677, 705, 828, 877, 938, 986, 1016, 1023, 1034, 1040, 1045 –
1027, 1046 1047, 1054, 1057, 1059 – 1061, 1067, 1084,
perception 4 – 7, 17, 21 – 22, 39, 46, 61, 69,
1095
266, 452, 457, 507, 523, 574, 576, 582, 715,
– assimilation see assimilation, phonological
728, 732, 734 – 735, 746, 749 – 750, 752 – 753,
– development 647, 650 – 651, 925
755, 780, 1014
– change see change, phonological
perseveration see error, perseveration
– oral component see mouth gesture
person 13, 33, 43, 121 – 122, 125, 136, 138,
– similarity 454, 459, 530, 690, 694 – 695, 698
140 – 141, 143 – 146, 150 – 153, 207, 211 – 214,
– slip see error, phonological
216 – 219, 234, 237, 240, 266 – 267, 269,
– spoken component see mouthing
279 – 280, 287, 320, 336, 348, 354, 370, 378,
– variation 788 – 793, 795 – 796, 798 – 799,
413, 440, 456, 501, 518, 521 – 522, 534, 565,
809 – 810, 831, 1035
662, 713, 808, 874, 1055, 1058, 1089
– first 13, 122, 125, 143 – 144, 150, 213, 218, phonotactic 22, 28 – 29, 35, 37, 52, 396, 650,
229 – 233, 275, 277, 365, 370 – 372, 376, 379, 704, 848 – 849
382 – 383, 518, 588, 608, 808, 1055 – 1056 pidgin 40, 85, 561, 567, 842 – 844, 852 – 854,
– non-first 122, 143 – 145, 153, 218, 228, 862 – 865, 874 – 876, 878, 936, 970, 991
230 – 233, 275, 808 pinky see little finger
– second 121, 214, 230 – 231, 266, 269, 275, place of articulation also see location, 7,
336, 527, 608, 1057 22 – 25, 27, 30 – 31, 33, 35, 43, 45, 114 – 115,
– third 121, 125, 230 – 231, 266, 269, 275, 168, 413, 437, 442, 448, 575, 578 – 579, 583,
280, 456, 527, 608 – 609, 1065 586, 591, 649, 720 – 721, 728 – 729, 733, 747,
perspective 167, 368, 371, 374 – 376, 397, 791, 795, 830
412 – 413, 415, 418 – 427, 499 – 501, 587, plain verb see verb, plain
590 – 592, 635, 640, 671, 674, 707, 774 – 775, planning see language planning
854, 1062 plural, plurality 13, 81 – 82, 91, 96, 105 – 106,
PET 712, 734, 749, 752 140, 143 – 144, 200, 211, 230 – 234, 268, 270,
phi-feature see feature, phi 279 – 284, 287, 336, 534, 537, 544, 773 – 774,
phonetic(s) 21 – 22, 31, 37 – 38, 44 – 46, 57 – 872, 937, 1030 – 1031, 1088 – 1089, 1092
59, 61, 68, 71, 107, 123, 125, 143, 145 – 146, – collective 121, 124 – 125, 140, 143, 279,
150, 178, 348, 351, 361, 390 – 391, 395 – 396, 872, 1089, 1092
401, 403 – 404, 406, 561, 578, 586, 649 – 650, – distributive 117, 121 – 122, 124 – 125, 140,
656, 668, 700, 714 – 715, 728, 732, 742, 776, 143, 1089, 1092
Index of subjects 1115

poetry 406 234, 275, 279, 322, 336, 403, 456, 459,
point, pointing, pointing sign also see deictic 662 – 663, 668, 688, 705, 718 – 719, 734, 744,
and pronoun 767, 771 – 772, 819 – 823, 825, 835 – 836, 849,
– gesture see gesture, pointing 903, 1011 – 1015, 1059, 1078
– linguistic 45, 88, 92 – 94, 104, 121 – 122, proform also see pronoun, 166,
124, 139, 140 – 142, 190, 208 – 215, 217 – 218, 227 – 228, 234, 240, 254, 822, 1085
221 – 222, 238, 267 – 269, 271 – 274, 276 – 277, prominence 5, 55 – 57, 59, 67 – 71, 119, 276,
279 – 280, 304, 351 – 353, 355, 414, 418, 424, 282, 370 – 371, 462, 464, 474, 478, 480 – 481,
426 – 427, 471, 503, 505, 522, 526 – 527, 530, 870 – 871, 958, 1032
533, 537 – 539, 564 – 565, 580, 584 – 585, pronominal, pronoun 59, 69, 84, 86, 88, 94,
587 – 588, 592 – 594, 613 – 614, 663, 667, 674, 96, 101, 112 – 113, 121 – 122, 124, 139, 141,
771 – 773, 809, 851, 934, 1013, 1051, 1053, 146, 175, 205, 207 – 211, 214 – 217, 219, 252,
1055, 1061 – 1063, 1091 – 1092 267, 271 – 280, 287, 309, 323, 340, 348,
point of view also see role shift and 350 – 354, 357 – 361, 370 – 372, 376 – 380,
constructed action 69, 365, 367, 369 – 372, 382 – 383, 388, 403, 405, 408, 413, 417, 440,
376 – 377, 380 – 383, 417, 502, 637, 674, 463, 469, 470 – 473, 480, 482 – 483, 501,
1049, 1051, 1054, 1066 533 – 534, 541, 543, 584 – 585, 587, 588, 591,
polar question see question, polar 593, 594, 604, 610, 663, 666 – 667, 674, 794,
politeness 229, 491, 494, 502 – 504, 810, 1026 808 – 809, 851, 934, 1056 – 1057, 1062, 1089,
possessive see pronoun, possessive 1093
pragmatics 38, 62, 175, 253, 388, 412 – 413, – collective 121
417, 483, 489, 771, 1023 – deictic 143, 198, 227 – 228, 231, 267, 403,
predicate 42 – 44, 77, 84, 87, 91, 95 – 96, 104, 527, 587, 593, 667, 851, 1061 – 1062
119, 160, 164, 175, 212 – 213, 219, 254 – 255, – distributive 121 – 122
279, 281, 287, 298, 309, 320, 322 – 325, – exclusive 215, 233, 285, 416, 474, 565, 648,
331 – 332, 335 – 336, 348, 353, 374, 376, 380, 754, 779, 865
412 – 427, 432 – 434, 442 – 459, 476, 517, 694, – first 122, 229 – 234, 266, 370 – 372, 376,
608 – 611, 636 – 639, 641, 660 – 661, 669 – 670, 378 – 379, 382 – 383, 501, 587, 1055 – 1056
718, 745, 767, 793, 835, 872, 1060, 1984 – inclusive 233, 285
priming 405, 700 – 703, 711, 717 – 719 – non-first 122, 143 – 145, 228, 230 – 233
processing 4 – 7, 22, 31 – 32, 34, 38, 172, 324, – possessive 129, 233, 267, 269 – 270, 273,
355, 393, 415 – 416, 427, 582, 584, 626, 276, 278 – 280, 287, 538, 591
632 – 633, 670, 687 – 690, 696, 699 – 707, – reciprocal 212, 218, 223 228, 234,
715 – 718, 724 – 725, 730, 734 – 735, 739 – 740, 236 – 237, 239
744 – 755, 763, 765 – 768, 771 – 772, 774 – 781, – reflexive 228, 234, 236 – 237, 267, 277 –
848, 856, 879, 923 – 924, 929, 933, 936, 938, 280, 287, 466, 476
989, 992, 1033 – 1034, 1045, 1076, 1078, – relative 227 – 228, 234, 238 – 240, 309,
1080 – 1081, 1095 357 – 361
production 7, 17, 21 – 22, 39, 101, 114, 172, – second 121, 230 – 231, 1057
173, 228, 248, 249, 258, 276, 325, 373, 406, – third 121, 230 – 231
412, 416, 469, 514, 516, 574 – 576, 577 – 580, prosodic, prosody 4, 35, 55 – 58, 61 – 64,
583, 589 – 590, 592 – 594, 608 – 611, 618, 626, 67 – 71, 268, 293, 317, 324, 326, 341, 367,
628, 630, 632, 636, 638, 649 – 660, 662, 665, 468, 473 – 474, 478, 483 – 484, 981, 990, 1061
667 – 670, 674, 676, 687 – 688, 699, 705, – constituent see constituent, prosodic
740 – 742, 746 – 747, 749 – 750, 753, 755, 765, – feature see feature, prosodic
769, 773, 775 – 779, 788, 804, 845 – 847, 853, – hierarchy 56, 58
855, 964 – 970, 989, 991, 1023, 1026, 1027, – model 22 – 27, 30 – 31, 37 – 38, 444, 677
1030, 1032 – 1033, 1037 – 1041, 1055 – 1056 – structure 22, 24, 27, 59, 61 – 62, 114, 395,
productive, productivity 38 – 41, 81, 83, 91, 463
100, 103 – 105, 119, 164 – 166, 170 – 172, 180, protolanguage 515 – 516, 545, 874 – 875
1116 Indexes

Q reference 42, 44, 77, 84, 86 – 88, 90 – 91, 94,


118, 121 – 125, 127, 140, 144 – 146, 149 – 151,
question 56, 62 – 67, 71, 80, 83 – 84, 89 – 90, 160 – 166, 169, 171 – 178, 188 – 190, 200, 212,
237 – 239, 246 – 247, 251, 259, 268, 292 – 302, 222, 227 – 233, 236, 238 – 240, 253 – 254,
304 – 311, 325, 327, 345 – 354, 356, 359, 361, 266 – 272, 274 – 280, 285, 287, 346, 348,
464 – 465, 475, 479, 481, 491 – 493, 522, 526, 351 – 352, 360, 366, 371 – 372, 376 – 379, 382,
530, 534, 539, 541, 543 – 544, 603, 611 – 612, 390 – 395, 399, 412 – 418, 420 – 427, 447,
641, 648, 661, 663 – 665, 671 – 673, 707, 453 – 457, 469, 470, 489, 491, 527, 530 – 532,
722 – 723, 726, 808, 832 – 833, 851, 1030, 536 – 537, 543, 565, 584, 586 – 588, 591 – 593,
1033, 1036, 1089 604 – 605, 609, 629, 638, 663, 666 – 671,
– content 63 – 67, 237, 246 – 247, 292 – 294, 674 – 675, 688, 705, 717, 743, 766, 768, 773,
296 – 297, 299 – 302, 304 – 306, 308 – 311, 807, 823, 835, 847, 853, 1004, 1018, 1039,
345 – 346, 349, 353 – 354, 526, 534, 648, 661, 1047, 1060, 1062 – 1063, 1066 – 1067, 1079
663 – 665, 671 – 672, 1089 reflexive see pronoun, reflexive and verb,
– non-manual marking 246 – 247, 292 – 297, reflexive
309, 526 register 435, 502 – 503, 505, 580, 788 – 790,
– polar 63, 246 – 247, 251, 293 – 296, 300, 792, 808 – 809, 955, 969, 987, 1026 – 1027,
348 – 350, 354, 356, 361, 493, 526, 543, 641, 1039
671, 673, 722 – 723, 726, 808, 832 – 833, 1089 relative clause see clause, relative
– particle 247, 292, 296, 522, 543 relative pronoun see pronoun, relative
– pronoun see question, sign relativization see clause, relative
– rhetorical 62, 308, 325, 479, 481 repair see error, repair
– sign 223, 292, 296 – 299, 301, 304 – 309, representational gesture see gesture,
526, 534, 539, 541, 544, 660, 664, 672 – 673, representational
794, 870 rhetorical question see question, rhetorical
– wh- see question, content rhyme 128, 406, 701, 998 – 999, 1008, 1014,
– word 80, 84, 247, 304, 307, 534, 541 1016 – 1017
– yes-no see question, polar – poetic 406, 998 – 999, 1008, 1014, 1016 –
quotation 230, 365 – 374, 377 – 382, 629, 633, 1017
750, 1061 – syllable 128, 701
rhythm, rhythmic 28, 34 – 35, 55 – 57, 61, 105,
576, 578, 580, 629, 649, 650, 752, 998 – 1001,
R 1003 – 1004, 1007, 1014 – 1016, 1052
right hemisphere see hemisphere, right
rate of signing 193, 578, 594 rightwards movement see movement,
reciprocal see verb, reciprocal rightwards
recognition ring finger 13, 123 – 124, 232, 524
– automatic 1075, 1076 – 1078, 1081 – 1084 role shift also see constructed action and
– error 1025 point of view, 152, 365, 368 – 373, 376 – 384,
– interpretation 984, 989 397, 489, 500, 633, 638, 640, 674, 808 – 809,
– legal 889 – 896, 899, 903 – 904, 926, 950, 1001, 1003, 1007, 1012, 1019, 1061 – 1062,
953 – 955, 1095 1078
– linguistic 889 – 896, 899, 920, 926, 950, root 22 – 23, 25, 30 – 31, 33, 37, 42, 46, 79, 88,
954 – 955, 1047, 1095 101, 150, 165 – 166, 168 – 169, 171 – 172, 179,
– psycholinguistic 31, 107, 388, 408, 493, 194, 207, 209, 321 – 322, 335, 341, 406, 432,
699, 701, 704, 718, 732, 734, 750, 752 – 753, 454, 504, 620, 817, 915, 1004, 1055
929
recreolisation 862, 864, 879 – 881
recursion, recursive 517, 542, 610, 871 S
reduplication 29, 39, 77, 81, 96, 100,
104 – 106, 112 – 121, 123 – 128, 130 – 132, 140, school for the deaf see education, school for
143, 193 – 196, 200, 217, 257, 277, 280, the deaf
282 – 284, 287, 306, 403, 537, 539, 544, 589, secondary sign language 513 – 514, 517, 528,
809, 869, 871 – 873, 1089 539 – 540, 543 – 544, 567, 867, 869
Index of subjects 1117

segment, segmentation 5, 7, 21, 23, 27, sign language acquisition see acquisition
29 – 32, 34 – 37, 46, 60, 81, 83, 98 – 99, sign language planning see language planing
104 – 105, 519, 578, 580, 582, 616 – 617, 619, sign system 209, 513, 519, 535, 568, 578, 588,
629, 631, 657, 700, 724, 728 – 729, 796, 802, 614, 866, 868, 911, 982 – 983
809, 878, 1046, 1054, 1070, 1079 – 1080, simultaneity, simultaneous 4 – 5, 13, 16, 23,
1083 26 – 34, 59 – 60, 64, 70, 77 – 83, 86, 91,
semantic role see thematic role 96 – 97, 101 – 107, 128, 164, 168, 171, 173,
semantics 22, 29, 44, 58, 63 – 64, 66, 68 – 69, 195, 218, 245 – 250, 252 – 257, 260, 273,
71, 80 – 81, 84 – 85, 87, 91 – 92, 96 – 97, 321 – 322, 335, 343, 374, 403, 412 – 413,
100 – 105, 117 – 118, 120, 126, 128, 132, 138, 422 – 427, 470, 493, 496, 501, 516, 519 – 520,
141, 149, 151, 158, 160 – 163, 170, 175 – 178, 544, 564 – 565, 569, 574, 576, 579, 582, 584,
191, 193 – 196, 200, 205, 207, 211 – 214, 586, 595, 629, 635, 637 – 641, 657, 666, 669,
217 – 222, 236, 253, 255 – 259, 268, 270, 320, 672, 675 – 676, 697, 707, 711, 715 – 718, 725,
340, 356, 365, 380 – 383, 405, 407, 412 – 417, 729, 732, 734, 772, 778 – 779, 792, 795, 801,
421, 425, 427, 466, 467, 475, 478, 483, 492, 804, 807, 810, 845 – 846, 849, 873, 922, 957,
514 – 515, 538, 564, 586, 611, 613, 615, 969, 990, 1001, 1016 – 1017, 1039, 1045,
626 – 632, 659, 670, 689, 694, 715 – 721, 1047, 1049, 1061, 1066 – 1067, 1080, 1084,
724 – 725, 741 – 743, 747, 753, 765, 768, 776, 1095
797, 799, 806, 818, 820 – 821, 834, 844, 854, – communication 778, 801, 922, 957
929, 938, 1010, 1013, 1023, 1026, 1031, – construction 255 – 257, 412 – 413, 422, 424,
1049, 1055, 1058, 1060, 1076, 1087, 1089, 427, 564 – 565, 569, 807
1093 – 1094
– morphology 77, 82 – 83, 86, 520, 595
semantic change see change, semantic
slip also see error
sentence, sentential
– of the hand 38, 575, 713, 719 – 732, 735
– complex 63 – 65, 255, 293, 309, 340, 342,
– of the tongue 712, 716, 729 – 731, 1024
347, 357 – 361, 376 – 377, 479, 522, 534,
sonority 28, 1016
610 – 611, 767, 774, 1032
space, spatial also see sign space
– type 56, 64, 245, 251, 256, 726,
– coding 669, 694, 697 – 698
1088 – 1089, 1092
– gestural 143 – 146, 150
– complement see clause, complement
– mapping 418, 489, 499, 502, 874
– negation 60, 188, 316 – 320, 323 – 324,
– referential 266, 268, 412 – 414, 587
327 – 336, 349
sequential 7, 15, 27, 29 – 32, 34, 36, 43, 60, – semantic 412 – 413, 417 – 418, 412, 427
81 – 85, 89, 91 – 92, 95 – 97, 102 – 103, 107, – topographic 118, 131, 217, 412 – 416, 418,
128, 131, 173, 218, 249, 321 – 322, 335 – 336, 427, 749, 773
343, 374, 519 – 520, 574, 576, 579, 582, 586, – use of 173, 230, 266, 268, 279,
595, 610, 629, 632, 637, 657, 669, 732, 769, 412 – 417, 424, 426, 499, 518, 558, 563,
873, 1014, 1016 – 1017, 1039, 1051 587 – 588, 666, 749, 766, 773, 843, 854, 868,
shared sign language also see village sign 874, 936 – 937, 968, 991, 1003, 1039, 1041
language and the Index of sign languages, spatial syntax see syntax, spatial
146, 190, 423, 439, 552 – 553, 560 – 569, 603, spatial verb see verb, spatial
616, 789, 843, 893, 911, 937, 971, 981 speech act 228, 324, 489 – 493, 1026, 1061
short-term memory see memory, short-term speech error see error
sign space, signing space 105, 117 – 118, speech reading 582, 916, 918
121 – 132, 139 – 143, 164, 167 – 169, 177, 189, spreading 317, 325 – 326, 330 – 331, 722
197, 210, 217, 221 – 222, 228 – 230, 240, standardization 800, 803, 889 – 891, 896 – 902,
266 – 267, 269, 276, 304, 403, 405, 407, 438, 905, 955, 1023
455 – 456, 492, 495, 521 – 522, 524 – 525, 527, storytelling 501, 506, 540 – 541, 544, 961,
542, 563 – 565, 569, 579, 587, 591, 594, 635, 1010, 1036
637 – 639, 667, 697, 705, 744, 749, 791, 796, stress 14, 56 – 58, 64, 67 – 69, 97, 106,
804, 806, 809, 991, 1040, 1048, 1061, 1063, 271 – 272, 274 – 275, 282, 293, 462, 466, 471,
1065, 1088 – 1089, 1091 – 1092 473 – 475, 479 – 481, 483 – 484, 792, 850
1118 Indexes

style 415, 435, 502, 766, 789, 804, 808 – 810, thematic role, theta role 44, 141, 148 – 149,
968, 1001, 1004, 1026 – 1027, 1034, 1039 246, 254, 453 – 454, 587, 608 – 609, 613, 716,
stylistic 114, 298, 788 – 790, 793, 807, 1058, 1088
808 – 810, 970, 972, 985 – actor 220, 253, 256, 293, 367, 375,
subcategorization 350, 356, 716 414 – 415, 607 – 610, 744, 807, 1050, 1058
subject 44 – 45, 65, 138 – 139, 142 – 143, – agent 42, 44, 79, 103, 148, 161, 164, 167,
148 – 151, 176 – 177, 188, 205 – 206, 208, 212, 205, 221, 246, 253, 255 – 256, 370, 382, 420,
215 – 218, 221, 234, 246, 249 – 250, 252 – 254, 613, 662, 771 – 772
267 – 268, 275, 278, 302, 304, 306, 308, 340, – goal 44 – 45, 149, 205 – 206, 211 – 213,
345, 347 – 348, 350 – 352, 359, 369, 371, 376 –
220 – 221, 372, 826, 1060 – 1061
377, 448, 454, 468 – 469, 471 – 472, 480 – 483,
– patient 148, 205, 220, 246, 254 – 255,
520 – 522, 533, 538, 575, 587 – 588, 603, 613,
607 – 611, 613, 807
661 – 666, 675, 741, 744, 788, 790, 807 – 808,
– source 44 – 45, 149, 205 – 206, 211 – 213,
832, 867, 872, 875, 1047, 1058, 1089, 1092
subordination see clause, subordinate 220, 255, 448, 454 – 456, 1060 – 1061
syllable 7, 21, 23, 27 – 29, 31 – 34, 37, 46, 57, – theme 44, 149, 172, 254, 382, 420, 450,
59, 64, 83, 105, 114, 127, 131, 140, 281, 395, 613, 662 – 663
575, 578 – 580, 582, 590, 648, 696, 713, 720, theme also see thematic role, theme, 406,
726, 731, 733, 741, 826, 850, 1008 418, 463, 466, 468, 470, 1001 – 1002, 1008 –
synonym 435 – 436, 900 – 901 1009
syntactic, syntax 22, 40, 44, 55 – 59, 61 – 63, thumb 11 – 13, 123 – 124, 269, 277, 336, 391,
65 – 66, 69 – 71, 141, 145, 162, 171 – 172, 210, 393, 399, 525, 530, 658, 742, 744, 794, 1011,
216, 218, 245, 250, 253, 257, 287, 310, 317, 1049
320, 341, 367, 383, 468, 473 – 474, 478, 483 – tip of the finger 711, 717 – 718
484, 514 – 515, 520, 526, 539, 542, 564, 579, topic 56, 61, 63, 65, 245 – 246, 248,
607, 633, 648, 661, 666 – 667, 674 – 675, 677, 250 – 252, 254, 259 – 260, 294 – 295, 299, 304,
715, 734 – 735, 745, 766, 768, 770, 773, 777, 325, 330, 345 – 346, 248 – 249, 354 – 355, 359,
780, 820, 835, 843, 868, 875, 877, 928, 931, 403, 424, 462 – 473, 476, 478 – 479, 481 – 484,
938, 990, 1007, 1023, 1029 – 1030, 1047, 495, 497 – 498, 502, 520, 522, 542, 641, 648,
1054, 1094 661, 663 – 664, 666, 671, 673, 807, 809, 820,
– constituent 57 – 58, 62 – 63, 69, 246 – 258, 829, 832 – 833, 847, 867, 869 – 870, 970,
276, 282, 286, 294, 303 – 305, 310, 322, 325, 330 – 331, 340 – 345, 353, 356, 358 – 359, 434, 442 – 448, 454 – 456, 464, 466 – 468, 471 – 476, 478 – 481, 520, 533, 611, 807, 818, 921, 1030 – 1031, 1038, 1094
– spatial 648, 661, 666 – 667, 674 – 675, 745, 766, 777
– word order also see constituent, order, 67, 146, 234, 239, 245 – 260, 265 – 266, 268 – 269, 271, 277, 279, 284 – 287, 293, 296 – 297, 301, 305 – 306, 308, 341 – 342, 347 – 348, 355, 359, 462 – 464, 469 – 470, 474, 478, 480 – 481, 484, 519 – 520, 530, 538, 542, 544, 575, 588, 594, 633, 648, 661 – 664, 675, 843, 853, 864, 867, 869 – 870, 873, 911, 922, 929, 1030, 1039, 1088, 1093
synthesis 715, 1075 – 1076, 1083 – 1085, 1095

T

taboo 502 – 505, 536, 543, 930
tactile sign language also see deafblind, 513 – 514, 523 – 525, 527, 545
… 1055, 1057, 1061, 1067
topicalization 245, 248, 250 – 252, 260, 809
topic-comment 246, 250 – 251, 259, 424, 468, 520, 832 – 833, 867, 870
topographic use of space see space, topographic
Total Communication 803, 925, 957
transcription see notation
transitive, transitivity 149, 166 – 169, 172, 177, 207, 210, 212 – 213, 222, 253, 256, 259, 273, 420, 426 – 427, 468, 564 – 565, 609, 1059
translation 103, 530, 847, 850 – 851, 895, 969, 981 – 982, 984 – 986, 990, 1001, 1013, 1029, 1031, 1049 – 1050, 1055, 1057, 1059, 1066 – 1067, 1069, 1075 – 1076, 1078, 1084, 1088, 1093 – 1095
triplication 113 – 115, 117, 128, 130 – 132
turn, turn-taking 70, 368, 489 – 490, 493 – 499, 507, 527, 790, 1023, 1027
typological, typology 22, 32 – 34, 38, 85, 112 – 113, 117, 120, 123, 133, 151, 160, 200, 210, 222, 245 – 246, 248 – 253, 258 – 260, 276, 280, 292 – 297, 306, 311, 316 – 317, 340, 350, 353, 357, 361, 413, 423, 426 – 427, 436, 446, 467, 474, 476 – 477, 479 – 481, 484, 513 – 514, 517, 519 – 523, 542, 545, 577 – 579, 587, 594, 617, 619, 650, 660, 713, 734, 771, 828, 831, 836, 852, 937 – 938, 1023, 1046 – 1047

U

urban sign languages see the Index of sign languages
use of space see space, use of
Usher syndrome 523 – 524

V

variation
– grammatical 545, 788, 790, 807
– lexical 788, 790, 796 – 805, 889, 898, 902, 905
– regional 797 – 799, 899 – 903, 955, 1038
– sociolinguistic 788 – 789, 791 – 792, 795 – 796, 798 – 802, 805, 807, 810, 902, 930, 1035
verb
– classifier 43 – 44, 112 – 113, 124 – 127, 131 – 132, 158 – 159, 164 – 166, 168 – 175, 176 – 180, 347 – 348, 374, 412 – 413, 415 – 418, 420 – 423, 425 – 427, 432, 434, 448 – 449, 564 – 565, 594, 636 – 639, 661, 669 – 670, 718, 745, 767, 835, 1060, 1091
– agreeing also see verb, directional, 44 – 46, 82, 91, 96, 112, 124 – 125, 131 – 132, 136 – 139, 142, 144 – 146, 148, 150 – 153, 205 – 206, 216 – 217, 218, 220, 229, 231, 254, 267, 279 – 280, 328, 336, 348, 354, 371 – 372, 379, 383, 413, 447 – 449, 452 – 455, 457, 470, 499, 522, 543, 584, 586 – 587, 593 – 594, 610, 642, 661, 666 – 669, 674, 707, 771, 853, 873, 929
– backwards 149 – 150
– directional also see verb, agreeing, 413 – 414, 418, 426, 868, 1088, 1091
– indicating 229, 807
– modal 94, 187 – 188, 196 – 200, 301, 534, 818
– plain 44, 95, 138 – 139, 150, 168, 204 – 206, 212 – 213, 216 – 217, 222, 256, 322, 328, 347 – 348, 447 – 449, 452 – 454, 522, 537 – 538, 588, 807
– spatial 44, 95, 138 – 139, 143, 147 – 152, 164, 215, 414, 434, 447 – 448, 454 – 456, 537, 543 – 544, 588, 648, 663, 771, 773, 874
– reciprocal 91, 96, 106, 116, 205, 212, 218, 223, 237, 543, 719
– reflexive 277 – 280
village sign language also see shared sign language and the Index of sign languages, 146, 229, 259, 423, 518 – 519, 522 – 523, 543, 545, 552, 586, 588, 603, 789, 854, 864, 867 – 868, 910, 971, 982
vision 4, 32, 37, 131, 494, 507, 523 – 524, 582 – 583, 765, 779, 933, 1082
visual contact see eye contact
visual perception see perception
visual salience 131

W

Wernicke’s area 740, 743, 748, 754, 767
wh-cleft 310, 467, 473 – 474, 478 – 479, 481, 484, 809
whole entity classifier see classifier, (whole) entity
wh-question see question, content
word class also see part of speech, 77 – 78, 81, 83 – 97, 433 – 434, 533, 807, 825 – 826, 834, 848
word formation 40, 77, 96, 101, 104, 106 – 107, 179, 533, 543, 579, 606, 729, 816, 818 – 819, 824 – 826, 836
word order see constituent, order and syntax, word order
working memory see memory, working

Y

yes-no question see question, polar

Z

zero marking 97, 113 – 115, 117 – 120, 128, 130 – 132, 143 – 145, 522
Index of sign languages

A

Aboriginal sign languages also see Warlpiri Sign Language, North Central Desert Sign Language, and Yolngu Sign Language, 517, 528, 535 – 539, 543 – 544, 551, 930, 947
ABSL see Al-Sayyid Bedouin Sign Language
Abu Shara Bedouin Sign Language see Al-Sayyid Bedouin Sign Language
Adamarobe Sign Language 158, 258, 423, 426, 439, 560 – 565, 567, 869
AdaSL see Adamarobe Sign Language
Al-Sayyid Bedouin Sign Language 40, 92 – 94, 98, 102, 104, 146, 216, 258, 558, 564 – 565, 569, 588, 616, 788, 867 – 869, 874
American Sign Language 6, 8, 12 – 15, 26, 28 – 29, 33, 35 – 36, 38, 42 – 45, 56, 60 – 61, 63 – 64, 68 – 69, 80, 86, 89, 91, 95 – 96, 98 – 100, 102 – 104, 106 – 107, 113, 117 – 120, 122 – 125, 137 – 142, 147, 149 – 150, 158 – 164, 173 – 175, 187 – 190, 192 – 194, 196 – 199, 214, 216, 218, 227 – 228, 230 – 240, 246 – 247, 249 – 252, 257 – 259, 265 – 287, 293 – 310, 317, 322 – 323, 326, 329 – 332, 334, 342 – 343, 345 – 355, 357 – 361, 368 – 370, 373 – 374, 376 – 377, 388 – 389, 391 – 395, 397 – 399, 401 – 407, 415, 418, 425, 427, 432 – 435, 437 – 442, 444, 449, 458, 465, 467, 469, 471 – 483, 489, 492, 494 – 496, 498 – 507, 518 – 521, 524 – 527, 532 – 533, 536, 541, 554, 561 – 562, 575 – 581, 583 – 594, 605, 609, 637 – 638, 640 – 641, 649, 651 – 658, 660 – 677, 689 – 690, 692, 694 – 699, 701, 703, 705 – 706, 712, 717 – 719, 721, 728, 742 – 745, 747 – 750, 752 – 753, 765, 769 – 770, 775, 778, 788, 790 – 805, 807 – 810, 817, 820 – 821, 823, 825 – 826, 828 – 833, 835, 843 – 850, 853 – 855, 863, 866 – 867, 869 – 871, 873, 877, 880, 892 – 893, 898, 911, 918 – 919, 921 – 924, 929, 935 – 938, 954, 961, 967 – 968, 970, 981, 990 – 991, 1000 – 1004, 1007 – 1010, 1012 – 1015, 1017, 1030, 1035, 1049 – 1050, 1052, 1054 – 1059, 1061 – 1062, 1067, 1069, 1081, 1084, 1093 – 1094
Argentine Sign Language 189, 196, 199 – 200, 207, 210 – 211, 218, 223, 248, 285, 439, 929
ASL see American Sign Language
Auslan see Australian Sign Language
Australian Aboriginal sign languages see Aboriginal sign languages
Australian Sign Language 13, 56, 80, 90 – 91, 95, 98 – 99, 102, 106, 142, 161, 163, 253 – 254, 259, 274, 278, 293 – 294, 307, 342 – 343, 351, 388, 397, 399 – 400, 405, 495 – 496, 575, 590, 638, 670, 788, 790, 792, 795 – 798, 801 – 803, 806 – 808, 810, 821 – 823, 826 – 827, 836, 842, 849 – 850, 852, 894, 898, 930, 934, 990, 1050
Austrian Sign Language 36, 90, 94 – 95, 109, 113, 118, 120, 128, 161, 233, 283, 294, 302, 305, 307, 342, 449, 482 – 483, 1057

B

Ban Khor Sign Language 557, 560, 936
Brazilian Sign Language 137, 151, 196, 198 – 200, 207, 209 – 210, 217 – 220, 231, 237, 239, 246 – 247, 257, 293 – 294, 304 – 308, 327 – 329, 334, 474, 476, 482, 518, 588, 593, 655 – 659, 661, 663, 665 – 666, 668, 674, 676, 797, 929, 1003
British Sign Language 56, 61, 80, 85, 90, 95, 98, 101 – 102, 104, 106, 113, 117, 122, 127, 137, 161, 171, 174, 189 – 190, 193 – 194, 216, 218, 228 – 239, 246, 249 – 250, 253, 256, 259, 276, 285, 294, 325, 334, 342, 372, 399, 401, 489, 494 – 497, 499, 588 – 590, 593, 653, 655 – 660, 668 – 669, 675, 694, 696 – 697, 700 – 701, 703 – 704, 749, 751 – 752, 765 – 767, 770 – 777, 780 – 781, 788, 790 – 792, 795, 797 – 798, 800 – 801, 804, 806 – 807, 810, 821, 825, 827, 848 – 852, 854 – 855, 869, 871, 896 – 898, 924, 926, 934 – 935, 983 – 984, 1000 – 1003, 1005 – 1007, 1009 – 1013, 1015, 1017 – 1018, 1029, 1050, 1059, 1065, 1082, 1093
BSL see British Sign Language
Bulgarian Sign Language 936
C

Cambodian Sign Language 229, 929
Catalan Sign Language 70, 137, 151, 198, 209 – 210, 217, 285, 294, 307, 318, 320 – 321, 323 – 324, 326, 331 – 335, 372, 379 – 380, 833 – 834, 894
Chilean Sign Language 929
Chinese Sign Language 325, 329 – 330, 334, 336, 388, 928, 970
CisSL see Cistercian Sign Language
Cistercian Sign Language 532 – 534, 543 – 544
Colombian Sign Language 929 – 930
Congolese Sign Language 931
Croatian Sign Language 36, 231, 233, 257, 285, 294, 296, 302, 305, 307 – 308, 469 – 470, 482
CSL see Chinese Sign Language
Czech Sign Language 894, 936 – 937

D

Danish Sign Language 56, 91, 163, 209, 233, 246, 274, 371 – 372, 378, 388, 500, 898, 925, 935, 1055 – 1057
DGS see German Sign Language
DSGS see Swiss-German Sign Language
DSL see Danish Sign Language
Dutch Sign Language see Sign Language of the Netherlands

E

Egyptian Sign Language 931
Ethiopian Sign Language 931

F

Filipino Sign Language 497, 797, 929
Finnish Sign Language 28 – 29, 51, 233, 237, 294, 307, 315, 319, 321 – 322, 336, 339, 468, 486, 652, 654 – 656, 925 – 926, 945, 988
FinSL see Finnish Sign Language
Flemish Sign Language 113, 118, 122, 207, 209 – 211, 220, 253 – 254, 257, 259, 294, 325 – 326, 334, 778, 797, 898, 1038 – 1039
French-African Sign Language 935
French Sign Language 85, 158, 198, 251, 259, 524, 564, 577, 791, 797, 829 – 831, 853 – 854, 866, 915 – 918, 924, 926 – 927, 935 – 936, 967 – 970, 1014, 1057

G

German Sign Language 13, 61, 70, 79, 87, 102, 106, 113 – 118, 120 – 127, 137, 158, 161, 163, 166, 181, 188, 193, 197 – 200, 207 – 212, 215, 217 – 220, 236 – 239, 246 – 247, 251, 280, 283 – 284, 309, 317, 326, 331 – 336, 357 – 361, 372, 379, 421, 423, 425 – 426, 477, 521, 635 – 636, 668, 711, 719 – 722, 724 – 734, 754, 820, 831 – 832, 850, 854, 869, 871 – 872, 891, 898, 926, 933, 935, 966, 1054 – 1057, 1059, 1093
Greek Sign Language 137, 192, 196, 207 – 209, 212 – 213, 217 – 220, 222 – 223, 325 – 326, 593, 851
GSL see Greek Sign Language
Guyana Sign Language 930

H

Hai Phong Sign Language 936
Ha Noi Sign Language 936
Hausa Sign Language 113, 189, 283, 566, 931
HKSL see Hong Kong Sign Language
Ho Chi Minh Sign Language 936
homesign 40 – 41, 407, 517, 543, 545, 565, 577, 594, 601 – 603, 605 – 620, 651, 867 – 868, 875 – 876, 879, 911, 914, 1028
Hong Kong Sign Language 158, 163, 233, 253 – 254, 294, 296, 307, 320, 322, 335 – 336, 341, 343 – 349, 351 – 357, 359 – 361, 432 – 434, 437 – 441, 443 – 444, 446, 448 – 452, 456 – 458, 669, 928
HZJ see Croatian Sign Language

I

Icelandic Sign Language 251, 935
Indian Sign Language 311, 551, 558, 928
Indo-Pakistani Sign Language 79, 86, 98, 102, 113, 118 – 119, 122, 130, 137, 193 – 194, 206 – 210, 213 – 214, 217 – 219, 223, 232, 237, 293 – 294, 304 – 305, 307 – 309, 319, 518, 521, 640, 797, 821, 823 – 824, 829, 928
International Sign 567, 841, 852 – 854, 925, 932, 935 – 936, 980, 990 – 992, 1009 – 1010, 1018, 1077
Inuit Sign Language 554, 564
IPSL see Indo-Pakistani Sign Language
Irish Sign Language 102, 200, 251, 253 – 254, 256, 259, 326, 351, 477, 500, 803 – 807, 849, 852, 854, 935, 1042
IS see International Sign
ISL see Irish Sign Language or Israeli Sign Language
Israeli Sign Language 56, 59 – 61, 64 – 65, 67 – 70, 71, 78 – 80, 82, 85, 88 – 92, 94 – 95, 98, 100 – 106, 113, 120, 122, 137, 140, 171, 192, 234, 251, 294, 307, 310, 321 – 322, 327 – 328, 335, 350, 580, 639, 666, 817, 820, 828, 852, 854 – 855, 867, 869, 874, 898, 928, 935
Italian Sign Language 28, 106, 113, 117 – 118, 120, 127, 161, 187, 190 – 192, 238 – 239, 246 – 247, 253 – 255, 279, 286, 293, 295 – 296, 301 – 303, 305, 307 – 309, 318, 323, 327, 335, 355 – 359, 361, 378 – 379, 382, 390, 476, 492, 520 – 521, 524, 655, 851, 854, 927, 1010, 1030, 1050

J

Jamaican Sign Language 555
Japanese Sign Language 137, 146, 161, 209 – 210, 214, 217, 219, 223, 229, 234, 237, 294, 307, 324, 336, 439, 518, 585, 587 – 588, 590, 653, 661, 898, 928, 934, 1084
Jordanian Sign Language 158, 248 – 249, 258, 318 – 319, 326 – 327, 334, 336, 425, 499 – 500, 928

K

Kata Kolok see Sign Language of Desa Kolok
Kenyan Sign Language 931
KK see Sign Language of Desa Kolok
Korean Sign Language see South Korean Sign Language
KSL see South Korean Sign Language

L

Lebanese Sign Language 326
LIBRAS see Brazilian Sign Language
Libyan Sign Language 931
LIL see Lebanese Sign Language
LIS see Italian Sign Language
LIU see Jordanian Sign Language
LSA see Argentine Sign Language
LSB see Brazilian Sign Language
LSC see Catalan Sign Language
LSE see Spanish Sign Language
LSF see French Sign Language
LSFA see French-African Sign Language
LSQ see Quebec Sign Language

M

Malian Sign Language 566
Malinese Sign Language 931
Manually-coded English 518, 843 – 844
Mardin Sign Language 981, 993
Maritime Sign Language 855, 934
Martha’s Vineyard Sign Language 40, 554, 560, 843, 893, 918, 971, 981 – 982
Mauritian Sign Language 867 – 869, 871, 874, 898
MCE see Manually-coded English
Mexican Sign Language 935
Moldova Sign Language 936
Monastic sign languages also see Cistercian Sign Language, 528, 531, 544
Moroccan Sign Language 931
MSL see Mauritian Sign Language
MVSL see Martha’s Vineyard Sign Language

N

NCDSL see North Central Desert Sign Language
Nederlands met Gebaren see Sign-supported Dutch
New Zealand Sign Language 98, 102, 233, 294, 307, 319, 325, 329, 336, 500, 788, 790, 792, 795 – 797, 800 – 803, 806 – 808, 810, 847, 849, 930, 934, 988
NGT see Sign Language of the Netherlands
Nicaraguan Sign Language 40, 85, 194, 209, 268, 372, 395, 407, 427, 518, 543, 545, 564, 566, 577, 619, 641, 817, 863, 867, 950
North American Indian Sign Language also see Plains Indian Sign Language, 539 – 540
North Central Desert Sign Language 535, 537 – 540, 543 – 544
Norwegian Sign Language 40, 659, 926
NS see Japanese Sign Language
NSL see Norwegian Sign Language
NZSL see New Zealand Sign Language

O

ÖGS see Austrian Sign Language
Old French Sign Language 198, 564, 915
Original Bangkok Sign Language 936
Original Chiangmai Sign Language 936

P

Paraguayan Sign Language 930
PISL see Plains Indian Sign Language or Providence Island Sign Language
Plains Indian Sign Language 229, 439, 528, 539 – 544, 554 – 555
Polish Sign Language 1084
Portuguese Sign Language 935
Providence Island Sign Language 439, 555, 561 – 562, 564 – 565, 567, 569
Puerto Rican Sign Language 495

Q

Quebec Sign Language 102, 246, 250, 278, 294, 326, 372, 587, 652, 676, 849, 924 – 925, 935, 967 – 968

R

RSL see Russian Sign Language
Russian Sign Language 87, 90, 253, 257, 326, 854 – 855, 927, 935 – 936
Rwandan Sign Language 931

S

SASL see South African Sign Language
Sawmill Sign Language 528, 530 – 531, 543 – 545
secondary sign languages 513 – 514, 517, 528, 539 – 540, 543 – 544, 567, 867, 869
SGSL see Swiss-German Sign Language
shared sign languages also see village sign languages, 146, 190, 423, 439, 552 – 553, 560 – 569, 603, 616, 789, 843, 893, 911, 937, 971, 981
Signing Exact English 578, 588, 957
Sign Language of Desa Kolok 87, 146, 158, 181, 189, 229, 522, 557 – 560, 562 – 565, 567 – 568, 573, 893, 981
Sign Language of the Netherlands 8 – 9, 11, 14 – 16, 66, 68, 113, 117, 120, 127, 137, 158, 163, 169, 171, 181, 189, 193 – 195, 200, 209 – 210, 214 – 215, 217 – 219, 222 – 223, 236, 248 – 249, 252 – 256, 258 – 259, 294, 304 – 305, 307, 343, 350 – 353, 355, 357, 388, 397 – 400, 471, 477, 490 – 491, 493, 495, 498, 504, 518 – 521, 524 – 527, 561, 580, 586, 588, 590, 660 – 662, 670, 676, 704, 797, 845 – 846, 850 – 851, 873, 889, 895 – 903, 905, 925, 927, 933, 1010, 1013, 1017 – 1018, 1029, 1059 – 1062, 1069, 1082, 1093
Sign-supported Dutch 518
SKSL see South Korean Sign Language
Slovakian Sign Language 936
South African Sign Language 253, 374, 388, 425, 797, 931, 954 – 955
South Korean Sign Language 137, 388, 929, 934
Spanish Sign Language 188 – 189, 195, 294, 307, 653, 656, 703, 854, 894
SSL see Swedish Sign Language
Swedish Sign Language 193 – 194, 196, 230, 251, 259, 372, 518, 524, 526 – 527, 698, 808, 893, 896, 925, 935, 954, 1029, 1066
Swiss-German Sign Language 61, 79, 253, 850, 898, 905, 927, 940, 1009

T

Tactile American Sign Language 524 – 527
Tactile French Sign Language 524
Tactile Italian Sign Language 524
Tactile Sign Language of the Netherlands 524, 526 – 527
tactile sign languages 499, 513 – 514, 523 – 528, 576
Tactile Swedish Sign Language 524, 526 – 527
Taiwan Sign Language 163, 209 – 210, 214 – 220, 222 – 223, 234, 286, 405, 587 – 588, 638, 849, 928 – 929, 934
TİD see Turkish Sign Language
Thai Sign Language 163, 557, 936
TSL see Taiwan Sign Language
Turkish Sign Language 113, 118 – 119, 122, 127, 158, 161, 163, 181, 193, 195, 294, 318 – 319, 321, 326 – 327, 334, 423, 426, 521, 928, 1057

U

Ugandan Sign Language 931
Ukrainian Sign Language 936
urban sign languages 439, 519, 568, 789, 796, 802, 812, 929, 982, 985, 994
Uruguayan Sign Language 895, 930

V

Venezuelan Sign Language 505, 930
VGT see Flemish Sign Language
village sign languages also see shared sign languages, 146, 229, 259, 423, 518 – 519, 522 – 523, 543, 545, 552, 586, 588, 603, 789, 854, 864, 867 – 868, 910, 971, 982

W

Warlpiri Sign Language 535 – 539
WSL see Warlpiri Sign Language

Y

Yolngu Sign Language 535 – 537, 539, 544, 869, 874
YSL see Yolngu Sign Language
Index of spoken languages

A

Austronesian languages 32, 872

B

Bainouk 140
Burmese 175, 929

C

Cantonese 342, 349 – 351, 432 – 434, 440 – 441, 458
Cape Verde Creole 865
Catalan 67, 480 – 482, 894
Chinese 104, 188, 191 – 192, 222, 246, 342, 437, 521, 616, 970
Creoles 85, 561, 567, 586, 842 – 844, 852, 865, 870 – 872, 874, 880, 935
Croatian 36, 676

D

Dagaare 39
Djambarrpuyngu 537
Dutch 193, 211, 214 – 215, 218 – 219, 272, 521, 676, 845 – 846, 850, 897, 899, 903, 1029, 1039, 1059 – 1060

E

Emmi 175
English 29, 32 – 33, 36, 59, 63, 67, 80, 83, 88 – 89, 92 – 93, 97, 100, 102 – 103, 121, 128 – 129, 172, 174, 189, 191, 196 – 197, 207, 221, 233, 236, 238 – 239, 258, 271 – 272, 278, 284, 286, 295, 298, 310, 347 – 348, 351, 353, 360, 369 – 370, 378, 382, 392, 398, 400, 403, 432 – 436, 438, 440 – 441, 446 – 447, 449, 456, 458 – 459, 466 – 467, 472 – 473, 475 – 476, 478 – 482, 492 – 493, 495 – 496, 504, 507, 521, 530, 534, 537, 541, 578 – 579, 584, 586 – 587, 610, 616 – 617, 630 – 632, 635 – 638, 640, 659 – 660, 667, 676, 690, 694, 698 – 699, 716 – 718, 725, 733, 744, 747 – 749, 751, 770, 772 – 774, 776 – 777, 790, 800 – 801, 806, 808, 820 – 821, 830, 842 – 849, 874 – 876, 894 – 897, 911, 916, 921 – 922, 935, 967 – 968, 970, 980, 982 – 983, 985, 987 – 991, 1001 – 1002, 1006 – 1007, 1009 – 1010, 1012 – 1013, 1029, 1041 – 1042, 1046, 1056 – 1060, 1088 – 1089, 1093 – 1094

F

French 58, 60, 269, 272, 275 – 276, 278, 435 – 436, 521, 586, 664, 676, 870 – 872, 875, 891, 897, 904, 915, 935, 952, 967 – 970, 1041, 1057

G

German 33, 36, 124, 128 – 129, 206, 211, 219, 721, 729 – 733, 763, 843, 876, 891, 897, 965 – 966, 1057, 1060
Greek 221, 272, 360, 542, 897
Gunwinggu 176 – 177, 1060

H

Hawaiian 33, 865, 874
Hawaiian Creole 865
Hmong 32 – 33
Hungarian 129, 246, 283, 466, 473

I

Ilokano 127 – 128
Italian 37, 191, 253, 272, 274, 293, 295, 390, 717, 772, 851, 891, 897

J

Jamaican Creole 866
Japanese 129 – 130, 252, 302, 308, 347, 471

K

Koyukon 177
Kwa languages 872

L

Latin 83, 198, 891, 897, 1056, 1058

M

Mandarin Chinese 222
Mauritian Creole 870 – 871, 875, 880
Miraña 178 – 179
Mundurukú 176

N

Navajo 33, 175, 1014
Ngukurr Creole 871, 875
Norwegian 269, 275, 278, 659, 889, 926

P

Palikur 177
Pidgins 85, 561, 567, 842 – 844, 852 – 853, 864, 874 – 875, 936

R

Reunion Creole 865
Romanian 466
Russian 442, 482, 1057

S

Saramaccan 872 – 873
Seychelles Creole 872 – 873
Shona 32
Spanish 272, 482, 498, 616 – 618, 808, 868, 894

T

Tagalog 129 – 130
Tashkent 175
Terena 176
Thai 175
Tok Pisin 85
Tonga 222
Turkish 33, 128, 283, 308, 521, 616 – 619, 632, 667, 1046 – 1047

W

Warlpiri 128, 246, 537 – 539
West Greenlandic 32 – 33

Y

Yidin 32