
Assessment of Pragmatics

CARSTEN ROEVER

The assessment of second language pragmatics is a relatively recent enterprise. This entry
will briefly review the construct of pragmatics, discuss some major approaches to testing
pragmatics, and highlight some of the challenges for pragmatics assessment.

The Construct

The concept of pragmatics is far-reaching and is commonly understood to focus on language
use in social contexts (Crystal, 1997). Subareas include deixis, implicature, speech acts, and
extended discourse (Mey, 2001). In terms of the psycholinguistic structure of pragmatic
ability, Leech (1983) distinguishes between sociopragmatics and pragmalinguistics. Socio-
pragmatic ability describes knowledge and ability for use of social rules, including mutual
rights and obligations, social norms and taboos, and required levels of appropriateness
and politeness. Pragmalinguistic ability includes knowledge and ability for use of linguistic
tools necessary to express speech intentions, for example, semantic formulae, hedging
devices, and pragmatically relevant grammatical items.
Assessment instruments in second language (L2) pragmatics do not usually cover all
possible subareas of pragmatics; most focus on either sociopragmatics or pragmalinguistics.
A small number of tests have been developed as general proficiency tests, which
are intended to compare test takers to each other and discriminate between test takers of
different ability. These tests tend to provide the broadest (though not complete) coverage
of the construct.
A second category of instruments comprises those that have been designed for research studies.
They may serve as one-shot instruments to compare participants from different backgrounds
(e.g., learning in a second- vs. a foreign-language context), or they may be used to establish
a baseline of participants’ knowledge of the feature of interest, followed by a treatment
(an instructional sequence or other exposure to the target feature) and a final
measurement of the treatment’s effect. These tests tend to be
narrower in their construct coverage as they are focused on a specific aspect of pragmatic
competence, commonly a speech act (e.g., Bardovi-Harlig & Dörnyei, 1998; Matsumura,
2001; Rose & Ng, 2001; Takahashi, 2005) but sometimes also another feature, like implicature
(Bouton, 1994; Taguchi, 2007, 2008b), discourse markers (Yoshimi, 2001), routine formulae
(House, 1996; Roever, 1996; Wildner-Bassett, 1994) or recognition of speech styles (Cook,
2001).
The next section will concentrate on proficiency tests, but also discuss some tests from
research settings that can inform future developments in the assessment of L2 pragmatics.

Proficiency Tests

The first large-scale test development project for L2 pragmatics was Hudson, Detmer, and
Brown’s (1992, 1995) test battery. They focused on sociopragmatic appropriateness for the
speech acts request, apology, and refusal by Japanese learners of English, and designed
their instruments around binary settings of the context variables power, social distance,
and imposition (Brown & Levinson, 1987).


Hudson et al. (1992, 1995) compared several different assessment instruments, but like
many studies in interlanguage pragmatics (Kasper, 2006) relied heavily on discourse
completion tests (DCTs). A DCT minimally consists of a situation description (prompt)
and a gap for test takers to write what they would say in that situation. Optionally, an
opening utterance by an imaginary interlocutor can precede the gap, and a rejoinder can
follow it. Figure 1 shows a DCT item intended to elicit a request.

You are sharing a house with several people. Today you want to rearrange your room.
You need help moving a heavy desk and decide to ask one of your housemates. You go to
the living room where your housemate Jack is reading the newspaper.
You say: _______________________________________________________

Figure 1 DCT item
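For computer-delivered testing of the kind discussed below, this minimal anatomy maps onto a simple data structure. The following Python sketch is purely illustrative; the class and field names are invented for illustration and not drawn from any instrument discussed here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DCTItem:
    """Minimal anatomy of a discourse completion test (DCT) item (illustrative only)."""
    prompt: str                      # situation description
    opener: Optional[str] = None     # optional first utterance by the imaginary interlocutor
    rejoinder: Optional[str] = None  # optional utterance following the test taker's gap

    def render(self) -> str:
        """Lay the item out as it would appear to the test taker."""
        parts = [self.prompt]
        if self.opener:
            parts.append(f"Interlocutor: {self.opener}")
        parts.append("You say: ____________________")
        if self.rejoinder:
            parts.append(f"Interlocutor: {self.rejoinder}")
        return "\n".join(parts)

# The request item from Figure 1, which uses neither opener nor rejoinder:
item = DCTItem(prompt="You are sharing a house with several people. Today you want "
                      "to rearrange your room. You need help moving a heavy desk and "
                      "decide to ask one of your housemates. You go to the living room "
                      "where your housemate Jack is reading the newspaper.")
print(item.render())
```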
Hudson et al.’s (1995) instrument included traditional written discourse completion tests
(DCTs), spoken DCTs, where the task input was in writing but test takers spoke their
response, multiple-choice DCTs, role plays, and two types of self-assessment questionnaires.
Test taker performance was rated on a five-step scale for use of the correct speech act,
formulaic expressions, amount of speech used and information given, formality, directness,
and politeness. This pioneering study led to several spin-offs. Yamashita (1996) adapted
the test for native-English-speaking learners of Japanese, Yoshitake (1997) used it in its
original form, and Ahn (2005) adapted it for Korean as a target language. In a review,
Brown (2001, 2008) found good reliability for the role plays, as well as the oral and writ-
ten DCTs and self-assessments, but the reliability of the multiple-choice DCT was low.
This was disappointing as the multiple-choice DCT was the only instrument in the battery
that did not require raters, which made it the most practical of all the components. Liu
(2006) tried to develop a multiple-choice DCT for first language (L1) Chinese-speaking
learners of English and reported high reliabilities, but McNamara and Roever (2006) questioned
whether test takers may actually have reacted to the idiomaticity of the response options
rather than their appropriateness. Tada (2005) followed the tradition of investigating speech
acts but used video prompts to support oral and multiple-choice DCTs and obtained reli-
abilities in the mid .7 range. Using conversation analysis as his theoretical basis, Walters
(2004, 2007) developed DCTs, role plays, and listening comprehension items to investigate
learners’ ability to comprehend and complete pre-sequences, compliments, and assessments.
While innovative, his instrument was hampered by very low reliabilities.
While speech acts have been a feature of focal interest in the assessment of L2 pragmatics,
not all work has focused exclusively on them. Bouton (1988, 1994, 1999) did pioneering
work in the assessment of implicature, that is, how speakers convey additional meaning
beyond the literal meaning of the words uttered. He distinguished two types of implicature,
idiosyncratic and formulaic: the former encompasses conversational implicature
(Grice, 1975), whereas the latter includes specific conventionalized types, such as indirect
criticism, variations on the Pope Q (“Is the Pope Catholic?”), and irony. Bouton’s test
items consisted of a situation description, a brief conversation with an implicature, and
multiple-choice response options offering possible interpretations of the implicature. Using
this test, Bouton found that idiosyncratic implicature is fairly easy to learn on one’s own
but difficult to teach in the classroom, whereas the reverse is the case for formulaic impli-
cature. Taguchi (2005, 2007, 2008a, 2008b) employed a similar instrument and took a
psycholinguistic perspective on implicature, investigating learners’ correct interpretation
in conjunction with their processing speed.

Jack is talking to his housemate Sarah about another housemate, Frank.

Jack: “Do you know where Frank is, Sarah?”
Sarah: “Well, I heard music from his room earlier.”
What does Sarah probably mean?
1. Frank forgot to turn the music off.
2. Frank’s loud music bothers Sarah.
3. Frank is probably in his room.
4. Sarah doesn’t know where Frank is.

Figure 2 Implicature item from Roever (2005)

Jack was just introduced to Jamal by a friend. They’re shaking hands.

What would Jack probably say?
1. “Nice to meet you.”
2. “Good to run into you.”
3. “Happy to find you.”
4. “Glad to see you.”

Figure 3 Routines item from Roever (2005)

A small number of studies have combined assessment of different aspects of pragmatic
competence. In the largest study to date that covers multiple features, Roever (2005, 2006)
developed a Web-based test of implicature, routine formulae, and speech acts, and validated
it using Messick’s (1989) validation approach. Unlike Hudson et al.’s (1995) test, Roever’s
(2006) instrument focused on pragmalinguistic rather than sociopragmatic knowledge.
Roever adapted Bouton’s implicature test and a multiple-choice test of routine formulae
from Roever (1996). Figures 2 and 3 show an implicature item and a routines item from
Roever’s (2005) test.
Roever also incorporated a section testing the speech acts request, apology, and refusal.
He used rejoinders (responses by the imaginary interlocutor) in his speech act items, which
Hudson et al. had discarded in their final version, but Roever argued that rejoinders do
not detract from the tasks’ authenticity in a pragmalinguistically oriented test because such
a test assesses the breadth of test takers’ pragmatic repertoire rather than their politeness
preferences. Roever’s test was Web-delivered with randomized presentation of items,
capture of response times, automatic scoring of the implicature and routines section, and
rater scoring of the speech act section. He obtained an overall reliability of .91 with a
particularly high inter-rater reliability for the speech acts section where the rejoinders made
it very easy for raters to assess responses dichotomously as correct (fitting the situation
and the rejoinder) or incorrect. (The closer such a coefficient is to 1.0, the greater the
agreement between raters in their judgments of test taker performance.) Roever’s test covered the
construct of L2 pragmatic knowledge in quite some breadth, and had a high degree of
practicality due to its web-based delivery. However, like previous tests, it ignored the
many other discursive abilities that language users need for successful communication in
real time, such as reading and producing contextualization cues.
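Reliability and rater-agreement figures such as these are coefficients that approach 1.0 as consistency increases. As a hedged illustration of the underlying computation, the Python sketch below derives raw percent agreement and chance-corrected agreement (Cohen's kappa) from two raters' dichotomous judgments; the data are invented, and the sketch does not reproduce Roever's actual scoring procedure.

```python
from typing import Sequence

def percent_agreement(r1: Sequence[int], r2: Sequence[int]) -> float:
    """Proportion of items on which two raters gave the same dichotomous score."""
    assert len(r1) == len(r2)
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1: Sequence[int], r2: Sequence[int]) -> float:
    """Chance-corrected agreement for two raters and 0/1 scores."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)          # observed agreement
    p1 = sum(r1) / n                         # rater 1's rate of '1' scores
    p2 = sum(r2) / n                         # rater 2's rate of '1' scores
    p_e = p1 * p2 + (1 - p1) * (1 - p2)      # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Invented scores for 10 speech-act responses (1 = fits situation and rejoinder):
rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(percent_agreement(rater1, rater2))         # 0.9
print(round(cohens_kappa(rater1, rater2), 2))    # 0.74
```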
As part of the development of a comprehensive computer-based test battery, Roever et al.
(2009) designed a “social language” section, which included a spoken DCT with requests,
apologies, refusals, and suggestions, and an implicature section with spoken implicature
stimuli.
The instruments discussed above are the only ones to date that were designed as general
proficiency tests of second language pragmatics, and none of them have been used on a
large-scale operational basis. It is notable that they have overwhelmingly focused on
speech acts (primarily request and apology), with implicature a distant second, which is
probably due to the preoccupation with speech acts in interlanguage pragmatics research.

This focus goes back to the work of Blum-Kulka, House, and Kasper (1989), who developed
a comprehensive coding approach for requests and apologies in the Cross-Cultural Speech
Act Realization Project (CCSARP). However, their work has recently come under severe
criticism (Kasper, 2006) because it relied strongly on the discourse-external context factors
identified by Brown and Levinson (1987), atomized speech acts rather than considering them
in their discursive context, and used DCTs, which have been shown to be highly prob-
lematic (Golato, 2003). This focus on isolated, atomized speech acts is a construct problem
for proficiency tests of L2 pragmatics, which cannot be used to make claims about learners’
ability to comprehend or produce pragmatically appropriate extended discourse.
A further curious effect of the DCT-based speech act tradition is the prevalence of
measurement of productive rather than receptive abilities in L2 pragmatics testing, the
reverse of the usual pattern in other areas of second language assessment. Tests designed
for research purposes offer some interesting approaches for expanding measurement of
receptive skills and larger sequences of discourse.

Different Construct Emphases: Tests in Research Settings

Many tests in research settings have used instruments similar to those of the proficiency
tests discussed above, commonly discourse completion tests or role plays. Some projects,
however, provide useful extensions to the construct as it has been assessed so far. Taguchi
(2007, 2008a, 2008b) added the measurement of processing speed to the interpretation of
implicature; processing speed is not part of the construct of most pragmatics tests but is
clearly an important aspect of real-time communication.
Other research instruments have examined learners’ sociopragmatic judgment. Bardovi-
Harlig and Dörnyei (1998), as well as the replications of their study by Niezgoda and
Roever (2001) and Schauer (2006), used videotaped scenarios in which the last utterance
contained a pragmatic
error, a grammatical error, or no error. Learners were asked to judge the severity of the
error, and the goal of the study was to establish differential awareness of pragmatic and
grammatical norms depending on language learning setting. Some aspects of this study
could well be used for more formal assessment purposes: learners’ ability to detect that
something is pragmatically “off” about an utterance is an important part of metapragmatic
awareness that has not been part of the proficiency tests discussed above, and the instru-
ment could be extended to include a productive component where test takers correct or
improve the perceived error. This, however, poses a problem of item dependence: if the
same items are used for error detection and correction tasks, a correct response on the
detection task is a precondition for a correct response on the correction task. Simply put,
if a learner does not detect the error, they cannot correct it, and will lose two points.
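A minimal sketch of this dependence, assuming a hypothetical two-point format (one point for detection, one for correction); neither the format nor the names come from the studies above:

```python
def score_paired_item(detected_error: bool, correction_acceptable: bool) -> int:
    """Score a hypothetical paired detection + correction task (1 point each).

    The correction point can only be earned if the error was detected, so a
    miss on detection costs both points: the item-dependence problem noted above.
    """
    detection_point = 1 if detected_error else 0
    correction_point = 1 if (detected_error and correction_acceptable) else 0
    return detection_point + correction_point

# A learner who could have produced an acceptable correction but missed the
# error still scores 0 out of 2:
print(score_paired_item(detected_error=False, correction_acceptable=True))  # 0
```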
An example of a metapragmatic judgment item is shown in Figure 4.
It’s 4:30 pm, and Sharon is getting ready to go home. Her boss comes up to her desk.
Boss: “Sharon, I’m sorry, but we have an important project due tonight, and I need
you to stay late today. Is that okay?”
Sharon: “No, I can’t today.”
How appropriate was Sharon’s utterance?
totally appropriate | mostly appropriate | somewhat appropriate | mostly inappropriate | totally inappropriate

Figure 4 Metapragmatic judgment item

A completely different approach combining extended discourse and assessment of
comprehension was taken by Cook (2001). Instead of concentrating on a specific speech act,
she worked in an interactional sociolinguistic framework and assessed test takers’
comprehension of Japanese as a foreign language (JFL) contextualization cues in extended
monologic discourse. As part of a midterm exam, she had 120 second-year JFL students
at the University of Hawai’i listen to three recordings from simulated self-introductions
by job applicants for a position as a bilingual (English/Japanese) store clerk in a retail
shop in Honolulu. This is a realistic situation in Hawai’i where many stores rely on
revenue from Japanese tourists and require their staff to speak some Japanese. Test takers
were also given a copy of the (simulated) job advertisement and then asked to decide
which of the three job applicants was most suited for the position. Two applicants met
most of the criteria, and their Japanese was adequate in terms of sociolinguistic features
for a job interview setting. One applicant, however, seemed to be a slightly better match
to the criteria but used a highly inappropriate speech style, characterized by casualness,
self-exaltation, underuse of honorifics, and overuse of emphatic markers. All this is strongly
contrary to Japanese conventions, and Cook as well as a group of native and non-native
JFL instructors felt that this applicant’s blatant inappropriateness would immediately dis-
qualify her. To Cook’s surprise and dismay, over 80% of the test takers chose the inap-
propriate applicant as best suited for the position, citing her enthusiasm, self-assuredness,
and her own claim that she speaks Japanese well. Cook’s test is interesting in its coverage
of a different aspect of the construct of pragmatic ability, namely contextualization cues
signaling speakers’ understanding of the sociopragmatic norms of a given speech situ-
ation. While it also concerns sociopragmatic appropriateness, it assesses learners’ socioprag-
matic judgment of extended discourse, which contains a variety of cues, rather than of an
isolated individual speech act. Cook’s instrument is, however, essentially a one-item test,
and would need to be extended to be used in any larger-scale assessment setting.

Challenges in Testing L2 Pragmatics

Fundamentally, tests of L2 pragmatics have the same requirements and pose the same
development challenges as other language tests. They must be standardized to allow
comparisons between test takers, they must be reliable to ensure precise measurement,
they must be practical so that they do not overtax resources, and above all, they must
allow defensible inferences to be drawn from scores that can inform real-world decisions
(Messick, 1989).
From a test design perspective, it is also important to know what makes items difficult
so they can be targeted at test takers at different ability levels. Findings on this aspect are
limited by the small number of existing proficiency tests, but Roever (2004) shows some
tendencies, such as formulaic implicature being more difficult than idiosyncratic, high-
imposition speech acts being more difficult than low-imposition ones, and low-frequency
routine formulae being more difficult than high-frequency ones.
The biggest challenge for pragmatics tests, however, is to extend their construct cover-
age. As Roever (2010) argues, the focus on atomized speech acts ignores essential features
of language use in a social context, like contextualization cues, talk management devices,
and Mey’s (2001) “pragmatic acts.” This means that testing of pragmatics should become
more discursive and interactive, but a significant challenge is to ensure standardization in
discourse settings: How can task demands be kept the same for different test takers under
interactive conditions, where context is dynamic and endogenous to interaction (Heritage,
1984; Kasper, 2006)? Additionally, interaction is a co-constructed endeavor, so how can
the contribution of the interlocutor be subtracted from the entire conversation so that only
the test taker’s contribution is rated? Hudson et al. (1995) avoided this issue in their role
plays by eliciting focal speech acts and rating test taker performance only on them, but
eventually tests of L2 pragmatics will have to tackle interactive language use in earnest.

A second challenge, which impacts tests using sociopragmatic judgment, is establishing
a baseline. Put simply, testers need a reliable way to determine correct and incorrect
test taker responses. The usual way to do so is to use a native-speaker standard, and this
has been shown to work well for binary judgments of correct/incorrect, appropriate/
inappropriate, and so on. For example, Bardovi-Harlig and Dörnyei (1998) and Schauer
(2006) found high agreement among native speakers for the judgment of pragmatic
performance as being correct/incorrect, and so did Roever (2005, 2006) for implicature
interpretation and situational routines.
However, native-speaker benchmarking is much more problematic when it comes to
preference judgments. For example, in Matsumura’s (2001) benchmarking of his multiple-
choice items on the appropriateness of advice, there was not a single item where 70% of
a native-speaker benchmarking group (N = 71) agreed on the correct response, and there
were only 2 items (out of a pre- and posttest total of 24) on which more than 60% of native speakers
agreed. On 10 items, the most popular response option was chosen by less than half the
native-speaker group.
Such a lack of a clear correct response may be acceptable in a survey instrument that
elicits preference but would be a serious flaw in a dichotomously scored assessment instru-
ment where one answer must be clearly correct and the others clearly wrong. It might be
an interesting approach to use partial credit scoring where scores are assigned based on
the proportion of native speakers that chose a particular response option, but this has not
yet been done in pragmatics assessment.
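A minimal sketch of what such partial credit scoring could look like follows; the benchmarking counts are invented, and, as noted, no operational pragmatics test has used this scheme.

```python
# Invented benchmarking data: how many of 71 native speakers chose each option
ns_choices = {"A": 30, "B": 25, "C": 10, "D": 6}
n_group = sum(ns_choices.values())

# Partial-credit weight for an option = proportion of native speakers choosing it
weights = {option: count / n_group for option, count in ns_choices.items()}

def partial_credit(response: str) -> float:
    """Score a test taker's multiple-choice response on a 0-1 scale."""
    return weights.get(response, 0.0)

print(round(partial_credit("A"), 2))  # 0.42: popular but not unanimous option
print(round(partial_credit("D"), 2))  # 0.08: rarely chosen option
```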
Liu (2006) faced a similar problem in the development of his test. Based on responses
to a DCT from native and non-native speakers, he created a pilot version with four to
eight response options per item, which he gave to two small groups of native speakers
(N = 7 for apology items and N = 6 for request items). For every item, he retained the three options
with the greatest native-speaker agreement as to their correctness/incorrectness, of which
one was the key and the other two were distractors. He then created a 24-item DCT, which
he gave to 5 native speakers. They showed perfect agreement on the key for 12 situations,
80% agreement on 7 further situations, and less agreement on the remaining 5 situations.
Liu revised the 5 situations with the least agreement, and gave them to another group of
3 native speakers, who all chose the same key. Liu thus shows how to develop multiple-choice
DCT items with clearly correct/incorrect response options, but it is notable that he used
very small native-speaker groups and accepted 70% agreement. Native-speaker bench-
marking remains a thorny issue for multiple-choice tests of L2 pragmatics.
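A hedged reconstruction of the option-retention step in Liu's procedure is sketched below; the judgment data and the agreement measure are assumptions for illustration, not Liu's actual materials.

```python
# Invented judgments: for one pilot item, each of 7 native speakers classified
# every response option as correct (1) or incorrect (0).
judgments = {
    "opt1": [1, 1, 1, 1, 1, 1, 1],  # unanimous: correct -> candidate key
    "opt2": [0, 0, 0, 0, 0, 0, 1],  # near-unanimous: incorrect -> distractor
    "opt3": [0, 0, 0, 0, 0, 1, 1],  # fairly clear: incorrect -> distractor
    "opt4": [1, 1, 0, 0, 1, 0, 1],  # split judgments -> discard
    "opt5": [1, 0, 1, 0, 1, 0, 1],  # split judgments -> discard
}

def agreement(votes):
    """Strength of agreement: proportion backing the majority classification."""
    ones = sum(votes)
    return max(ones, len(votes) - ones) / len(votes)

# Keep the three options on which the native-speaker group agreed most strongly
retained = sorted(judgments, key=lambda o: agreement(judgments[o]), reverse=True)[:3]
print(retained)  # ['opt1', 'opt2', 'opt3']
```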
Finally, tests of pragmatics have often been designed contrastively for a pair of languages,
for example, native-Japanese speakers learning English (Hudson et al., 1995), native-English
speakers learning Japanese (Yamashita, 1996), native-English speakers learning Korean
(Ahn, 2005), or native-Chinese speakers learning English (Liu, 2006). This necessarily lowers
the practicality of tests, as well as the likelihood that they will eventually become part of
large-scale international test batteries (like TOEFL or IELTS). Roever (2005) did not limit
his test taker population to a specific L1, and used differential item functioning to show
that there were some L1 effects but that they were generally minor (Roever, 2007), indicating
that limiting pragmatics tests to a specific L1 population is not necessary.
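Differential item functioning can be computed in several ways; the Mantel-Haenszel procedure is a common choice for dichotomous items, although it is not claimed here that this was Roever's (2007) exact method. The sketch below computes the MH common odds ratio and the ETS delta metric for a single item, with invented counts.

```python
import math

# Invented counts per total-score stratum:
# (reference correct, reference wrong, focal correct, focal wrong)
strata = [
    (20, 30, 15, 35),  # low-scoring stratum
    (35, 15, 28, 22),  # mid-scoring stratum
    (45,  5, 40, 10),  # high-scoring stratum
]

# Mantel-Haenszel common odds ratio: sum(A*D/N) / sum(B*C/N) across strata
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
alpha_mh = num / den                    # 1.0 would indicate no DIF

# ETS delta scale; |delta| < 1 is conventionally treated as negligible DIF
delta_mh = -2.35 * math.log(alpha_mh)

print(round(alpha_mh, 2), round(delta_mh, 2))
```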

Conclusion

Tests of L2 pragmatics have seen a great deal of development and focused research in the
last two decades. They offer a promising addition to traditional language tests, which
tend to focus on grammar, vocabulary, and skills. However, they pose significant challenges
for test design if a complex construct like pragmatics is to be assessed comprehensively
and reliably. Research in testing L2 pragmatics is clearly still in its early stages.

SEE ALSO: Assessment of Speaking; Language Assessment Methods; Language Testing
in Second Language Research; Paired and Group Oral Assessment

References

Ahn, R. C. (2005). Five measures of interlanguage pragmatics in KFL (Korean as a foreign language)
learners (Unpublished PhD thesis). University of Hawai’i at Manoa.
Bardovi-Harlig, K., & Dörnyei, Z. (1998). Do language learners recognize pragmatic violations?
Pragmatic versus grammatical awareness in instructed L2 learning. TESOL Quarterly, 32,
233–62.
Blum-Kulka, S., House, J., & Kasper, G. (Eds.). (1989). Cross-cultural pragmatics: Requests and
apologies. Norwood, NJ: Ablex.
Bouton, L. F. (1988). A cross-cultural study of ability to interpret implicatures in English. World
Englishes, 7(2), 183–96.
Bouton, L. F. (1994). Conversational implicature in the second language: Learned slowly when
not deliberately taught. Journal of Pragmatics, 22, 157–67.
Bouton, L. F. (1999). Developing non-native speaker skills in interpreting conversational impli-
catures in English: Explicit teaching can ease the process. In E. Hinkel (Ed.), Culture in second
language teaching and learning (pp. 47–70). Cambridge, England: Cambridge University Press.
Brown, J. D. (2001). Six types of pragmatics tests in two different contexts. In K. Rose &
G. Kasper (Eds.), Pragmatics in language teaching (pp. 301–25). Cambridge, England:
Cambridge University Press.
Brown, J. D. (2008). Raters, functions, item types and the dependability of L2 pragmatics tests.
In E. Alcón Soler & A. Martínez-Flor (Eds.), Investigating pragmatics in foreign language learn-
ing, teaching and testing (pp. 224–48). Clevedon, England: Multilingual Matters.
Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. Cambridge,
England: Cambridge University Press.
Cook, H. M. (2001). Why can’t learners of Japanese as a Foreign Language distinguish polite from
impolite style? In K. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 80–102).
Cambridge, England: Cambridge University Press.
Crystal, D. (1997). A dictionary of linguistics and phonetics. Oxford, England: Blackwell.
Golato, A. (2003). Studying compliment responses: A comparison of DCTs and recordings of
naturally occurring talk. Applied Linguistics, 24(1), 90–121.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. Morgan (Eds.), Syntax and semantics
(Vol. 3, pp. 41–58). New York, NY: Academic Press.
Heritage, J. (1984). Garfinkel and ethnomethodology. Cambridge, England: Polity.
House, J. (1996). Developing pragmatic fluency in English as a foreign language: Routines and
metapragmatic awareness. Studies in Second Language Acquisition, 18(2), 225–52.
Hudson, T., Detmer, E., & Brown, J. D. (1992). A framework for testing cross-cultural pragmatics
(Technical report #2). Honolulu: University of Hawai’i, Second Language Teaching and
Curriculum Center.
Hudson, T., Detmer, E., & Brown, J. D. (1995). Developing prototypic measures of cross-cultural
pragmatics (Technical report #7). Honolulu: University of Hawai’i, Second Language Teaching
and Curriculum Center.
Kasper, G. (2006). Speech acts in interaction: Towards discursive pragmatics. In K. Bardovi-
Harlig, J. C. Felix-Brasdefer, & A. S. Omar (Eds.), Pragmatics & language learning (Vol. 11,
pp. 281–314). Honolulu: University of Hawai’i, National Foreign Language Resource Center.
Leech, G. (1983). Principles of pragmatics. London, England: Longman.
Liu, J. (2006). Measuring interlanguage pragmatic knowledge of EFL learners. Frankfurt, Germany:
Peter Lang.
Matsumura, S. (2001). Learning the rules for offering advice: A quantitative approach to second
language socialization. Language Learning, 51(4), 635–79.
McNamara, T. F., & Roever, C. (2006). Language testing: The social dimension. Oxford, England:
Blackwell.

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (pp. 13–103). New York,
NY: Macmillan.
Mey, J. L. (2001). Pragmatics: An introduction (2nd ed.). Oxford, England: Blackwell.
Niezgoda, K., & Roever, C. (2001). Grammatical and pragmatic awareness: A function of
the learning environment? In K. Rose & G. Kasper (Eds.), Pragmatics in language teaching
(pp. 63–79). Cambridge, England: Cambridge University Press.
Roever, C. (1996). Linguistische Routinen: Systematische, psycholinguistische und fremdsprachen-
didaktische Überlegungen. Fremdsprachen und Hochschule, 46, 43–60.
Roever, C. (2004). Difficulty and practicality in tests of interlanguage pragmatics. In D. Boxer
& A. Cohen (Eds.), Studying speaking to inform language learning (pp. 283–301). Clevedon,
England: Multilingual Matters.
Roever, C. (2005). Testing ESL pragmatics. Frankfurt, Germany: Peter Lang.
Roever, C. (2006). Validation of a web-based test of ESL pragmalinguistics. Language Testing,
23(2), 229–56.
Roever, C. (2007). DIF in the assessment of second language pragmatics. Language Assessment
Quarterly, 4(2), 165–89.
Roever, C. (2010). Tests of second language pragmatics: Past and future (Unpublished manuscript).
Roever, C., Elder, C., Harding, L. W., Knoch, U., McNamara, T. F., Ryan, K., & Wigglesworth,
G. (2009). Social language tasks: Speech acts and implicature (Unpublished manuscript).
University of Melbourne, Australia.
Rose, K. R., & Ng, C. (2001). Inductive and deductive teaching of compliments and compliment
responses. In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 145–70).
Cambridge, England: Cambridge University Press.
Schauer, G. A. (2006). Pragmatic awareness in ESL and EFL contexts: Contrast and development.
Language Learning, 56(2), 269–318.
Tada, M. (2005). Assessment of EFL pragmatic production and perception using video prompts
(Unpublished doctoral dissertation). Temple University, Philadelphia.
Taguchi, N. (2005). Comprehending implied meaning in English as a foreign language. The
Modern Language Journal, 89(4), 543–62.
Taguchi, N. (2007). Development of speed and accuracy in pragmatic comprehension in English
as a foreign language. TESOL Quarterly, 41(2), 313–38.
Taguchi, N. (2008a). Cognition, language contact, and the development of pragmatic compre-
hension in a study-abroad context. Language Learning, 58(1), 33–71.
Taguchi, N. (2008b). Pragmatic comprehension in Japanese as a foreign language. The Modern
Language Journal, 92(4), 558–76.
Takahashi, S. (2005). Noticing in task performance and learning outcomes: A qualitative analysis
of instructional effects in interlanguage pragmatics. System, 33(3), 437–61.
Walters, F. S. (2004). An application of conversation analysis to the development of a test of second
language pragmatic competence (Unpublished doctoral dissertation). University of Illinois,
Urbana-Champaign.
Walters, F. S. (2007). A conversation-analytic hermeneutic rating protocol to assess L2 oral
pragmatic competence. Language Testing, 24(2), 155–83.
Wildner-Bassett, M. (1994). Intercultural pragmatics and proficiency: “Polite” noises for cultural
appropriateness. IRAL, 32(1), 3–17.
Yamashita, S. O. (1996). Six measures of JSL pragmatics (Technical report #14). Honolulu: University
of Hawai’i, Second Language Teaching and Curriculum Center.
Yoshimi, D. (2001). Explicit instruction and JFL learners’ use of interactional discourse markers.
In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 223–44). Cambridge,
England: Cambridge University Press.
Yoshitake, S. S. (1997). Measuring interlanguage pragmatic competence of Japanese students of English
as a foreign language: A multi-test framework evaluation (Unpublished doctoral dissertation).
Columbia Pacific University, Novato, CA.
