
Performance of English Language Learners as a Subgroup in Large-Scale Assessment: Interaction of Research and Policy

Jamal Abedi and Patricia Gándara, University of California, Davis


Jamal Abedi, School of Education, University of California, Davis, One Shields Avenue, Davis, CA 95616-5270; jabedi@ucdavis.edu. Patricia Gándara, School of Education, University of California, Davis, One Shields Avenue, Davis, CA 95616-5270; pcgandara@ucdavis.edu.

Factors that contribute to the performance gap between subgroups and mainstream students deserve special attention. Different subgroups are faced with different sets of challenges. To understand and control for the factors leading to the performance gap between subgroups and mainstream students, one must clearly understand the issues specific to each subgroup. This paper focuses on assessment and performance issues for English language learner (ELL) students as a subgroup. Identifying factors affecting the performance gap between ELL and non-ELL students may help gain insight into assessment issues for other subgroups of students as well as strengthen assessment of this group.

Keywords: assessment, language, English language learners, accountability, accommodation

Perspective

Current legislative mandates in the No Child Left Behind (NCLB) Act of 2002 require states to report the performance of all students, including students in major ethnic groups, economically disadvantaged students, students with disabilities, and English language learners (ELLs). While this requirement to report by major subgroups brings highly needed attention to achievement issues for these students, it also introduces a whole new set of technical and ethical issues in assessment. Different subgroups are faced with different sets of challenges. Therefore, to understand and control for the factors leading to the performance gap between subgroups and mainstream students, one must clearly understand the issues specific to each subgroup. Identifying factors affecting the performance gap between ELL and non-ELL students may help gain insight into assessment issues for other subgroups of students as well. Thus, the focus of this paper is on the specific issues associated with ELL academic performance.

English language learners now number over 4.5 million students; they represent about 8% of all K-12 students nationally (Zehler et al., 2003) and are the fastest growing subgroup in the nation (Kindler, 2002). Consequently, fairness and validity issues relating to their assessment must be among the top priorities of the national education agenda. ELL students have historically lagged behind their English proficient peers in all content areas, particularly academic subjects that are high in English language demand.


[Figure 1 appears here: paired bar graphs showing the percent of beginning kindergartners from English and non-English language backgrounds scoring above the 50th percentile in reading, mathematics, and general knowledge.]

FIGURE 1. Cognitive skills of beginning kindergartners by language background, 1998. Source: U.S. Department of Education, National Center for Education Statistics, America's Kindergartners (NCES 2000-070), by Kristin Denton and Elvira Germino-Hausken; project officer, Jerry West. Washington, DC: 2000. Retrieved January 10, 2006 from http://nces.ed.gov/pubs2000/2000070.pdf.

The literature suggests that this performance gap is explained by many different factors, including parent education level and poverty (Abedi, Leon, & Mirocha, 2003), the challenge of second language acquisition (Hakuta, Butler, & Witt, 2000; Moore & Redd, 2002), and a host of inequitable schooling conditions (Abedi, Herman, Courtney, Leon, & Kao, 2004; Gándara, Rumberger, Maxwell-Jolly, & Callahan, 2003), in addition to measurement tools that are ill-equipped to assess their skills and abilities. English language learners are more likely than other students to be taught by teachers without appropriate teaching credentials and with little classroom experience (Rumberger & Gándara, 2004). A recent survey of 4,800 California teachers of ELL students about the challenges they faced in teaching ELLs in their classrooms found large percentages of these teachers expressing the concern that they were not prepared to teach these students (Gándara et al., 2003). While many ELL students are native born, many also have at least one parent who is an immigrant, and overwhelmingly they are reared in homes in which English is not the primary language spoken. Moreover, a disproportionately high percentage of these students are raised in poverty and attend poor schools where they receive weak instruction (Gándara & Rumberger, in press).

ELL students begin school significantly behind their English-speaking peers and so require extra time and instruction from the very beginning of school to catch up (see Figure 1). Within this context, ELL students must take on the challenges of learning English and U.S. culture in addition to learning the academic content of the curriculum. The process of developing new language skills is difficult and requires time and effort that otherwise could be spent on acquiring academic content knowledge. For example, a number of studies have shown that it takes five to seven years, and often more, for most ELLs to gain sufficient mastery of academic English to join English-speaking peers in taking full advantage of instruction in English (Hakuta, Butler, & Witt, 2000). During this time, learning cannot occur at the same rate as for a native speaker of English when that instruction is offered only in English. Limited English proficiency may also make it difficult to benefit fully from the teacher's instructions and to understand assessment questions. Because ELL students, by definition, do not have a strong command of the English language, both learning and assessment are affected by their limited English proficiency, and both learning and assessment conditions must be addressed to help close the performance gap. Standardized achievement tests that have been constructed for mainstream students, but which do not take into account the special needs of English learners, can pose a further challenge for these students and may not provide a good indication of what these students know and can accomplish.

Findings from studies suggest that a major factor contributing to the performance gap between ELL and non-ELL students is the impact of the linguistic complexity of assessments (Abedi, 2006a; Abedi, Hofstetter, & Lord, 2004). Sometimes ELL students are provided with accommodations to help offset the linguistic challenge of the test, but these accommodations may not be relevant or helpful, and almost always fall short of ideal. In this paper, we will discuss cognitive and non-cognitive (conative and affective) factors that contribute to the performance gap between ELL and mainstream students, how such factors interact with the academic achievement of ELL students, and how research findings can help resolve some of these issues. While the main thrust of this paper is on the cognitive aspect of the academic achievement of ELL students, we will also touch on the conative and affective factors that may influence ELL students' academic progress. In the cognitive part of this paper we will first discuss the development of reading skills among English language learners and then examine the impact of language factors on the academic achievement of ELL students, in particular on the assessment and accountability systems for these students. We will then elaborate on how research findings, though limited in content and scope, can help address some of the challenges that ELL students face in their academic careers.

Cognitive Factors

1. The Development of Reading Skills among English Language Learners

Because reading is implicated in most assessments of cognitive skill, and because it is the fundamental window through which we have access to cognitive processes, we begin with the literature on learning to read. The newly released report from the National Literacy Panel (August & Shanahan, 2006) calls attention to the problems of assessment for English language learners. They note that

Most assessments do a poor job of gauging individual strengths and weaknesses. The research on the development of English literacy strongly suggests that adequate assessments are essential for gauging the individual strengths and weaknesses of language minority students. . . . [U]nfortunately, existing assessments are inadequate to the need


in most respects. For example, most measures do not predict how well language-minority students will perform over time on reading or content-area assessments in English (August & Shanahan, 2006, Executive Summary, p. 6).

Theories about how second language learning takes place often identify factors that may impact ELL students' assessment outcomes. For example, several cognitive processes are assumed to be imperative to the development of reading skills in English. These processes include phonological processing, syntactic awareness, and working memory (for a review of this literature, see Siegel, 1993). Although there is a dearth of research that examines whether these same processes characterize the reading development of English language learners (Lesaux & Siegel, 2003), we cite a few studies that are relevant to the topic of assessment of second language learners. One study by Chiappe, Siegel, and Wade-Woolley (2002) investigated the processes involved in learning to read among a sample of 727 native English speakers and 131 English language learners. While the English language learners performed more poorly than native speakers of English on phonological and linguistic processing tasks, the development of literacy skills followed a similar trajectory for both groups. This suggests that while the process of learning to read may follow the same pattern, more time may be necessary for ELLs to actually acquire the necessary skills. The implication for test makers is that they must be cognizant of these factors when developing, administering, and scoring test items for ELL students.

Some research has focused specifically on the transfer of phonological awareness skills from the first language to the second language. Durgunoglu, Nagy, and Hancin-Bhatt (1993) examined 27 Spanish-speaking first graders to find out whether phonological awareness in a child's native language is related to word recognition in the second language. In addition, the researchers investigated whether oral language competence in the second language impacts acquisition of reading in the second language. The results of this study indicated that Spanish phonological awareness predicted reading acquisition in the second language, suggesting that transfer of prereading skills can occur across languages. Moreover, as August and Shanahan (2006) conclude, the research indicates that instructional programs work when they provide opportunities for students to develop proficiency in their first language (Executive Summary, p. 5). These findings have particular implications for the use of primary language assessments in order to better predict which students will need extra help in English reading. Unfortunately, policy in many states serving large numbers of English learners has worked against the development of strong primary language assessments.

Most recently, a study by Gottardo, Pannucci, Kuske, and Brettin (2003) showed evidence for simultaneous phonological processing in English and the primary language for English language learners. A sample of 65 children whose first language was Cantonese and second language was English was examined to determine the relationship between phonological skill and the first and second languages. The study found that phonological skill was correlated between the first and second language. This is an interesting finding considering that Cantonese, unlike English, is not written in an alphabetic orthography. This provides further evidence of the transfer of basic pre-reading skills across languages and the utility of assessing at least some skills in students' primary languages.

The areas of syntactic awareness and working memory have been studied very little in connection with English language learners' acquisition of reading. However, a study by Geva, Yaghoub-Zadeh, and Schuster (2000) briefly addressed syntactic awareness. The findings of this study indicated that English language learners who were average readers had a deficit in syntactic awareness skills when compared to native English speaking students. The lesson from this research appears to be that test makers would do well to reduce syntactic complexity in test items that will be used for ELL students, eschewing atypical sentence formats. Working memory and oral English proficiency are two competencies that bode well for English reading development. In a study of working memory and second language acquisition conducted by Geva and Siegel (2000) among a sample of English speakers learning Hebrew, the researchers found that verbal memory was a significant predictor of basic reading skills. Likewise, August and Shanahan (2006) summarize one of the primary findings of the National Reading Panel as the critical importance of oral proficiency in English and in the primary language as a precursor of strong reading skills in English. Again, this research suggests that the testing of verbal memory and oral proficiency in either the first or second language will yield important information about students' readiness to undertake reading instruction in English.

While the summary presented above clearly shows the challenges that English language learners face in learning a new language, it also suggests that students transfer their pre-reading skills across languages, so that the time they spend learning in any language is going to be roughly predictive of learning to read. As such, knowledge of what students know in both primary and secondary languages can yield important information about the cognitive processing skills they have for learning to read.

2. Language Factors and Assessment of ELL Students

Title III of the No Child Left Behind Act (NCLB, 2002) requires states to measure ELL students' level of English proficiency (in addition to the Title I assessment requirements for these students) in four domains: reading, writing, listening, and speaking (plus comprehension). Therefore, students' level of reading proficiency obviously plays a major role in their assessment outcomes, since without proficiency in reading, students will have difficulty understanding test questions. Proficiency in the other domains is also essential for a valid assessment. In this section we use the term English proficiency in a broader sense, to include various aspects of students' level of proficiency. There are many factors affecting the academic progress of ELL students that are influenced by students' level of English proficiency in general. For example, the linguistic complexity of an assessment may impact the performance outcomes of these students and increase the performance gap between ELL students and their native English speaking peers. Invalid assessment results may affect decisions regarding classification and inclusion of ELL students, and all of these may impact the validity and fairness of the accountability system for these students.

We will first discuss the impact of language factors on the assessment outcomes of ELL students and then provide some research-based recommendations on how to improve assessment for these students by controlling for language factors that could impact the validity of assessment for them. We will also discuss language-based accommodations that would help ELL students with their English language needs.

Impact of language factors on the validity of assessment for ELL students. The National Research Council has warned that the use of achievement tests developed for English-speaking students will not likely yield valid results for students who are not proficient in English. As the NRC noted, "if a student is not proficient in the language of the test, her performance is likely to be affected by construct-irrelevant variance; that is, her test score is likely to underestimate her knowledge of the subject matter being tested" (Huebert & Hauser, 1999, p. 225). We know that consistently low test scores often lead to placement in remedial and low-level instruction that further disadvantages already disadvantaged learners (Heubert, 2000) and exacerbates existing achievement gaps. Hence the consequences of such testing can be serious for these students. Findings from several recently conducted studies show the impact of language factors on the assessment of ELL students (Abedi, 2006a; Abedi, Hofstetter, & Lord, 2004; Maihoff, 2002; Kiplinger, Haug, & Abedi, 2000). These findings suggest that unnecessary linguistic complexity may hinder ELL students' ability to express their knowledge of the construct being measured. Results from these studies question the appropriateness of mainstream assessments for English language learners, an issue that has long been debated. Educational psychologists, measurement experts, and educational practitioners have raised concerns over testing ELL students with assessment tools that are developed for native speakers of English, and which have been field tested mainly with these students:
For all test takers, any test that employs language is, in part, a measure of their language skills. This is of particular concern for test takers whose first language is not the language of the test. Test use with individuals who have not sufficiently acquired the language of the test may introduce construct-irrelevant components to the testing process. In such instances, test results may not reflect accurately the qualities and competencies intended to be measured . . . Therefore it is important to consider language background in developing, selecting, and administering tests and in interpreting test performance. (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999, p. 91)

In the assessment of ELL students, research suggests that unnecessary linguistic complexity introduces a source of measurement error and acts as a construct-irrelevant factor in the assessment (Abedi, 2006a; Abedi, Leon, & Mirocha, 2003). Thus, language factors may seriously confound the outcomes of instruction and assessment in content-based areas (Abedi, Lord, & Hofstetter, 1998; Cocking & Chipman, 1988; De Corte, Verschaffel, & DeWin, 1985; Kintsch & Greeno, 1985; Larsen, Parker, & Trenholme, 1978; Lepik, 1990; Mestre, 1988; Munro, 1979; Noonan, 1990; Orr, 1987; Rothman & Cohen, 1989; Spanos, Rhodes, Dale, & Crandall, 1988). A series of studies on the impact of language on the assessment of ELLs led to two major conclusions: (1) reducing the linguistic complexity of assessment tools helped ELL students perform significantly better and narrowed the performance gap between ELL and non-ELL students, and (2) the process of reducing the linguistic complexity of test items did not alter the construct under measurement. Therefore, by reducing the linguistic complexity of assessment tools, the validity of the assessment was not compromised (see Abedi, Lord, Hofstetter, & Baker, 2000; Abedi & Lord, 2001; Abedi, Courtney, Mirocha, Leon, & Goldberg, 2005; Kiplinger, Haug, & Abedi, 2000; Abedi, Lord, & Hofstetter, 1998; Abedi, Lord, & Plummer, 1997).
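To make the construct-irrelevance argument concrete, consider a simple classical test theory sketch. The decomposition below is offered only as an illustration of the logic, not as a model used in the studies cited above, and it assumes, for simplicity, that the three components are uncorrelated:

\[
X = T + L + E, \qquad \mathrm{Var}(X) = \sigma_T^2 + \sigma_L^2 + \sigma_E^2 .
\]

Here X is the observed content score, T is the content knowledge the test intends to measure, L is a component reflecting the linguistic demand of the items, and E is random error. For fluent readers L is negligible, so X tracks T; for ELL students L is systematically negative and its variance is not, so scores tend to underestimate T and part of the score variance is unrelated to the intended construct. Linguistic modification, discussed below, aims to shrink L without changing T.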

In other studies on the impact of language factors on the assessment of ELL students, extant data from the National Assessment of Educational Progress (NAEP) were analyzed. Results (see, for example, Abedi, Lord, & Plummer, 1997) suggested that ELL students had difficulty with test items that were linguistically complex. The results also indicated that ELL students exhibited substantially higher numbers of omitted/not-reached test items. Linguistic features that researchers identified in test items slow down the reader, make misinterpretation more likely, and add to the reader's cognitive load, thus interfering with concurrent tasks. Indexes of language difficulty include word frequency/familiarity, word length, and sentence length. Other linguistic features that may cause difficulty for readers include long noun phrases, long question phrases, passive voice constructions, comparative structures, prepositional phrases, sentence and discourse structure, subordinate clauses, conditional clauses, relative clauses, concrete versus abstract or impersonal presentations, and negation (Abedi, Lord, & Plummer, 1997).
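As a rough illustration of how surface indexes like these can be screened during item development, the short sketch below computes mean sentence length, mean word length, and the share of words outside a familiar-word list for an item's text. It is meant only to make the indexes named above concrete; the word list and the item wordings are hypothetical, and the studies cited here relied on far more detailed linguistic analyses (e.g., identifying passive constructions and subordinate clauses, which requires parsing rather than the simple counts shown).

import re

# Hypothetical familiar-word list; operational analyses use corpus-based
# word-frequency norms rather than a short hand-made list like this one.
FAMILIAR_WORDS = {
    "a", "an", "the", "is", "are", "of", "and", "to", "in", "how", "many",
    "what", "each", "she", "has", "puts", "same", "number", "box", "boxes",
    "pencil", "pencils",
}

def complexity_indexes(item_text):
    """Compute rough surface indexes of linguistic difficulty for a test item."""
    sentences = [s for s in re.split(r"[.!?]+", item_text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", item_text.lower())
    unfamiliar = [w for w in words if w not in FAMILIAR_WORDS]
    return {
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "mean_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "prop_unfamiliar_words": len(unfamiliar) / max(len(words), 1),
    }

# Hypothetical original and linguistically modified versions of the same item.
original = ("A certain number of pencils, which were purchased by the teacher, "
            "is to be distributed equally among several boxes. How many pencils "
            "does each box contain?")
modified = ("The teacher has 24 pencils. She puts the same number of pencils in "
            "each of 4 boxes. How many pencils are in each box?")

print(complexity_indexes(original))
print(complexity_indexes(modified))

On these two hypothetical versions, the modified item yields shorter sentences and a smaller share of unfamiliar words, which is the direction the linguistic modification studies described below aimed for.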



3. Language Factors and Accountability System for ELL Students

The impact of language factors on the assessment outcomes of ELL students has direct consequences for the accountability system for these students. Under the NCLB Title I accountability requirements, ELL students are assessed with the same tests that are developed and field tested mainly for native speakers of English. Earlier in this paper we discussed the challenges that ELL students face when taking these tests. Due to the complex linguistic structure of these tests, ELL students' performance outcomes are very likely to be underestimated. Unfortunately, however, there is no simple approach to determining the level of underestimation that may occur due to the linguistic complexity of these tests. The large performance gap between ELL and non-ELL students, which is caused by many factors, including parent education, poverty, the challenge of second language acquisition, and inequity in opportunity to learn, establishes a substantially lower baseline score for ELL students. That is, from the very beginning, these students are clearly at risk of failing. Starting low and progressing more slowly than their English proficient peers, ELL students cannot realistically reach the intended goal of 100% proficiency by the 2014 target date, or even any time soon after that date. Data from the California Department of Education for 2005 reveal how key subgroups are performing on NCLB accountability measures. Of the English learners tested in California, 79% were designated as not meeting proficiency standards. In addition, of the students with disabilities tested, 83% were classified as non-proficient (California Department of Education, 2006).

NCLB also requires states to annually test English language learners on their progress toward gaining proficiency in English, and some states have developed their own tests of English proficiency for this purpose. In California, the state-developed test is the California English Language Development Test (CELDT). As concern over the lack of progress in closing the achievement gap between English language learners and English speakers has intensified, debates have ensued about the appropriate level of proficiency, as measured by the CELDT, at which to place students into the regular English-only curriculum (basic, early advanced, or advanced) (Parrish et al., 2006). Pressure to mainstream students at lower levels of proficiency has increased. Katz, Low, Stack, and Tsang (2004), in their study of San Francisco ELL students, also found little relationship between proficiency on the CELDT and students' ability to navigate the content of English-only standardized tests. They argued that the CELDT and standardized tests of academic competencies are either measuring very different constructs or measuring the same constructs at very different levels of difficulty; in either case, English proficiency levels established by English language proficiency tests such as the CELDT should not be used to determine students' readiness to take on English-only coursework that requires academic as opposed to social English. Gándara and Rumberger (in press) likewise found that 60% of tenth grade English language learners in 2005 were able to pass the CELDT at a level of early advanced or advanced English proficiency, but only 3% of these students could pass the state test of English Language Arts at a level of proficient, again suggesting that the tests measure different types of language proficiency, or have different standards of language proficiency. Linn (in press) indicated that for large schools with diverse student populations, the disaggregated reporting requirement can create major obstacles to reaching their AYP goal. A school with a large enough number in, say, three racial/ethnic groups, students with limited English proficiency, economically disadvantaged students, and students with disabilities, would have a total of 29 hurdles to clear, 4 for each of the 6 subgroups plus the 5 that all schools have for the total student body (p. 21).
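The arithmetic behind Linn's example is simply the number of reported subgroups times the number of AYP indicators per subgroup, plus the school-wide indicators. The small sketch below, with indicator counts taken from the quoted example rather than from the statute itself, shows how quickly disaggregated reporting multiplies the separate targets a school must clear.

def ayp_hurdles(num_subgroups, indicators_per_subgroup=4, schoolwide_indicators=5):
    """Count the separate AYP targets in Linn's example."""
    return num_subgroups * indicators_per_subgroup + schoolwide_indicators

# Linn's example: 3 racial/ethnic groups, ELL students, economically
# disadvantaged students, and students with disabilities = 6 subgroups.
print(ayp_hurdles(num_subgroups=6))  # prints 29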

Sunderman and Kim (2004) expressed concerns over the racial and ethnic equity of NCLB in states with large minority enrollments. Nichols, Glass, and Berliner (2005) found that states with a larger number and proportion of minority students are under more pressure, and they concluded that the problems associated with high-stakes assessment disproportionately affect minority students.

4. Accommodation Issues in the Assessment of ELL Students

To assist ELL students with their English language limitations, several accommodation strategies have been proposed. However, results of studies on assessment and accommodations for ELL students have raised concerns over the appropriateness of some of these accommodations. For example, researchers have found that many of the commonly used accommodations for ELL students are neither effective in helping ELL students with their language barriers nor do they yield valid accommodated assessment results (see, for example, Abedi, Lord, Hofstetter, & Baker, 2000; Abedi, Hofstetter, & Lord, 2004; Sireci, Li, & Scarpati, 2003). To investigate the relevance of accommodations used for ELL students, Abedi (2006b) analyzed a list of 73 accommodations that have been used nationally in the assessment of ELL students (Rivera, 2003). The results indicated that, of the 73 accommodations used by different states, only 11 (or 15%) were highly relevant for these students. The main questions here are: How do states decide which accommodations to use for English language learners and students with disabilities? What type of criteria do they use for such decisions? Do states use research findings as criteria in their decisions? To answer these questions and to understand states' policies and practices regarding inclusion and accommodations for students with disabilities (SDs) and ELL students, eight states with large numbers of ELLs and/or students with disabilities were surveyed (Pitoniak, Lutkus, Cahalan-Laitusis, Cook, & Abedi, 2006). State testing directors were asked to identify the criteria used in their decisions about the types of accommodations offered to students and about their policies regarding inclusion of both SDs and ELLs in NAEP and state assessments. Responses of the participating states across the two assessments (state and NAEP) were similar. Therefore, we discuss the results of these interviews regarding state assessments both in terms of decisions regarding accommodations and the inclusion of students. In this paper, we limit our discussion to the results for ELL students.

In general, the interview results suggest that decisions regarding both inclusion of and accommodations for ELL students were based mainly on the guidelines and recommendations of the state department of education. All eight participating states indicated that accommodations for ELL students are determined by following guidelines provided by the state department of education, and only two of the eight indicated that accommodations must be language related to assist LEP students with their language needs. This raises serious concerns over the appropriateness of accommodations for these students. It is important to examine the pattern of responses in these interviews in terms of recent research findings in the area of accommodations and inclusion of English language learners. It does not seem that research findings greatly influenced the decision-making process. For example, when states were asked whether they determine accommodations based on opinion or research evidence, only two of the eight states indicated that they determine NAEP accommodations based on research evidence. When states were asked about specific research evidence affecting their decisions regarding accommodations and inclusion of ELL students, a majority of them did not identify research evidence that influenced their decisions. There are, however, some examples of research findings that have a direct bearing on issues of accommodation and inclusion. Results from experimentally controlled studies suggest that accommodations that are language based or consistent with students' language needs are more effective and valid for ELL students than those originally created and proposed specifically for students with disabilities (see, for example, Abedi, Lord, Hofstetter, & Baker, 2000; Abedi, Hofstetter, & Lord, 2004). In contrast, the interviews show that only two of the eight states indicated that they use accommodations that address ELL students' language needs.



Similarly, research evidence supports the use of the length of time students have been instructed in the United States, as well as their level of English proficiency, as valid criteria for decisions regarding appropriate accommodations for ELL students (Abedi, Lord, Hofstetter, & Baker, 2000; Abedi, Hofstetter, & Lord, 2004). The results from the interviews suggested that only three of the eight states reported using such criteria for selecting appropriate accommodations for ELL students.

5. Research Findings That May Help Improve the Validity of Assessment and Accountability Systems for ELL Students

Results from studies on the assessment and accommodation of ELL students can help to identify assessment issues and provide research-based recommendations for improving assessment and accountability systems for ELL students. For example, research on the impact of language factors on the assessment of ELL students demonstrates that unnecessary linguistic complexity may jeopardize the validity of the assessment and accountability system, and it suggests ways to improve both by reducing the unnecessary linguistic complexity of assessments. Abedi and Lord (2001) examined the effects of the linguistic modification approach with 1,031 eighth-grade students in Southern California. In this study, NAEP mathematics items were modified to reduce the complexity of sentence structures and to replace potentially unfamiliar vocabulary with more familiar words. Content-related terminology (i.e., mathematical terms) was not changed. The results showed significant improvements in the scores of ELL and non-ELL students in low- and average-level mathematics classes, but no effect on the scores of other non-ELL students. Among the linguistic features that appeared to contribute to the differences were low-frequency vocabulary and passive voice verb constructions. In another study, Abedi, Lord, and Hofstetter (1998) examined the impact of language modification on the mathematics performance of English learners and non-English learners in a sample of 1,394 eighth graders in schools with large numbers of Spanish speakers enrolled. Results showed that modifications to the language of items contributed to improved performance on 49% of the items. The ELL students generally scored higher on shorter, less linguistically complex problem statements. In another study, on a sample of 946 eighth graders, Abedi, Lord, Hofstetter, and Baker (2000) found that among four different accommodation strategies for ELL students, only the linguistically modified English form narrowed the score gap between English learners and other students. Another study (Abedi, Courtney, & Leon, 2003) examined 1,594 eighth grade students using items from NAEP and TIMSS. Students were given one of the following accommodations: a customized English dictionary (words were selected directly from test items), a bilingual glossary, a linguistically modified test version, or the standard test items. Only the linguistically modified version improved the ELL students' scores without affecting the non-ELL students' scores. Other researchers have employed a language modification approach based on the CRESST studies. Maihoff (2002) found linguistic modification of content-based test items to be a valid and effective accommodation for ELL students. Kiplinger, Haug, and Abedi (2000) found that linguistic modification of math items helped improve the performance of ELL students in math. Rivera and Stansfield (2001) compared English learner performance on regular and simplified fourth and sixth grade science items. Although the small sample size did not show significant differences in scores, the study did demonstrate that linguistic simplification had no effect on the scores of English-proficient students, indicating that linguistic simplification is not a threat to score comparability.

Research findings can also help identify accommodations that make assessments more accessible to ELL students. The results from several studies have demonstrated that some accommodations may impact the validity of assessments. For example, some accommodation strategies, such as glossary use plus extra time, raised the performance of both ELL and non-ELL students, and the increase due to such accommodations was higher for non-ELL students. This raised concerns regarding the validity of these accommodations (Abedi, Lord, Hofstetter, & Baker, 2000; Abedi, Lord, & Hofstetter, 1998). English and bilingual dictionaries were used as other forms of accommodation. The results of these studies suggested that, by gaining access to the definitions of content-related terms, recipients of dictionaries may be advantaged over those who did not have access to dictionaries. This may jeopardize the validity of assessments. The dictionary as a form of accommodation suffers from yet another major limitation: feasibility. For a variety of reasons, it was logistically difficult to provide this form of accommodation to students (Abedi, Courtney, Mirocha, Leon, & Goldberg, 2005). There are, however, some accommodations that help ELL students with their English language needs and do not affect the performance of non-ELL students. For example, studies by CRESST researchers, along with other studies nationwide, suggested that linguistically modified versions of test items are an effective and valid accommodation for ELL students (Abedi, Lord, & Hofstetter, 1998; Abedi, Lord, Hofstetter, & Baker, 2000). Duncan et al. (2005), using NAEP mathematics test data for 402 Spanish-speaking eighth-grade students, found that the students appreciated and preferred dual-language test booklets, with no undue advantage from using the booklet. These investigators found the method promising, but cautioned that more study is needed to determine for which categories of ELLs dual-language test booklets are most useful.

Non-Cognitive Factors

1. Communicative Competence and Socio-cultural Factors

In this article, we use the concept of linguistic complexity in a more comprehensive manner, as it interacts with a complex set of social, cultural, and psychological factors. We believe that the English language learners who suffer the most from language complexity in instruction and assessment differ from their native English speaking peers not only in their language background, but also in their cultural, family, and personal characteristics. We believe, as research results in the past 10 years have clearly suggested, that linguistic and cultural factors may hinder a student's ability to fully understand instructional materials and do well on assessments. Celce-Murcia's model of communicative competence (Celce-Murcia, 1995) provides a comprehensive view of the linguistic and cultural issues that may affect students' academic performance.

Celce-Murcia, Dornyei, and Thurrell (1995) proposed a model that elaborates on the earlier communicative competence model (Canale, 1983) as modified by Celce-Murcia (1995). This model is based on the following six components: linguistic competence, strategic competence, discourse competence, socio-cultural competence, formulaic competence, and paralinguistic competence (see also Bachman, 1990; Bachman & Palmer, 1996). Celce-Murcia (1995) emphasizes the dynamic aspect of her model and indicates that the different components in the model interact with each other:
It should be emphasized that the interactions and overlaps schematically represented in Figure 3 [Figure 2 in this article] are integral aspects of the proposed model of communicative competence. It is not sufficient simply to list all the components as was done in Figure 2; it is important to show the potential overlaps, interrelations and interactions, and to realize that discourse is where all the competencies most obviously reveal themselves. Discourse thus is the component in which (or through which) all the other competencies must be studied, and ultimately assessed, if one is concerned with communicative competence, which is not a hierarchical system of discrete competencies or abilities but a dynamic, interactive construct. (p. 704)

[Figure 2 appears here: a diagram of six overlapping components: socio-cultural competence, linguistic competence, discourse competence, formulaic competence, paralinguistic competence, and strategic competence.]

FIGURE 2. Celce-Murcia's model of interaction among the components of communicative competence.

Figure 2 shows Celce-Murcia's model. In this model, socio-cultural competence represents the speaker/listener's background knowledge of the target community (e.g., understanding the communications, beliefs, values, conventions, and taboos of the target community), which makes informed comprehension and communication possible. Linguistic/grammatical competence and formulaic competence complement each other and allow the user to express original and creative messages or to draw upon conventionalized, prepackaged, and mundane formulaic messages. A comprehensive review of the communicative competence model (CCM) and its related issues is beyond the scope of this paper. However, the main reason for presenting this brief introduction to the CCM is to indicate how complex the role of language factors is in the instruction and assessment of ELL students and how different factors interact within the framework.

2. Affective Factors in Learning and Assessment

There is an extensive literature on the role of affect in learning. Bandura's (1993) very large body of work on social learning theory is based on the notion that learners who feel they have control over their own learning are more apt to excel, and that this sense of control is bound up in perceptions of self-efficacy, that is, how learners feel about themselves and their abilities. Likewise, Nolen-Hoeksema, Girgus, and Seligman (1986) showed how a lack of a feeling of self-efficacy could lead to learned helplessness, resulting in a failure even to try to accomplish tasks that are perceived as too difficult, whether in fact they are or not. English learners are fundamentally learners, like all other students, and so are vulnerable to the same social and psychological factors that affect learning. If they come to feel that they are not capable of learning either a new language or a new content area, they will falter. Similarly, lack of confidence in one's skills has been shown to affect test performance as well. Steele (1997), through a series of novel experiments, demonstrated that stereotypes, or beliefs about how others view their ability, can affect the test performance of both women and minorities. Often subjects are told that others do not believe they are particularly strong in the material being tested, and this results unwaveringly in depressed test performance. It is difficult for the test maker to account for these affective influences on test performance in the building of a test, but the interpretation of test results can certainly take into account the error that arises from these affective factors in particular learners, such as those who are still learning English.

High levels of pressure can also negatively impact learning. The level of pressure on all students, and particularly on ELL students because of language barriers, increases as the stakes associated with assessment increase.

For example, ELL students, their teachers, and their parents may be blamed in part when their schools are not meeting the AYP requirements. Nichols, Glass, and Berliner (2005) indicated that the theory of action underlying the NCLB accountability system is that the pressure of high-stakes testing will increase student achievement. The authors created the Pressure Rating Index (PRI) and correlated the PRI outcome with NAEP results from 1990 to 2003 in 25 states. They found that "States with greater proportions of minority students implement accountability systems that exert greater pressure" and that "High-stakes testing pressure is negatively associated with the likelihood that eighth and tenth graders will move into twelfth grade." They also found that increases in testing pressure are related to larger numbers of students being held back or dropping out of school (p. ii).

3. Conative Factors: Motivational and Volitional Aspects of Learning

Non-cognitive factors, such as attitudes, beliefs, and motivation, have been shown to be related to student achievement (Schreiber, 2002). For instance, using an attitudes inventory, Tapia and Marsh (2000) found that students failing in mathematics scored lowest on self-confidence, motivation, value, and enjoyment. Using nationally representative data on eighth-grade students from the National Education Longitudinal Study of 1988, Singh, Granville, and Dika (2002) found significant positive effects of motivation factors. Research has also found that students who struggle with reading grow to have less desire and motivation to read as a direct result of their struggles (O'Brien & Dillon, 2002). Snow (1994) argued that cognitive processes alone could not account for students' learning, and that both affective and conative (motivational) processes had to be viewed as interacting with cognitive processes to produce learning. There is considerable literature to demonstrate this point in the research on second language learning. A number of studies (see Bialystok & Hakuta, 1994) have shown that individuals who are motivated to learn a language, whether by necessity or desire, are more effective learners of that language. Likewise, we know that the context of second language learning can have powerful effects on motivation, as when the learner perceives that she is being asked to reject her identity by displacing her primary language with a new language. In such cases, second language learning can be retarded (Schmidt, 2000). It was, in fact, this very principle (that students who were made to feel that their primary language and culture had value in the society would be more motivated to learn a second language than those who felt their native language was threatened) that was behind the development of many bilingual-bicultural programs during the 1970s. Adolescents, especially those who may be acutely concerned about how they appear as they struggle with a second language, can experience embarrassment that leads to a lack of motivation to use the language, which in turn delays development of the second language (Tse, 2001). Just as language learning is affected by motivation, so too can test performance be affected. Students who do not care about learning are not likely to care about how they perform on tests of that learning. Gándara et al. (2003) found that many Latino high school students (especially males) who felt marginalized in their schools also professed not to care how they did in school and routinely failed their courses and made little or no effort to study for tests. Given these findings, it is important to examine how policy decisions regarding assessment measures consider possible differences in cognitive, conative, and affective factors for the ELL subgroup. Recent studies on the assessment and accommodation of English language learners and students with disabilities have raised concerns over a variety of issues related to the academic assessment of these students.

Discussion

Fairness demands equal educational opportunities for all students, including ethnic minorities, students with disabilities, low-income students, and English language learners. However, these subgroups of students have historically lagged behind their mainstream peers on test scores due, in part, to factors that may not be closely related to their academic achievement but do influence their performance outcomes.

In order to begin reducing the performance gaps between subgroups and mainstream students, improvements must be made in both instruction and assessment by identifying and controlling for the factors (often referred to as nuisance variables) that affect a subgroup's academic achievement. The four subgroups identified in the NCLB legislation differ in their backgrounds and are faced with different issues in their academic careers. While such differences are relevant to consider, in this paper we focused on assessment and accountability issues for ELL students, a key subgroup that is faced with serious challenges. Understanding the issues involved in the instruction and assessment of ELL students will help us recognize the limitations of the nation's current accountability measures for all the subgroups under consideration. Many different factors, such as parent education, poverty, and schooling conditions, contribute to the existing performance gap between ELL and non-ELL students. However, language factors have a greater impact on ELL student performance than any other factor. While ELL students perform behind non-ELL students in almost all subject areas in school, the performance gap between ELL and non-ELL students is largest in areas with higher levels of language demand. Because ELL students, by definition, do not have a strong command of the English language, both learning and assessment are affected by their limited English proficiency, and both learning and assessment conditions must be addressed to help close the performance gap.

The concept of language as it affects ELL students' performance is truly multidimensional and complex in nature. As elaborated in Celce-Murcia's model of communicative competence, language interacts with many different socio-cultural factors. By understanding the complex nature of language factors, one can then discuss the impact of language on the academic achievement of ELL students. The challenges of second language acquisition, such as learning in a language in which students are not quite proficient and working with measurement tools that are ill-equipped to assess their skills and abilities, have much more impact on ELL students' academic performance than is acknowledged by many researchers and educational practitioners. For example, as elaborated earlier in this paper, English language learners who were average readers showed a deficit in syntactic awareness skills when compared to native English speaking students. Furthermore, the process of developing new language skills is difficult and requires time and effort. Thus, it takes a long time for ELL students to become proficient enough in English to understand teachers' instructions and test questions in a language with an unfamiliar structure and vocabulary. Based on some research evidence, it may take from five to seven years or longer for most ELLs to gain sufficient mastery of academic English to join their English speaking peers in taking full advantage of instruction in English. During this period, ELL students may not be able to learn at the same rate as their native speaking peers when instruction is offered only in English. Limited English proficiency may also make it difficult to benefit fully from teachers' instructions and to understand assessment questions. This has a direct impact on these students' performance on assessments. It is ironic that during the period in which ELL students are struggling to learn English, they are expected to progress at the same rate as their native English speaking peers. In addition, their schools and teachers may receive punitive corrective action for a lack of adequate progress.

Perhaps one of the most challenging tasks in ELL students' academic life is dealing with the often complex linguistic structure of assessment tools. Standardized achievement tests that have been constructed for mainstream students do not take into account the special needs of English learners. This can be a major source of frustration for these students and can contribute substantially to the performance gap between ELL and non-ELL students, as there is no evidence to suggest that the basic abilities of ELL students are different from those of non-ELL students. Sometimes ELL students are provided with accommodations to help offset the linguistic challenges of assessment instruments, but many of these accommodations may not be relevant for these students since they were originally designed for other subgroups with completely different backgrounds and needs. That is, there is not enough evidence that these accommodations make assessments more accessible for ELL students.

Moreover, results of experimentally controlled studies show that some accommodations may alter the construct being measured, which consequently may affect the validity of the assessment. Problems in the assessment and instruction of ELL students, and the impact that language background variables have on students' learning and achievement outcomes, undermine the authenticity of the nation's accountability system for ELL students. The accountability system developed under NCLB has the expressed intent of pressuring schools to be more responsive to the needs of at-risk students who have historically fared poorly in America's public schools. By exerting pressure through sanctions, the belief has been that schools can be made to help these students improve academically. This is certainly a laudable and important goal, although there is far from universal agreement on the degree to which this strategy can achieve it. As Linn (2003) indicated, NCLB offers many contributions to the nation's accountability system. For example, the special attention given to groups of students who have had the lowest achievement in the past is encouraging. However, the goals that NCLB sets for student achievement would be wonderful if they could be reached, but, unfortunately, they are quite unrealistic (Linn, 2003, p. 10).

It is important to consider that there are many other factors, beyond the cognitive domain in general and language factors in particular, that affect the academic achievement of ELL students. Conative factors (motivational and volitional aspects of learning) as well as affective factors can greatly impact students' performance in school. Lack of progress in the face of the serious challenges that ELL students encounter may seriously impact their level of motivation. Undue pressure to perform well under the nation's accountability system, and other sources of pressure in the academic life of ELL students, may cause frustration and impact their academic performance.

Research findings can help to resolve some of the major issues in the instruction and assessment of ELL students and consequently improve the accountability system for these students. While research in this area is very limited in content and scope, and some of the findings may be inconclusive, some of these findings can help improve the existing assessment and accountability systems. For example, by identifying the sources of linguistic complexity of assessments and by providing ways to reduce the level of unnecessary linguistic complexity, research findings have contributed significantly to improving assessments for ELL students. Similarly, by identifying factors that impact the effectiveness and validity of accommodated assessments and by providing research-based recommendations for selecting the most appropriate accommodations for ELL students, research findings can help create a more effective assessment system for ELL students. For example, as explained in this paper, research findings have identified some accommodations that help ELL students with their English language needs without compromising the validity of the assessment.

In this paper we discussed the challenges that English language learners face in learning a new language. We also elaborated on the fact that ELL students transfer their pre-reading skills across languages. This implies that the time they spend learning in any language is going to be roughly predictive of learning to read. Therefore, knowledge of what students know in both primary and secondary languages can yield important information about the cognitive processing skills they can apply to learning to read. Including such information in the assessment and accountability systems for these students would be helpful.

Instruction and assessment for ELL students are affected by many different factors; therefore, it is not possible to present a comprehensive picture of all of the issues involved in the educational experiences of these students. In this article we tried to present a few tangible examples of factors influencing the academic achievement of these students. We hope this discussion might help policy makers, educational researchers, and educational practitioners to understand the complex nature of the academic life of ELL students in particular, and of all four subgroups identified in NCLB in general, and plan their curriculum, instruction, and assessment accordingly.

References
Abedi, J. (2006a). Language issues in item development. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development. Mahwah, NJ: Lawrence Erlbaum Associates.



Abedi, J. (2006b). Are accommodations used for ELL students valid? Presented at the 2006 Annual Meeting of the American Educational Research Association, San Francisco.
Abedi, J., Courtney, M., & Leon, S. (2003). Effectiveness and validity of accommodations for English language learners in large-scale assessments (CSE Tech. Rep. No. 608). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., Courtney, M., Mirocha, J., Leon, S., & Goldberg, J. (2005). Language accommodations for English language learners in large-scale assessments: Bilingual dictionaries and linguistic modification (CSE Tech. Rep. No. 666). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., Herman, J. L., Courtney, M., Leon, S., & Kao, J. (2004). English language learners and math achievement: A study on classroom-level opportunity to learn. Los Angeles: University of California, Center for the Study of Evaluation/National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., Hofstetter, C., & Lord, C. (2004). Assessment accommodations for English-language learners: Implications for policy-based empirical research. Review of Educational Research, 74(1), 1-28.
Abedi, J., Leon, S., & Mirocha, J. (2003). Impact of students' language background on content-based assessment: Analyses of extant data (CSE Tech. Rep. No. 603). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., & Lord, C. (2001). The language factor in mathematics tests. Applied Measurement in Education, 14(3), 219-234.
Abedi, J., Lord, C., & Hofstetter, C. (1998). Impact of selected background variables on students' NAEP math performance (CSE Tech. Rep. No. 478). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., Lord, C., Hofstetter, C., & Baker, E. (2000). Impact of accommodation strategies on English language learners' test performance. Educational Measurement: Issues and Practice, 19(3), 16-26.
Abedi, J., Lord, C., & Plummer, J. (1997). Language background as a variable in NAEP mathematics performance (CSE Tech. Rep. No. 429). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (1999).

August, D., & Shanahan, T. (Eds.). (2006). Developing literacy in second-language learners. Mahwah, NJ: Lawrence Erlbaum Associates.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests. Oxford: Oxford University Press.
Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educational Psychologist, 28, 117–148.
Bialystok, E., & Hakuta, K. (1994). In other words. New York: Basic Books.
California Department of Education (2006). State AYP report. Retrieved June 6, 2006, from www.ayp.cde.gov/reports.
Canale, M. (1983). From communicative competence to communicative language pedagogy. In J. C. Richards & R. W. Schmidt (Eds.), Language and communication. New York: Longman.
Celce-Murcia, M. (1995, March). The elaboration of sociolinguistic competence: Implications for teacher education. Presented at the Georgetown University Round Table on Languages and Linguistics Conference, Washington, DC.
Celce-Murcia, M., Dörnyei, Z., & Thurrell, S. (1995). Communicative competence: A pedagogically motivated model with content specifications. Issues in Applied Linguistics, 6(2), 5–35.
Chiappe, P., Siegel, L. S., & Wade-Woolley, L. (2002). Linguistic diversity and the development of reading skills: A longitudinal study. Scientific Studies of Reading, 6(4), 369–400.
Cocking, R. R., & Chipman, S. (1988). Conceptual issues related to mathematics achievement of language minority children. In R. R. Cocking & J. P. Mestre (Eds.), Linguistic and cultural influences on learning mathematics (pp. 17–46). Hillsdale, NJ: Erlbaum.
De Corte, E., Verschaffel, L., & DeWin, L. (1985). Influence of rewording verbal problems on children's problem representations and solutions. Journal of Educational Psychology, 77(4), 460–470.
Duncan, T., Parent, L., Chen, W.-H., Ferrara, S., Johnson, E., Oppler, S., & Shieh, Y.-Y. (2005). Study of a dual-language test booklet in eighth-grade mathematics. Applied Measurement in Education, 18, 129–161.
Durgunoglu, A., Nagy, W., & Hancin-Bhatt, B. (1993). Cross-language transfer of phonological awareness. Journal of Educational Psychology, 85, 453–465.
Gándara, P., Rumberger, R., Maxwell-Jolly, J., & Callahan, R. (2003). English learners in California schools: Unequal resources, unequal outcomes. Education Policy Analysis Archives, 11(36). Retrieved June 14, 2006, from http://epaa.asu.edu/epaa/v11n36/.

Gándara, P., & Rumberger, R. (in press). Immigration, language, and education: How does language policy structure opportunity? In J. Holdaway & R. Alba (Eds.), Education of immigrant youth: The role of institutions and agency. New York: Social Science Research Council.
Geva, E., & Siegel, L. S. (2000). Orthographic and cognitive factors in the concurrent development of basic reading skills in two languages. Reading and Writing: An Interdisciplinary Journal, 12, 1–30.
Geva, E., Yaghoub-Zadeh, Z., & Schuster, B. (2000). Understanding differences in word recognition skills of ESL children. Annals of Dyslexia, 50, 123–154.
Gottardo, R., Pannucci, J. A., Kuske, C. R., & Brettin, T. (2003). Statistical analysis of microarray data: A Bayesian approach. Biostatistics, 4, 597–620.
Hakuta, K., Butler, Y., & Witt, D. (2000). How long does it take English learners to attain proficiency? (Policy Report 2000-1). University of California Linguistic Minority Research Institute.
Heubert, J. P. (2000). High-stakes testing: Opportunities and risks for students of color, English-language learners, and students with disabilities. In M. Pines (Ed.), The continuing challenge: Moving the youth agenda forward (Policy Issues Monograph 00-02, Sar Levitan Center for Social Policy Studies). Baltimore, MD: Johns Hopkins University Press.
Heubert, J., & Hauser, R. (1999). High stakes: Testing for tracking, promotion, and graduation. Washington, DC: National Research Council.
Katz, A., Low, P., Stack, J., & Tsang, S.-L. (2004). A study of content area assessment for English language learners. Prepared for the Office of English Language Acquisition and Academic Achievement for English Language Learners, U.S. Department of Education. Oakland, CA: ARC Associates.
Kindler, A. L. (2002). Survey of the states' limited English proficient students and available educational programs and services, 2000-2001 summary report. Washington, DC: National Clearinghouse for English Language Acquisition and Language Instruction Educational Programs.
Kintsch, W., & Greeno, J. G. (1985). Understanding and solving word arithmetic problems. Psychological Review, 92(1), 109–129.
Kiplinger, V. L., Haug, C. A., & Abedi, J. (2000). Measuring math – not reading – on a math assessment: A language accommodations study of English language learners and other special populations. Presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
Larsen, S. C., Parker, R. M., & Trenholme, B. (1978). The effects of syntactic complexity upon arithmetic performance. Educational Studies in Mathematics, 21, 83–90.


Lepik, M. (1990). Algebraic word problems: Role of linguistic and structural variables. Educational Studies in Mathematics, 21, 83–90.
Lesaux, N., & Siegel, L. (2003). The development of reading in children who speak English as a second language. Developmental Psychology, 39(6), 1005–1019.
Linn, R. L. (in press). Scientific evidence in educational policy and practice: Implications for adequate yearly progress. In C. Dwyer (Ed.), Measurement and research in the accountability era. Mahwah, NJ: Lawrence Erlbaum Associates.
Linn, R. (2003). Accountability: Responsibility and reasonable expectations. Educational Researcher, 32(7), 3–13.
Maihoff, N. A. (2002). Using Delaware data in making decisions regarding the education of LEP students. Presented at the Council of Chief State School Officers 32nd Annual National Conference on Large-Scale Assessment, Palm Desert, CA.
Mestre, J. P. (1988). The role of language comprehension in mathematics and problem solving. In R. R. Cocking & J. P. Mestre (Eds.), Linguistic and cultural influences on learning mathematics (pp. 200–220). Hillsdale, NJ: Erlbaum.
Moore, K. A., & Redd, Z. (2002). Children in poverty: Trends, consequences, and policy options. Washington, DC: Child Trends.
Munro, J. (1979). Language abilities and math performance. Reading Teacher, 32(8), 900–915.
Nichols, S. L., Glass, G. V., & Berliner, D. C. (2005). High-stakes testing and student achievement: Problems for the No Child Left Behind Act. Tempe, AZ: Education Policy Studies Laboratory.
Nolen-Hoeksema, S., Girgus, J., & Seligman, M. (1986). Learned helplessness in children: A longitudinal study of depression, achievement, and explanatory style. Journal of Personality and Social Psychology, 51, 435–442.
Noonan, J. (1990). Readability problems presented by mathematics text. Early Child Development and Care, 54, 57–81.

O'Brien, D. G., & Dillon, D. R. (2002). Motivation and engagement in reading (Module prepared for the Minnesota Reading Excellence Act Grant and the Minnesota Reading First Grant). Minneapolis, MN: University of Minnesota and the Minnesota State Department of Education.
Orr, E. W. (1987). Twice as less: Black English and the performance of black students in mathematics and science. New York: W. W. Norton.
Parrish, T. B., Merickel, A., Perez, M., Linquanti, R., Socia, M., Spain, A., Speroni, C., Esra, P., Brock, L., & Delancey, D. (2006). Effects of the implementation of Proposition 227 on the education of English learners, K-12: Findings from a five-year evaluation. Palo Alto, CA: American Institutes for Research and WestEd.
Pitoniak, M., Lutkus, A., Cahalan-Laitusis, C., Cook, L., & Abedi, J. (2006). Are inclusion policies and practices for state assessment systems and NAEP state assessments aligned? Princeton, NJ: Educational Testing Service.
Rivera, C. (2003). State assessment policies for English language learners. Presented at the 2003 Large-Scale Assessment Conference.
Rivera, C., & Stansfield, C. W. (2001). The effects of linguistic simplification of science test items on performance of limited English proficient and monolingual English-speaking students. Presented at the annual meeting of the American Educational Research Association, Seattle, WA.
Rothman, R. W., & Cohen, J. (1989). The language of math needs to be taught. Academic Therapy, 25(2), 133–142.
Rumberger, R., & Gándara, P. (2004). Seeking equity in the education of California's English learners. Teachers College Record, 106, 2031–2055.
Schmidt, R. (2000). Language policy and identity politics in the United States. Philadelphia: Temple University Press.
Schreiber, J. B. (2002). Institutional and student factors and their influence on advanced mathematics achievement. Journal of Educational Research, 95(5), 274–286.

Siegel, L. S. (1993). The development of reading. In H. W. Reese (Ed.), Advances in child development and behavior (pp. 63–97). San Diego, CA: Academic Press.
Singh, K., Granville, M., & Dika, S. (2002). Mathematics and science achievement: Effects of motivation, interest, and academic engagement. Journal of Educational Research, 95(6), 323–332.
Sireci, S. G., Li, S., & Scarpati, S. (2003). The effects of test accommodation on test performance: A review of the literature (Center for Educational Assessment Research Report No. 485). Amherst, MA: University of Massachusetts.
Snow, R. E. (1994). Abilities in academic tasks. In R. J. Sternberg & R. K. Wagner (Eds.), Mind in context: Interactionist perspectives on human intelligence (pp. 3–37). Cambridge, England: Cambridge University Press.
Spanos, G., Rhodes, N. C., Dale, T. C., & Crandall, J. (1988). Linguistic features of mathematical problem solving: Insights and applications. In R. R. Cocking & J. P. Mestre (Eds.), Linguistic and cultural influences on learning mathematics (pp. 221–240). Hillsdale, NJ: Erlbaum.
Steele, C. M. (1997). A threat in the air: How stereotypes shape intellectual identity and performance. American Psychologist, 52(6), 613–629.
Sunderman, G., & Kim, J. (2004). Inspiring vision, disappointing results: Four studies on implementing the No Child Left Behind Act. Boston: Harvard University, The Civil Rights Project.
Tapia, M., & Marsh, G. E. (2000). Effect of gender, achievement in mathematics, and ethnicity on attitudes toward mathematics. Presented at the annual meeting of the Mid-South Educational Research Association, Bowling Green, KY.
Tse, L. (2001). Why don't they learn English? Separating fact from fallacy in the U.S. language debate. New York: Teachers College Press.
Zehler, A., Fleischman, H., Hopstock, P., Stephenson, T., Pendzick, M., & Sapru, S. (2003). Descriptive study of services to limited English proficient students. Washington, DC: Development Associates.
