Edward J. Fuller1
Abstract
The National Council on Teacher Quality's (NCTQ) recent review of university-based teacher preparation programs
concluded that the vast majority of such programs were inadequately preparing the nation's teachers. The study, however, has
a number of serious flaws that include a narrow focus on inputs, lack of a strong research base, missing standards, omitted
research, incorrect application of research findings, poor methodology, exclusion of alternative certification programs, failure
to conduct member checks, and failure to use existing evidence to validate the report's rankings. All of these issues render
the NCTQ report less than useful in efforts to understand and improve teacher preparation programs in the United States.
The article also suggests alternative pathways NCTQ could have undertaken to work with programs to actually improve
teacher preparation. The article concludes by noting that the shaky methods used by NCTQ suggest shaky motives, such that
NCTQ's true purpose in producing the report must be questioned.
Keywords
preservice education, educational policy, education reform
Introduction
Recent headlines and leading remarks about U.S. teacher
preparation programs proclaim "Teacher prep programs get
failing marks" (Sanchez, 2013), "University programs that
train U.S. teachers get mediocre marks in first-ever ratings"
(Layton, 2013), and "The nation's teacher-training programs do
not adequately prepare would-be educators for the classroom, even as they produce almost triple the number of graduates needed" (Elliot, 2013).
As readers of the Journal of Teacher Education are likely
aware, these remarks stem from the recently released Teacher
Prep Review by the National Council on Teacher Quality
(NCTQ, 2013b). Partnering with the U.S. News & World
Report, the NCTQ released their evaluation of university-based teacher education programs in the United States based
on 18 standards developed by NCTQ. The study is another
critique of U.S. teacher preparation programs that follows a
long history of critiquing teacher preparation programs
(Zeichner & Liston, 1990).
Critics of traditional teacher preparation have used the
report as evidence that teacher preparation in the United
States is broken and that we need to fix the system by either
radically changing traditional university-based programs
or abandoning traditional programs in favor of alternative programs. For example, Arthur Levine (2013) wrote,
Corresponding Author:
Edward J. Fuller, Penn State University, 204D Rackley Bldg., University
Park, PA 16802, USA.
Email: ejf20@psu.edu
Purpose
The purpose of this commentary is to examine the effort by
NCTQ to evaluate, judge, and rank university-based teacher
preparation programs using a one- to four-star system. This
commentary is important for those in the field of teacher
preparation for two primary reasons. First, the NCTQ report
will be conducted in future years and those seeking to attack
and dismantle university-based preparation programs will
use the reports as evidence of the poor quality of such programs, as shown above. Those in Colleges of Education,
particularly in teacher preparation programs, need to be
acutely aware of the report details and the problems with the
report so that they can engage effectively with others in a
thoughtful and educated manner. Indeed, I contend being
knowledgeable about the political happenings in our field is
part of the job duties of a professor. In particular, such
knowledge is necessary to thoughtfully discuss the issue
with the media and policymakers at all levels, including
those at your own university. This is an important role for
faculty that has traditionally been largely ignored but is
increasingly important given the unrelenting attacks on education in the mainstream media. Finally, despite the flaws of
the NCTQ report, it does accurately document the paucity of
research examining the association between what happens
in preparation programs and outcomes such as teacher
placement, teacher retention, teacher sense of self-efficacy,
Method
This commentary is based on my own analysis of the NCTQ
report as well as a number of other critiques of the report. My
own analysis was initially posted as a blog the day before the
report was released and was based on the many problems
with the past NCTQ reports. Subsequent to the release of the
study, I expanded my critique based on the details of the
report. Finally, for this commentary, I read a number of critiques of the NCTQ report from numerous organizations and
scholars in the field.
While this review encompasses the major critiques made
by others, it also includes my own unique critiques from my
experiences in the field as a researcher. Thus, most of my
unique contribution appears in the critique of the NCTQ
methodology and in the critique concerning the exclusion of
alternative preparation programs. My qualifications for making such critiques are presented below.
teacher preparation programs in Texas, including alternative certification programs (ACPs). Thus, I have experience
in working with teacher preparation program data and creating report cards on such programs.
Finally, I am a strong proponent of thoughtfully collecting data and carefully analyzing such data as a means to provide useful feedback to preparation program personnel,
make available information to prospective preparation program students, and hold preparation programs accountable.
Yet, I cannot emphasize enough how careful such efforts
need to be because collecting and appropriately analyzing
such data is terribly complex and requires highly skilled
researchers with deep knowledge of preparation programs.
My commitment to these ideals is evidenced by my aforementioned activities in Texas.
not researchers. While thinkers and practitioners can undoubtedly provide useful insight, researchers are critical to such
standard setting. In fact, many beliefs based on common
sense turn out to be incorrect after research examines an issue.
In the full report,4 NCTQ provides a difficult-to-interpret
graph about the sources of support for the various standards.
The most striking revelation of the graph is that high-quality
research was only a very small source for the development
and adoption of the standards.
Even the "research consensus" portion of the graph, however, is quite misleading, as will be explained below. To their
credit, NCTQ does provide additional documentation for
each standard by providing the number of research studies
supporting each standard in separate documents located on
their website at http://www.nctq.org/teacherPrep/ourApproach/standards/. For each standard, NCTQ (2013a) classified research in two stages: "first
considering design strength relative to several variables
common to research designs, and second, considering
whether student effects (as measured by external, standardized assessments) were considered" (p. 2). More detailed
descriptions of stronger and weaker designs as defined by
NCTQ are included in the appendix.
Using the tables provided by NCTQ for each standard, I
created Table 1 that includes the number and percentage of
studies for each standard within the four possible categories
created by NCTQ. As shown in Table 1, only 9 of the 18
standards, just 50%, rely on more than one study classified as having a strong design and a focus on student test
scores. Astonishingly, 7 of the 18 standards did not have a
single study classified as having a strong design and a focus
on student test scores. Only three standards, selection criteria, elementary mathematics, and high school content, had
five or more such studies. Thus, I would argue only three
standards have enough studies to create some sort of consensus that a particular standard is associated with positive student outcomes.
Even this is misleading in two ways. First, NCTQ does not
provide any connection between the listed research studies
and the individual indicators within each standard, the core
subject areas included in the study (elementary reading and
mathematics, English language arts, mathematics, science,
and social studies), or the school levels addressed (elementary schools, middle schools, and high schools). Thus, while
a standard may have a few supportive research studies, we do
not know how well research supports the actual indicators
used by NCTQ or whether the research supports the use of
those indicators across the various subject areas and school
levels. For example, while the research provided by NCTQ
on secondary content provides some limited evidence of the
importance of subject matter knowledge in improving student
achievement, NCTQ uses the evidence to adopt an indicator
that measures whether a graduate has at least 30 hr of content
courses or a major in the field. The research cited by NCTQ,
however, does not support the adoption of this indicator in
English language arts or social studies. More disturbingly,
Table 1. Number and Percentage of Studies Per Standard by Strength of Methods and Examination of Student Outcomes.

Standard | Strong design, outcomes: n (%) | Strong design, no outcomes: n (%) | Weak design, outcomes: n (%) | Weak design, no outcomes: n (%) | Total studies
Selection criteria | 6 (46.2) | 6 (46.2) | 0 (0.0) | 1 (7.7) | 13
Early reading | 2 (9.5) | 1 (4.8) | 1 (4.8) | 17 (81.0) | 21
English language learners | 0 (0.0) | 1 (33.3) | 0 (0.0) | 2 (66.7) | 3
Struggling readers | 2 (16.7) | 0 (0.0) | 1 (8.3) | 9 (75.0) | 12
CC Elementary mathematics | 5 (14.3) | 6 (17.1) | 0 (0.0) | 24 (68.6) | 35
CC Elementary content | 2 (13.3) | 2 (13.3) | 0 (0.0) | 11 (73.3) | 15
CC middle school content | 3 (33.3) | 2 (22.2) | 0 (0.0) | 4 (44.4) | 9
CC high school content | 5 (35.7) | 2 (14.3) | 0 (0.0) | 7 (50.0) | 14
Special education | 0 (0.0) | 1 (16.7) | 0 (0.0) | 5 (83.3) | 6
Classroom management | 2 (9.1) | 2 (9.1) | 0 (0.0) | 18 (81.8) | 22
Assessment and data | 0 (0.0) | 2 (7.4) | 8 (29.6) | 17 (63.0) | 27
Equity | 2 (5.3) | 1 (2.6) | 0 (0.0) | 35 (92.1) | 38
Student Teaching 1 | 0 (0.0) | 1 (5.6) | 0 (0.0) | 17 (94.4) | 18
Student Teaching 2 | 1 (6.7) | 0 (0.0) | 0 (0.0) | 14 (93.3) | 15
Secondary methods | 1 (10.0) | 0 (0.0) | 0 (0.0) | 9 (90.0) | 10
Instruction design for special education | 0 (0.0) | 1 (6.7) | 0 (0.0) | 14 (93.3) | 15
Outcomes | 0 (NA) | 0 (NA) | 0 (NA) | 0 (NA) | 0
Evidence of effectiveness | 0 (NA) | 0 (NA) | 0 (NA) | 0 (NA) | 0
Total | 31 (11.4) | 28 (10.3) | 10 (3.7) | 204 (74.7) | 273
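Table 1's bottom row can be double-checked with a few lines of arithmetic; this sketch uses only the four category totals reported in the table:

```python
# Category totals across all 18 NCTQ standards, from the bottom row of Table 1.
totals = {
    "strong design, outcomes": 31,
    "strong design, no outcomes": 28,
    "weak design, outcomes": 10,
    "weak design, no outcomes": 204,
}
all_studies = sum(totals.values())  # 273 studies in total

for category, n in totals.items():
    share = round(100 * n / all_studies, 1)
    print(f"{category}: {n} of {all_studies} ({share}%)")
```

Only 31 of the 273 cited studies (11.4%) combine a strong design with student outcomes, which is the basis for the argument above.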
effort to meet standards that additional research may determine to not be associated with important outcomes.
Elementary

Standard | Scored | % scored
Selection criteria | 1,175 | 100.0
Early reading | 609 | 51.8
CC Elementary mathematics | 712 | 60.6
CC Elementary content | 1,175 | 100.0
Student teaching | 659 | 56.1
English language learners | 527 | 44.9
Struggling readers | 621 | 52.9
Classroom management | 420 | 35.7
Lesson planning | 335 | 28.5
Assessment and data | 337 | 28.7
Outcomes | 496 | 42.2
Evidence of effectiveness | 1 | 0.1

Secondary

Standard | Scored | % scored
Selection criteria | 1,146 | 100.0
CC high school content | 1,121 | 97.8
CC middle school content | 1,146 | 100.0
Student teaching | 619 | 54.0
Classroom management | 420 | 36.6
Lesson planning | 333 | 29.1
Assessment and data | 321 | 28.0
Secondary methods | 665 | 58.0
Outcomes | 497 | 43.4
Evidence of effectiveness | 0 | 0.0
With respect to production, 55% of the individuals obtaining an initial teaching certificate from an in-state teacher
preparation program in Texas from 2003 to 2010 were from
ACPs.6 In comparison, only 40% of newly certified individuals were from traditional university-based undergraduate
programs. Moreover, since 2008, 33% of all newly certified
teachers were from privately managed programs that tend to
have very low grade point average (GPA) requirements (or
none at all) and, in some cases, provide no preservice hours
prior to the person entering the classroom despite state regulations that require such hours (Vigdor & Fuller, 2012).
Furthermore, individuals from the privately managed
ACPs tend to be much more likely to fail content certification
tests. For example, Table 3 documents the number of certification test-takers, number of those test-takers passing the test
on the first attempt, and the percentage passing on the first
attempt for selected Texas Examination of Educator Standards
(TExES) certification tests in the 2012 academic year. For
four of the six secondary content tests, individuals from
university-based programs (including university-based ACPs)
had passing rates more than 20 percentage points greater than
individuals from privately managed alternative programs.
There were large differences for the other tests as well.
Such results cannot be explained by basic differences in
teacher demographics (race/ethnicity, sex, or age). Indeed,
using logistic regression analysis, Vigdor and Fuller (2012)
examined individual certification scores on the TExES tests
administered from 2003 to 2007 by type of certification program. The regression analysis was based on the following
model:
ln(P / (1 − P)) = α + β1(PC) + β2(PT) + ε,

where P = the probability of failing the certification test, α =
a constant, PC = personal characteristics (sex is female, race/
ethnicity is White, the interaction of sex and race/ethnicity,
age, and age squared), and PT = program type (private ACP,
Table 3. Number of Test-Takers, Number of Test-Takers Passing on Initial Attempt, and Percentage of Test-Takers Passing on Initial Attempt for Selected TExES Certification Tests in Texas (2012).

[Table values not recoverable from the extracted text. Columns report takers, number passing, and percentage passing for university-based programs, all AC programs, AC programs (not private), and AC programs (private); rows cover the Generalist EC-6, Generalist 4-8, English 4-8, Math 4-8, Science 4-8, English 8-12, Mathematics 8-12, Science 8-12, and all generalist tests.]
Table 4. Odds Ratios and p-Values for Logistic Regression Analysis of Individuals Failing a Texas Certification Examination, 2003-2007.

Program | Other ACP Exp(B) | Other ACP p | ACP: Private Exp(B) | ACP: Private p
Generalist EC-4 | 1.044 | .114 | 1.551 | .000
English 4-8 | 1.078 | .567 | 1.439 | .004
Math 4-8 | 0.995 | .943 | 1.287 | .001
Science 4-8 | 0.919 | .254 | 1.323 | .001
Generalist 4-8 | 0.880 | .149 | 1.648 | .000
English 8-12 | 1.258 | .002 | 1.548 | .000
Math 8-12 | 0.969 | .598 | 1.171 | .022
Science 8-12 | 0.833 | .030 | 1.176 | .063
[Figure: program outcome values (vertical axis, 0 to 100) plotted against NCTQ Stars (horizontal axis, 0.0 to 3.5).]
Summary. Thus, there are at least three major issues with the
methodology used by NCTQ. Most troublesome is the failure of NCTQ to examine the relationship between their rankings and important preparation program outcomes. There are
certainly other issues that have been mentioned by the many
other individuals who have critiqued the study. Ultimately,
all of the methodological issues cast serious doubt on the
findings by NCTQ. Indeed, given the seriousness of the
issues, the findings of the report should be ignored by the
public and policymakers.
Process variables and input variables that could be gathered include the following:
- Effective instruction
- Number of required courses
- Number of required clinical hours
- Quality of mentoring
- Change in content knowledge
- Change in pedagogical knowledge
- Qualifications of instructors
- Class size
- Supervisor-student teacher ratio
- Course content
- Coherency of courses
- Number of teachers per mentor
collected. Equally important as data collection is data analysis. Many of the outcomes are influenced by factors outside
the control of the program. Thus, appropriate statistical
methodologies would need to be used to accurately assess
outcomes for individual programs.
My list is certainly not exhaustive and not all of the variables are substantiated by a peer-reviewed body of literature.
It does, however, provide ideas for those engaged in efforts
to gather and analyze data on teacher preparation programs
with the intent of improving practice. Data points, however,
regardless of how they are collected, simply do not provide
enough information to make high-stakes decisions about
teacher preparation programs.
Again, I come back to the Texas case because I know it
quite well. Texas was the first state to adopt an educator
accountability system. The system was based purely on data
and almost entirely on the passing rates of graduates on the
state certification exams. A few programs were cited as
unacceptable and in need of improvement and all of those
programsto the best of my knowledgeresponded appropriately and increased their passing rates. After the explosion
of privately managed programs after 2003, numerous complaints from teachers from such programs and principals
employing the graduates of such programs became more pronounced each year. Partially in response to these complaints,
the Texas state legislature passed a bill that created a
Consumer Report Card for all teacher preparation programs
in Texas that included a wealth of information such as
entrance requirements, placement rates, retention rates, and
other data on programs.
Ultimately, in addition to implementing the state-mandated
consumer report card, the state also started making state-mandated site visits to programs to conduct audits. These
audits must occur at least once every 5 years. While largely
focused on compliance with state statutes, the audits provided a much more in-depth assessment of the behaviors of
Conclusion
As shown above, there are a number of very serious problems
with the NCTQ report. These issues range from the rationale
for the review's standards to various methodological problems. Myriad other problems with the review exist that are
well documented by others elsewhere.9 Not mentioned previously is the issue of applying the same set of standards across
all certification areas at all levels and holding all areas and
levels accountable to the same standards. Should research
focus on the effective practices specific to each certification
area and level and then identify the commonalities across all
programs? Or, alternatively, should a set of generic standards
that apply to all certification areas and levels serve as the
focus of research that examines the association between the
standards and outcomes? The answers to these questions are
certainly not clear. However, without sufficient evidence in
all certification areas and levels, NCTQ has established a
common set of standards that apply to all areas and levels.
This risks losing the important differences in effective practice across areas and levels.
Finally, and most disturbingly, the star ranking system
does not even appear to be associated with program outcomes such as licensure/certification test passing rates or the
aggregate value-added scores in reading or mathematics of
programs. NCTQ could have chosen to ensure some semblance of a correlation between their star system and outcomes using publicly available data from various states, yet
they chose not to. NCTQ's refusal to even attempt to validate
their own effort gives substantial support to those who
believe NCTQ has absolutely no intention of helping traditional university-based programs and has every intention of
destroying such programs and replacing them with a market-based system of providers.
As I have shown above, Texas went down that route and
the results were not pretty. Given that their existence relied upon
students enrolling in programs, privately managed alternative programs allowed individuals with less than a 2.0 undergraduate GPA to enter programs. These same programs, not
surprisingly, had abysmally low passing rates on the state
certification examinations. The programs even allowed
uncertified individuals to enter the classroom and instruct
students. Does NCTQ really believe that a Wild West free-market system will increase the quality of the preparation of
teachers and improve student outcomes?
If NCTQ wants to truly help improve student outcomes
by improving teacher preparation, they should stop using
incredibly weak methods, unsubstantiated standards, and
unethical evaluation strategies to shame programs and start
working with programs to build a stronger research base and
information system that can be used by programs to improve
practice. Yes, teacher preparation certainly has room for
improvement, but throwing rocks from a glass house is not
helpful to anyone but NCTQ and the organizations funding
the NCTQ study.
Given the very shaky foundation upon which the NCTQ
review was built and the shaky motives of NCTQ in conducting the review, the entire review should be discounted by educators, policymakers, and the public. If NCTQ were truly
interested in improving all teacher preparation programs, there
are certainly different pathways that could have been chosen.
For example, NCTQ could have invested resources to
conduct high-quality studies examining the association
between inputs and processes with outcomes. Validity studies could have been conducted in states with easily accessible outcome data such as Louisiana, Florida, North Carolina,
Washington, and Texas. Furthermore, NCTQ could have
chosen to create state working groups to discuss the different
details of available data so that NCTQ employees would not
misinterpret the data and to use member checks to ensure
reported data was accurate. NCTQ could have chosen to
include all programs, not just university-based programs. A
completely alternative pathway could have been to simply
report the findings from the study and, instead of assigning
stars, simply argue that programs and institutions like NCTQ
should work together to improve data collection and analysis
as a means to improve program outcomes.
In the end, NCTQ chose the pathway that rejected the
voices of those educators highly committed to improving
teacher preparation and chose to highlight their own voices
and agenda instead. This has damaged any sense of partnership between teacher preparation programs and NCTQ.
As such, funding should be provided to organizations truly
committed to the improvement of teacher preparation rather than
to those that care mostly about their own level of influence.
Appendix
Classification of Research Studies
The National Council on Teacher Quality (NCTQ) provides the
following information about the classification of research
studies into strong or weak designs:
Studies with stronger design use some sort of control or
comparison group in an experiment, natural or otherwise, or use
a multiple regression for evaluation. These studies have a sample
size of 100 or more unless the subjects involved are not
individuals (e.g., teacher preparation programs) in which case
the minimum sample size was determined based on the context
of the study and the nature of the subjects. In the case of
experiments, the number of subjects in each of the treatment and
control groups had to total 100 or more to classify the relevant
study as having strong design. In cases in which dyadic groups
were analyzed, 50 participants constituted the minimum sample
size for categorization as having strong design.
Studies with weaker design have no comparison or control,
are often simply case studies with potential selection bias and
rely on survey or otherwise qualitative data. These studies have
a sample size of fewer than 100.
Some studies with control groups were categorized as having
weak design when the control group was inappropriately selected
or the study did not provide enough details about the control
group to rule out significant differences between the treatment
and control groups.
In the case of studies that had both strong and weak characteristics,
categorization was determined by whether the research would
be useful for teacher educators, teacher education program
administrators and/or policymakers. If it seemed potentially
useful, it was categorized as strong design.
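As a reading aid, NCTQ's stated rules can be restated as a decision procedure. This is my own paraphrase with simplified inputs; it deliberately omits the judgment calls NCTQ describes (context-specific thresholds for non-individual subjects, inappropriately selected control groups, and the "potentially useful" tie-breaker for mixed cases):

```python
def classify_design(has_comparison_group: bool,
                    uses_multiple_regression: bool,
                    sample_size: int,
                    dyadic_groups: bool = False) -> str:
    """Paraphrase of NCTQ's stated strong/weak design rules.

    Strong: a control/comparison group or a multiple regression,
    with at least 100 subjects (50 when dyadic groups are analyzed).
    Weak: everything else (no comparison, or too small a sample).
    """
    minimum = 50 if dyadic_groups else 100
    if (has_comparison_group or uses_multiple_regression) and sample_size >= minimum:
        return "strong"
    return "weak"

print(classify_design(True, False, 120))                      # comparison group, n=120
print(classify_design(False, False, 250))                     # no comparison at all
print(classify_design(True, False, 60, dyadic_groups=True))   # dyadic, lower threshold
```

Even in this simplified form, the rules show how coarse the classification is: design features and a sample-size cutoff, with no weighting of study quality beyond that.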
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Notes
1. For examples of these critiques, see http://aacte.org/resources/
nctq-usnwr-review/responses-to-2013-nctq-us-news-a-worldreport-review.html
2. Goldhaber and Liddle (2011) found that the inclusion of
school fixed effects did not substantially alter the rankings of
References
American Association of Colleges of Teacher Education. (2013,
June 18). NCTQ review of nation's education schools deceives,
misinforms public. Washington, DC: Author. Retrieved from
http://aacte.org/news-room/press-releases/nctq-review-ofnations-education-schools-deceives-misinforms-public.html
Boyd, D. J., Grossman, P. L., Lankford, H., Loeb, S., & Wyckoff,
J. (2009). Teacher preparation and student achievement.
Educational Evaluation and Policy Analysis, 31(4), 416-440.
Coggshall, J. G., Bivona, L., & Reschly, D. J. (2012). Evaluating
the effectiveness of teacher preparation programs for support
and accountability. Washington, DC: National Comprehensive
Center for Teacher Quality.
Darling-Hammond, L. (2006). Assessing teacher education: The usefulness of multiple measures for assessing program outcomes.
Journal of Teacher Education, 57(2), 120-138.
Darling-Hammond, L. (2013, June 19). Why the NCTQ teacher
prep ratings are nonsense. Palo Alto, CA: Stanford Center for
Opportunity Policy in Education.
Dooley, C. M., Meyer, C., Ikpeze, C., O'Byrne, I., Kletzien, S.,
Smith-Burke, T., . . . Dennis, D. (2013). LRA response to the
NCTQ Review of Teacher Education Programs. Retrieved
from http://www.literacyresearchassociation.org/pdf/LRA%20
Response%20to%20NCTQ.pdf
Eduventures. (2013, June 18). A review and critique of the National
Council on Teacher Quality (NCTQ) methodology to rate schools of
education. Retrieved from http://www.eduventures.com/2013/06/areview-and-critique-of-the-national-council-on-teacher-qualitynctq-methodology-to-rate-schools-of-education/
Elliot, P. (2013, June 18). Too many teachers, too little quality.
Yahoo News. Retrieved from http://news.yahoo.com/reporttoo-many-teachers-too-little-quality-040423815.html
Goldhaber, D. (2007). Everyone's doing it, but what does teacher
testing tell us about teacher effectiveness? Journal of Human
Resources, 42(4), 765-794.
Goldhaber, D., & Liddle, S. (2011). The gateway to the profession: Assessing teacher preparation programs based on student achievement (Working Paper No. 2011-2.0). Seattle, WA:
Center for Education Data and Research.
Harris, D. N., & Sass, T. R. (2011). Teacher training, teacher quality, and student achievement. Journal of Public Economics,
95(7), 798-812.
Kamras, J., & Rotherham, A. (2007). America's teaching crisis.
Democracy. Retrieved from http://www.democracyjournal.
org/5/6535.php?page=all
Layton, L. (2013, June 18). University programs that train
U.S. teachers get mediocre marks in first-ever ratings. The
Washington Post. Retrieved from http://www.washingtonpost.com/local/education/university-programs-that-train-usteachers-get-mediocre-marks-in-first-ever-ratings/2013/06/17/
ab99d64a-d75b-11e2-a016-92547bf094cc_story.html
Levine, A. (2013, June 21). Fixing how we train U.S. teachers. The
Hechinger Report. Retrieved from http://hechingerreport.org/
content/fixing-how-we-train-u-s-teachers_12449/
Lincoln, Y. S., & Guba, E. G. (1985). Establishing trustworthiness.
In Y. S. Lincoln & E. G. Guba (Eds.), Naturalistic inquiry (pp.
289-331). Newbury Park, CA: SAGE.
Mihaly, K., McCaffrey, D., Sass, T., & Lockwood, J. R. (2012).
Where you come from or where you go? Distinguishing
between school quality and the effectiveness of teacher preparation program graduates (CALDER Working Paper No. 63).
Washington, DC: CALDER and American Institutes for
Research.
Monk, D. H. (1994). Subject area preparation of secondary
mathematics and science teachers and student achievement.
Economics of Education Review, 13, 125-145.
Montano, T. (2013, June 28). Debunking NCTQ's teacher prep
review. California Teachers Association, Retrieved from
http://www.calitics.com/showDiary.do;jsessionid=78C550C8
45B509AFEDB0BA9C1A9DB64E?diaryId=15104
National Council on Teacher Quality. (2013a). Standards.
Washington, DC: Author. Retrieved from http://www.nctq.org/
teacherPrep/ourApproach/standards/
National Council on Teacher Quality. (2013b). Teacher prep
review. Washington, DC: Author.
National Council on Teacher Quality Audit Panel. (2013). Audit
panel statement on the NCTQ teacher prep review. Washington,
DC: National Council on Teacher Quality. Retrieved from
http://nctq.org/dmsView.do?id=2181
Patton, M. Q. (2002). Qualitative research and evaluation methods
(3rd ed.). Thousand Oaks, CA: SAGE.
Pearson, P. D., & Goatley, V. (2013, July 2). Response to the NCTQ
teacher education report. Newark, DE: International Reading
Association. Retrieved from http://www.reading.org/general/
Publications/blog/LRP/literacy-research-panel/2013/07/02/
response-to-the-nctq-teacher-education-report
Pugach, M. C., & Blanton, L. P. (2012). Enacting diversity in dual
certification programs. Journal of Teacher Education, 63(4),
254-267.
Sanchez, C. (2013, June 18). Study: Teacher prep programs get failing marks. National Public Radio. Retrieved from http://www.
npr.org/2013/06/18/192765776/study-teacher-prep-programsget-failing-marks
Stanovich, P. J., & Stanovich, K. E. (2003). Using research and
reason in education: How teachers can use scientifically
based research to make curricular and instructional decisions.
Portsmouth, NH: RMC Research Corporation.
State Auditor's Office. (2008). An audit report on the Texas Education Agency's oversight of alternative teacher certification
programs. Austin, TX: Author.
Vigdor, J., & Fuller, E. J. (2012). Examining teacher quality in
Texas. Unpublished expert witness report for Texas school
finance court case: Texas Taxpayer and Student Fairness
Coalition v. Robert Scott and State of Texas.
Author Biography
Edward J. Fuller is an associate professor in the Department of
Educational Administration at Penn State University. He also
serves as the director for the Center for Evaluation and Education
Policy Analysis and associate director of policy for the University
Council for Educational Administration.