DOI 10.1007/s11135-004-1670-0
Abstract. The purpose of this paper is to provide evidence that the debate between
quantitative and qualitative research is divisive and, hence, counterproductive for advancing the social and
behavioral science field. We advocate that all graduate students learn to utilize and to
appreciate both quantitative and qualitative research methodologies. As such, students will
develop into pragmatist researchers who are able to utilize both quantitative and qualitative
techniques when conducting research. We contend that the best way to accomplish this is by
eliminating quantitative research methodology and qualitative research methodology courses
from curricula and replacing these with research methodology courses at different levels that
simultaneously teach both quantitative and qualitative techniques within a mixed methodo-
logical framework.
Key words: research methods courses; teaching research; qualitative research; quantitative
research; mixed methods; pragmatist researcher
1. Introduction
The last several decades have witnessed intense and sustained debates about
quantitative and qualitative research paradigms. Unfortunately, this has led
to a great divide between quantitative and qualitative researchers in general
and between instructors of quantitative and qualitative research methodol-
ogy courses in particular, who often view themselves as being in competition
with each other. This polarization has promoted what Onwuegbuzie (2000a)
has termed ‘‘uni-researchers,’’ namely, researchers who restrict themselves
An earlier version of this article received the 2003 Southwest Educational Research
Association (SERA) Outstanding Paper Award.
Author for correspondence: Anthony J. Onwuegbuzie, Department of Educational Mea-
surement and Research, College of Education, University of South Florida, 4202 East Fowler
Avenue, EDU 162, Tampa, FL 33620-7750. E-Mail: tonyonwuegbuzie@aol.com
268 ANTHONY J. ONWUEGBUZIE AND NANCY L. LEECH
For slightly more than 100 years, research methodology in the social and
behavioral sciences has undergone four major phases. The first phase, ending
just prior to the late 19th century, was characterized by the popularization of
the quantitative research paradigm. During this era, positivism prevailed, in
which mathematical and statistical procedures were utilized to explore, to
describe, to explain, to predict, and to control social and behavioral phe-
nomena (Smith, 1983; Smith and Heshusius, 1986; Onwuegbuzie, 2000a; cf.
Johnson and Onwuegbuzie, 2004). Social science positivists promoted
research studies that were value-free, using rhetorical neutrality that resulted
in discoveries of social laws, from which time- and context-free
generalizations ensued.
The beginning of the 20th century marked the second research method-
ology phase in the social and behavioral sciences. This phase involved the
emergence of the qualitative research paradigm. Proponents of this school of
thought rejected the positivistic use of the traditional scientific method to
study social observations (Hodges, 1944, 1952; Ermarth, 1978; Crotty, 1998).
Instead, they advocated the use of interpretive/hermeneutic approaches in the
social and behavioral science field. Moreover, they contended that social
reality was constructed and thus was subjective. As such, this phase of history
is distinguished by the polarization of the quantitative and qualitative re-
search paradigms (Hughes, 1958; Outhwaite, 1975, 1983; Smith, 1983; Smith
and Heshusius, 1986). As the status and visibility of the qualitative research
paradigm rose between the turn of the 20th century and World War II,
positivism in its purist form (i.e., logical positivism) in the social and
behavioral sciences became increasingly discredited (Tashakkori and Teddlie,
1998).
The third research methodology phase in the social and behavioral sci-
ences came to the forefront during the late 1950s and 1960s. This phase,
which grew out of an attempt to reject some of the tenets of logical posi-
tivism, saw the emergence of post-positivism (e.g., Hanson, 1958; Popper,
1959; Campbell and Stanley, 1966). Post-positivism represented a compro-
mise between the quantitative and qualitative research paradigms. While
believing that reality is constructed and that research is value-laden, post-
positivists also believed that some relatively stable relationships exist.
However, they tended to emphasize the importance of the scientific method
in general and methodological appropriateness in particular (Cook and
Campbell, 1979; Smith, 1994).
During this third phase, more radical philosophies also arose. These
philosophies, which included post-structuralism and post-modernism, con-
tended even more strongly that no objective social reality existed; rather, they
maintained that multiple realities existed, such that interpretation was
inherently subjective.
Epistemological differences between the paradigms are based on the
relationship between the researcher and the object of study (e.g., a
participant). Whereas positivists assert that researchers should
separate themselves from the object of study, interpretivists contend that
these two entities are dependent on one another and that qualitative
researchers should take advantage of this relationship better to understand
phenomena. Finally, axiological differences focus on the role of values in
research. Positivists maintain that research should be value-free, whereas
interpretivists posit that research is influenced to a great extent by the values
of the researcher.
These major differences in understanding the world have caused great rifts
between positivists and interpretivists (Creswell, 1998). On both sides, many
staunch advocates believe that due to these major differences in beliefs, there
is no middle ground at which to meet. Instead, advocates of positivist and
interpretivist paradigms believe their way is the only way, and thus, cannot
see beyond their own back yards.
The most disturbing feature of the paradigm wars is the unending focus on
the two orientations. These orientations have evolved into subcultures in the
research world: the subculture of positivistic quantitative thinking and the
subculture of interpretivist qualitative thinking (Sieber, 1973). As noted by
Newman and Benz (1998), rather than representing bi-polar opposites,
quantitative and qualitative research represent an interactive continuum.
Even though there is a substantial rift between the two paradigms, there are
many more similarities than there are differences between these two orien-
tations.
One of the most basic similarities between the two paradigms is that they
both include the use of research questions. Furthermore, the research ques-
tions in both paradigms are addressed through some type of observation.
Sechrest and Sidani (1995: 78) describe how both paradigms ‘‘describe their
data, construct explanatory arguments from their data, and speculate about
why the outcomes they observed happened as they did’’.
Another similarity is how both paradigms interpret data. Quantitative
researchers use an array of statistical procedures and generalizations to
determine what their data mean, whereas qualitative researchers use phe-
nomenological techniques and their worldviews to extract meaning. That is,
researchers from both paradigms use analytical techniques to find meaning
(Dzurec and Abraham, 1993). In addition, techniques to verify data are
utilized by researchers in both paradigms.
Both sets of researchers attempt to reduce the dimensionality of their data.
For example, quantitative researchers use data-reduction methods such as
factor analysis and cluster analysis, whereas interpretivists conduct thematic
analyses (Onwuegbuzie, 2003). As such, factors that are derived from mul-
tivariate analyses are analogous to emergent themes that are extracted from
thematic analyses.
As outlined above, similarities between quantitative and qualitative
research abound. Moreover, regardless of epistemology, all research in the
behavioral and social sciences represents an attempt to understand human
behavior (Onwuegbuzie and Leech, 2004a). Thus, it is clear that if differences
prevail between quantitative and qualitative researchers, these discrepancies
do not arise from different goals. Instead they occur because the two groups
of investigators have operationalized their strategies differently for reaching
these goals (Dzurec and Abraham, 1993). This suggests that methodological
pluralism should be enhanced. The best way to accomplish this is for as many
investigators as possible to become pragmatist researchers.
Although changes are occurring in many research textbooks, most of these
books are still dominated by the quantitative research paradigm.
Encouragingly, however, some
research methodology textbooks now contain a section (e.g., McMillan and
Schumacher, 2001; Gay and Airasian, 2003) or even a whole chapter
(e.g., Punch, 1999; Creswell, 2002; Johnson and Christensen, 2004) devoted
to mixed methodology research. Nevertheless, for the remaining chapters in
most textbooks, the quantitative and qualitative approaches are presented
separately. Unfortunately, when these two sets of methods are presented in
separate sections and chapters, research methodology instructors who possess
little background or knowledge in one paradigm may not be able to resist the
temptation to omit one of the sections of the research text. Students enrolled
in such one-dimensional classes then would not be educated to become
pragmatist researchers.
It could be argued that separating quantitative and qualitative techniques
makes the book read as if it contains two books in one. Further, by sepa-
rating the two paradigms in research textbooks, students may form the
impression that research represents a dichotomy of choices rather than an
integrative, interactive, and systematic process for the purpose of generating
new knowledge or validating or refuting existing knowledge. Moreover, such
separation does little to advance epistemological ecumenism. Yet, not only is
such a quantitative–qualitative dichotomy false, but as Miles and Huberman
(1984: 21) stated, ‘‘epistemological purity doesn’t get research done’’.
Moreover, as advanced by methodologists (e.g., Newman and Benz, 1998;
Onwuegbuzie, 2003), rather than representing a dichotomy, quantitative and
qualitative research traditions lie on an epistemological continuum. In fact,
all the various dichotomies that are used to distinguish quantitative and
qualitative paradigms should be re-conceptualized as lying on continua.
These continua include realism vs. idealism, objective vs. subjective, imper-
sonal vs. personal, deductive reasoning vs. inductive reasoning, logistic vs.
dialectic, rationalism vs. naturalism, reductionistic vs. holistic, generalization
vs. uniqueness, causal vs. acausal, macro vs. micro, quantifiers vs. describers,
and numbers vs. words. Understanding and using such a re-conceptualiza-
tion would allow all research methodology instructors to focus more on
teaching research strategies rather than on paradigmatic issues.
The prevailing practice of presenting quantitative and qualitative methods
in separate sections of textbooks likely has shaped the way many research
methodology instructors teach their classes. As stated by Tashakkori and Teddlie
(2003):
Although no research data are available, one may assume that pedagogy
has undergone the same transitions as well; that is, that current teachers
of research methods cover both types of methods in their general
research courses, and do so in a separate manner. (p. 63)
We agree with Tashakkori and Teddlie that research methods textbooks and
courses should be presented in a more integrated and interactive manner.
Further, we agree that ‘‘those who teach social/behavioral research meth-
odology have to stop identifying themselves as qualitative or quantitative
researchers’’ (p. 22). Also, we applaud these authors for providing a frame-
work for accomplishing this (cf. Tashakkori and Teddlie, 1998, 2003).
However, our position is even stronger. Specifically, we believe that the best
way to develop pragmatist researchers is by eliminating quantitative research
methodology courses (e.g., including statistics courses) and qualitative re-
search methodology courses from curricula and replacing these with research
methodology courses at different levels that simultaneously teach both
quantitative and qualitative techniques within a mixed methodological
framework. Further, we advocate that the terms ‘‘quantitative’’ and ‘‘quali-
tative’’ be eliminated from research textbooks as much as possible. Next, we
provide a framework for reaching our goals.
Once the research objective has been determined, the next step is to identify
the purpose of the research. The research purpose indicates what will be/was
studied. More than one research purpose can ensue from a research objective.
Clearly, there is no need to distinguish the quantitative and qualitative
research paradigms here.
Alongside the research purpose is the research question, which also stems
from the research objective. Just as all studies contain one or more research
purposes, they include one or more research questions, although these
questions may not be explicitly stated in the final report. Regardless of
paradigm, research questions are open-ended general questions being asked
by the investigator(s). Research questions can be exploratory-based or
confirmatory-based.
TAKING THE ‘‘Q’’ OUT OF RESEARCH 279
2.4. Data Collection
The two central issues routinely presented in research texts that pertain to
data collection are sampling and instrumentation. Chapters on these two
topics appear in virtually all research methodology textbooks. We will now
discuss how we believe these sections/chapters should be re-organized.
2.4.1. Sampling
In the quantitative research section of textbooks, authors correctly differ-
entiate probability (i.e., random) sampling designs from non-probability
(non-random) sampling designs. Typically, four probability sampling de-
signs are presented: simple random sampling, stratified random sampling,
cluster random sampling, and systematic random sampling. In addition,
multistage random sampling designs are discussed, comprising some com-
bination of these four designs (e.g., Ary et al., 2002; Gay and Airasian,
2003). In addition, some combination of the following four non-probability
sampling designs commonly is presented: convenience/accidental/haphazard
sampling designs, purposive/judgmental sampling designs, quota sampling
designs, and network/snowball sampling designs. However, because these
eight designs are presented as being used for the purpose of generalizing
findings from the sample to the population, these sampling schemes are
associated exclusively with the quantitative research paradigm.
Thus, clearly, the cut points for minimum sample sizes appearing in intro-
ductory-level textbooks need to be revised.
Disturbingly, typically no discussion takes place in introductory-level re-
search methodology textbooks about sample sizes needed for qualitative
research designs. Yet, as contended by Onwuegbuzie and Leech (in press-b),
2.4.2. Instrumentation
Selecting measuring instruments plays a central role in all research textbooks.
However, discussion of instrumentation in quantitative research always is
kept separate from any discussion of data collection tools in qualitative
research. Yet, an increasing number of instruments generate both closed-ended
data (e.g., attitudinal scores) and open-ended data (e.g., words). Moreover, data
collection concepts and issues pertaining to the quantitative paradigm are
treated as being diametrically opposite to those relating to the qualitative
paradigm. This practice is disturbing because it leads to the creation of
assumptions that are grossly in error. For example, research textbooks ad-
vance the myth that quantitative data methods are objective and qualitative
data procedures are subjective. Yet, the former could not be farther from the
truth (Onwuegbuzie, 2000a).
Presenting separate chapters for quantitative analyses and for qualitative
analyses may appear logical; this can be misleading because it gives the
impression that statistical analyses should be used
exclusively for quantitative studies and qualitative data analyses should be
used solely for qualitative inquiries. In particular, it does not take advantage
of the fact that both statistical analyses and qualitative data analyses can be
used to explore and to confirm the collected data.
Consequently, we believe that the type of analysis selected should be
driven by the research objective and not by the research paradigm. If the
research objective is exploratory, then quantitative exploratory data analyses
(e.g., descriptive statistics, exploratory factor analysis, cluster analysis) and
exploratory qualitative data analyses (e.g., traditional thematic analyses)
could be used. On the other hand, if the research objective is confirmatory,
then quantitative data-analytical techniques can be selected from the array of
inferential statistics. Also, qualitative data-analytic methods can be used that
involve confirmatory thematic analyses, in which replication qualitative
studies are conducted to assess the replicability of previous emergent themes
(i.e., research driven) or to test an extant theory (i.e., theory driven), when
appropriate. Thus, the research objective unites quantitative and qualitative
data analytical techniques under the same framework (Onwuegbuzie and
Teddlie, 2003).
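The objective-driven selection just described can be summarized as a simple lookup. The following is a minimal sketch: the technique lists echo the examples in this section, while the names `ANALYSES` and `candidate_analyses` are ours, not the authors'.

```python
# Candidate analyses keyed by research objective, pairing quantitative
# and qualitative techniques under one framework, as this section suggests.
ANALYSES = {
    "exploratory": [
        "descriptive statistics",
        "exploratory factor analysis",
        "cluster analysis",
        "exploratory thematic analysis",
    ],
    "confirmatory": [
        "inferential statistics",
        "confirmatory thematic analysis",
    ],
}

def candidate_analyses(objective):
    """Return candidate data-analysis techniques for a research objective;
    a mixed (exploratory and confirmatory) objective draws on both lists."""
    if objective == "mixed":
        return ANALYSES["exploratory"] + ANALYSES["confirmatory"]
    return list(ANALYSES[objective])
```

The point of the sketch is that the key is the research objective, never the paradigm: quantitative and qualitative techniques sit side by side in each list.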
In addition, as recommended by several methodologists (e.g., Patton,
1990; Miles and Huberman, 1994; Tashakkori and Teddlie, 1998;
Onwuegbuzie and Teddlie, 2003), analysts can convert one type of data to
another. Specifically, Tashakkori and Teddlie (1998) use the term
quantitizing to describe the conversion of qualitative data (e.g., interview responses)
to quantitative data (e.g., frequencies). They use the term qualitizing to de-
note the conversion of quantitative data (e.g., test scores) to qualitative data
(e.g., profiles). Further, Onwuegbuzie (2003) advocates the use of effect sizes
for qualitative data, which, in its most basic form, involves obtaining counts
of the frequency of an observed phenomenon. We agree with Tashakkori and
Teddlie (2003) that using these conversions breaks down the quantitative–
qualitative dichotomy.
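The two conversions are simple enough to sketch in code. In this illustration, the coding scheme, score cut points, and profile labels are hypothetical, not taken from the sources cited above:

```python
from collections import Counter

def quantitize(coded_segments):
    """Quantitizing: convert qualitative codes (e.g., themes assigned to
    interview segments) into quantitative data (frequencies per theme).
    The counts also serve as a basic qualitative 'effect size'."""
    return Counter(coded_segments)

def qualitize(score):
    """Qualitizing: convert a quantitative test score into a narrative
    profile label. The cut points and labels here are hypothetical."""
    if score >= 85:
        return "high achiever"
    if score >= 60:
        return "typical achiever"
    return "struggling achiever"

# Hypothetical coded interview data and a hypothetical test score.
codes = ["isolation", "support", "isolation", "workload", "isolation"]
frequencies = quantitize(codes)   # "isolation" occurs 3 times
profile = qualitize(92)           # "high achiever"
```

Either direction leaves the researcher with both numeric and narrative representations of the same observations, which is precisely why such conversions blur the quantitative–qualitative boundary.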
If a study’s research objective is both exploratory and confirmatory, then
mixed-methodological data-analysis techniques could be used. Onwuegbuzie
and Teddlie (2003) identified seven stages of the mixed-methodological
data analysis process. These are (a) data reduction, (b) data
display, (c) data transformation, (d) data correlation, (e) data consolida-
tion, (f) data comparison, and (g) data integration. Data reduction, the
initial stage, involves reducing the dimensionality of the quantitative data
(e.g., via descriptive statistics, exploratory factor analysis) and qualitative
data (e.g., via exploratory thematic analysis, memoing). Data display, the
second stage, involves describing pictorially the quantitative data (e.g., ta-
bles, graphs) and qualitative data (e.g., matrices, charts, graphs, networks,
lists, rubrics, and Venn diagrams). This stage is followed by data trans-
formation, the third stage, in which data are quantitized and/or qualitized.
The fourth stage is represented by data correlation, in which the quanti-
tative data are correlated with the qualitized data. This is followed by data
consolidation, the fifth stage, whereby both quantitative and qualitative
data are combined to create new or consolidated variables or data sets. The
sixth stage, data comparison, involves comparing data from the quantita-
tive and qualitative data sources. Data integration marks the seventh stage
of Onwuegbuzie and Teddlie’s model, whereby both quantitative and
qualitative data are integrated into a coherent whole, or two separate sets
(i.e., quantitative and qualitative) of coherent wholes. Onwuegbuzie and
Teddlie (2003) note, however, that although these seven stages are some-
what sequential, they are not linear. Including such mixed-methods data-
analytic frameworks in this chapter will help to break down further the
barriers between quantitative and qualitative approaches that currently
prevail in research textbooks.
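The seven stages above can be sketched as a toy, strictly sequential pipeline. As Onwuegbuzie and Teddlie note, real analyses iterate among stages rather than proceeding linearly, and every operation and data structure below is a placeholder of our own devising:

```python
from collections import Counter
from statistics import mean

def mixed_methods_analysis(scores, coded_segments):
    """Toy walk-through of the seven-stage mixed-methods analysis model,
    applied to test scores (quantitative strand) and coded interview
    segments (qualitative strand). All operations are illustrative only."""
    # (a) Data reduction: summarize each strand's dimensionality.
    score_summary = {"mean": mean(scores), "n": len(scores)}
    theme_counts = Counter(coded_segments)
    # (b) Data display: lay both summaries out side by side (here, a dict
    # standing in for tables, matrices, or charts).
    display = {"quantitative": score_summary, "qualitative": dict(theme_counts)}
    # (c) Data transformation: quantitize the themes into frequencies.
    quantitized = dict(theme_counts)
    # (d) Data correlation: relate quantitative data to the quantitized data
    # (placeholder: pair the score mean with the modal theme's count).
    top_theme, top_count = theme_counts.most_common(1)[0]
    correlation_input = (score_summary["mean"], top_count)
    # (e) Data consolidation: combine strands into a new, joint variable.
    consolidated = {"dominant_theme": top_theme,
                    "mean_score": score_summary["mean"]}
    # (f) Data comparison: compare data from the two sources.
    comparison = {"more_scores_than_segments": len(scores) > len(coded_segments)}
    # (g) Data integration: assemble everything into one coherent whole.
    return {"display": display, "quantitized": quantitized,
            "correlation_input": correlation_input,
            "consolidated": consolidated, "comparison": comparison}
```

Even in this caricature, the quantitative and qualitative strands pass through every stage together, which is the model's central pedagogical point.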
Interpretations of findings are only as good as the data are valid or legiti-
mate. Thus, validity represents the most important stage of the research
process. In recognition of this fact, research text authors tend to provide
detailed discussion of validity in quantitative research. This discussion usu-
ally includes a detailed explanation of internal and external validity. How-
ever, many introductory-level research textbooks pay scant attention to
legitimation or verification in qualitative research. When validity is discussed,
it is presented in direct contrast to the discussion of validity in quantitative
research, thereby representing yet another dichotomy. Yet, comparing
components of validity/legitimation between quantitative and qualitative
design reveals several similarities. In particular, legitimation for both para-
digms involves eliminative induction, which is a systematic attempt to
eliminate rival hypotheses until only one hypothesis remains as a viable
explanation (e.g., Manicas and Kruger, 1976). In addition, it could be argued
that some of the validity components are interchangeable. For example,
reactive arrangements (the feelings and attitudes of the participants involved),
which traditionally have been associated with the quantitative paradigm, also
are applicable to the qualitative paradigm.
Similarly, theoretical validity (i.e., the degree to which a theoretical
explanation developed from research findings fits the data) and generaliz-
ability (i.e., the extent to which a researcher can generalize the account of a
particular situation or population to other individuals, times, settings, or
context), both legitimation concerns in qualitative studies (Maxwell, 1992),
also are applicable in quantitative investigations.
Once all conclusions have been formulated and assessed for validity, the
researcher is ready to communicate the findings. The writing styles of
qualitative and quantitative researchers often differ in major ways. In
particular, quantitative researchers tend to advocate rhetorical neutrality,
consisting of an exclusively formal writing style utilizing the impersonal voice
and specific terminology. In contrast, the writing style of qualitative
researchers typically is informal, characterized by use of the personal voice
and limited definitions (Onwuegbuzie and Johnson, 2004; Onwuegbuzie and
Leech 2005a).
Nevertheless, quantitative and qualitative research reports contain many
similarities. Regardless of paradigm, a well-written report should be highly
descriptive of all of the first seven phases of the research process outlined earlier
(i.e., from formulating a research problem/objective to interpreting/validat-
ing data). Most importantly, researchers, regardless of orientation, always
should contextualize their reports. That is, they should carefully communi-
cate the context in which the study took place. Although qualitative
researchers routinely accomplish this, many quantitative reports are devoid
of adequate description of context. As noted by Tashakkori and Teddlie
(2003: 72): ‘‘We are struck by how much some of the ‘quantitative’ papers are
void of any reference to the cultural context of the behaviors/phenomenon
under study’’. Contextualization not only helps researchers to examine how
the present findings relate to previous results, but it also helps the readers to
know the extent to which they can generalize the findings. Also, where
possible, both quantitative and qualitative research reports should be
holistic.
Consequently, there is no reason why discussion about quantitative and
qualitative research reports should take place in separate chapters in intro-
ductory research methods texts. In fact, we recommend that textbook au-
thors emphasize the importance of all components of research reports being
aligned to the research objective. Again, keeping the research objective in
mind throughout the research report can deter students from being distracted
by the quantitative–qualitative dichotomies.
References
American Educational Research Association, American Psychological Association, and Na-
tional Council on Measurement in Education (1999). Standards for Educational and Psy-
chological Testing, Revised edn. Washington, DC: American Educational Research Association.
Ary, D., Jacobs, L. C. & Razavieh, A. (1972). Introduction to Research in Education. New
York, NY: Holt, Rinehart and Winston.
Ary, D., Jacobs, L. C. & Razavieh, A. (2002). Introduction to Research in Education (6th edn).
Belmont, CA: Wadsworth/Thomson Learning.
Bordens, K. S. & Abbott, B. B. (1988). Research Design and Methods: A Process Approach.
Mountain View, CA: Mayfield.
Bryman, A. (1984). The debate about quantitative and qualitative research: a question of
method or epistemology? British Journal of Sociology 35: 78–92.
Campbell, D. T. & Stanley, J. (1966). Experimental and Quasi-Experimental Design for
Research. Chicago, IL: Rand McNally.
Caracelli, V. W. & Greene, J. C. (1993). Data analysis strategies for mixed-method evaluation
designs. Educational Evaluation and Policy Analysis 15: 195–207.
Cohen, J. (1965). Some statistical issues in psychological research. In: B. B. Wolman (ed.),
Handbook of Clinical Psychology. New York: McGraw-Hill, pp. 95–121.
Collins, R. (1984). Statistics versus words. In: R. Collins (ed.), Sociological Theory. San
Francisco, CA: Jossey-Bass, pp. 329–362.
Connolly, P. (1998). ‘Dancing to the wrong tune’: ethnography generalization and research on
racism in schools. In: P. Connolly and B. Troyna (eds.), Researching Racism in Education:
Politics, Theory, and Practice. Buckingham, UK: Open University Press.
Cook, T. D. & Campbell, D. T. (1979). Quasiexperimentation: Design and Analysis Issues for
Field Settings. Boston, MA: Houghton Mifflin.
Gay, L. R. (1996). Educational Research: Competencies for Analysis and Application, 5th edn.
Englewood Cliffs, NJ: Merrill.
Gay, L. R. & Airasian, P. (2000). Educational Research: Competencies for Analysis and
Application, 6th edn. Upper Saddle River, NJ: Prentice-Hall.
Gay, L. R. & Airasian, P. (2003). Educational Research: Competencies for Analysis and
Application, 7th edn. Upper Saddle River, NJ: Pearson Education.
Greene, J. C., Caracelli, V. J. & Graham, W. F. (1989). Toward a conceptual framework for
mixed-method evaluation designs. Educational Evaluation and Policy Analysis 11: 255–274.
Guba, E. G. (1987). What have we learned about naturalistic evaluation? Evaluation Practice,
8: 23–43.
Hanson, N. R. (1958). Patterns of Discovery: An Inquiry into the Conceptual Foundations of
Science. Cambridge: Cambridge University Press.
Hodges, H. (1944). Wilhelm Dilthey: An Introduction. London: Routledge and Kegan Paul.
Hodges, H. (1952). The Philosophy of Wilhelm Dilthey. London: Routledge and Kegan Paul.
Howe, K. R. (1988). Against the quantitative–qualitative incompatibility thesis or dogmas die
hard. Educational Researcher 17: 10–16.
Hughes, H. (1958). Consciousness and Society. New York: Knopf.
Johnson, R. B. & Christensen, L. B. (2004). Educational Research: Quantitative, Qualitative,
and Mixed Approaches. Boston, MA: Allyn and Bacon.
Johnson, R. B. & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm
whose time has come. Educational Researcher 33(7): 14–26.
Leary, M. R. (1995). Introduction to Behavioral Research Methods. Pacific Grove, CA: Brooks/
Cole.
Liebert, R. M. & Liebert, L. L. (1995). Science and Behavior: An Introduction to Methods of
Psychological Research. Englewood Cliffs, NJ: Prentice-Hall.
Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage.
Madill, A., Jordan, A. & Shirley, C. (2000). Objectivity and reliability in qualitative analysis:
Realist, contextualist and radical constructionist epistemologies. The British Journal of
Psychology 91: 1–20.
Manicas, P. Z. & Kruger, A. N. (1976). Logic: The Essentials. New York: McGraw-Hill.
Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educa-
tional Review 62: 279–299.
Maxwell, J. A. & Loomis, D. M. (2002). Mixed methods design: An alternative approach. In:
A. Tashakkori and C. Teddlie (eds.), Handbook of Mixed Methods in Social and Behavioral
Research. Thousand Oaks, CA: Sage, pp. 241–271.
McMillan, J. H. & Schumacher, S. (2001). Research in Education: A Conceptual Introduction,
5th edn. New York, NY: Longman.
McNemar, Q. (1960). At random: Sense and nonsense. American Psychologist 15: 295–300.
Meier, S. T. & Davis, S. R. (1990). Trends in reporting psychometric properties of scales used
in counseling psychology research. Journal of Counseling Psychology 37: 113–115.
Miles, M. & Huberman, A. M. (1984). Qualitative Data Analysis: A Sourcebook of New
Methods. Beverly Hills, CA: Sage.
Miles, M. & Huberman, A. M. (1994). Qualitative Data Analysis: An Expanded Sourcebook,
2nd edn. Thousand Oaks, CA: Sage.
Minor, L. C., Onwuegbuzie, A. J., Witcher, A. E. & James, T. L. (2002). Preservice teachers’
educational beliefs and their perceptions of characteristics of effective teachers. Journal of
Educational Research 96: 116–127.
Newman, I. & Benz, C. R. (1998). Qualitative–Quantitative Research Methodology: Exploring
the Interactive Continuum. Carbondale, IL: Southern Illinois University Press.
Onwuegbuzie, A. J. & Leech, N. L. (in press-c). Validity and qualitative research: An oxy-
moron? Quality & Quantity: International Journal of Methodology.
Onwuegbuzie, A. J. & Teddlie, C. (2003). A framework for analyzing data in mixed methods
research. In: A. Tashakkori and C. Teddlie (eds.), Handbook of Mixed Methods in Social
and Behavioral Research. Thousand Oaks, CA: Sage, pp. 351–383.
Outhwaite, W. (1975). Understanding Social Life: The Method Called Verstehen. London:
Allen and Unwin.
Outhwaite, W. (1983). Concept Formation in Social Science. London: Routledge and Kegan Paul.
Patton, M. Q. (1990). Qualitative Research and Evaluation Methods, 2nd edn. Newbury Park,
CA: Sage.
Popper, K. R. (1959). The Logic of Scientific Discovery. New York: Basic Books.
Punch, K. F. (1999). Introduction to Social Research: Quantitative and Qualitative Approaches.
Thousand Oaks, CA: Sage.
QSR International Pty Ltd (2002). N6 (Non-numerical Unstructured Data Indexing Searching
& Theorizing) Qualitative Data Analysis Program. (Version 6.0) [Computer software].
Melbourne, Australia: QSR International Pty Ltd.
Reichardt, C. S. & Rallis, S. F. (1994). Qualitative and quantitative inquiries are not
incompatible: A call for a new partnership. In: C. S. Reichardt & S. F. Rallis (eds.), The
Qualitative–Quantitative Debate: New Perspectives. San Francisco, CA: Jossey-Bass,
pp. 85–92.
Rosnow, R. L. & Rosenthal, R. R. (1996). Beginning Behavioral Research: A Conceptual
Primer. Englewood Cliffs, NJ: Prentice Hall.
Salkind, N. J. (2000). Exploring Research. Upper Saddle River, NJ: Prentice Hall.
Sandelowski, M. (2001). Real qualitative researchers do not count: The use of numbers in
qualitative research. Research in Nursing & Health 24: 230–240.
Sechrest, L. & Sidani, S. (1995). Quantitative and qualitative methods: Is there an alternative?
Evaluation and Program Planning 18: 77–87.
Sieber, S. D. (1973). The integration of fieldwork and survey methods. American Journal of
Sociology 78: 1335–1359.
Smith, J. K. (1983). Quantitative versus qualitative research: An attempt to clarify the issue.
Educational Researcher 12: 6–13.
Smith, J. K. & Heshusius, L. (1986). Closing down the conversation: The end of the quan-
titative–qualitative debate among educational inquirers. Educational Researcher 15: 4–13.
Smith, M. L. (1994). Qualitative plus/versus quantitative: The last word. In: C. S. Reichardt &
S. F. Rallis (eds.), The Qualitative–Quantitative Debate: New Perspectives. San Francisco:
Jossey-Bass, pp. 37–44.
Tashakkori, A. & Teddlie, C. (1998). Mixed Methodology: Combining Qualitative and
Quantitative Approaches. Thousand Oaks, CA: Sage.
Tashakkori, A. & Teddlie, C. (2003). Issues and dilemmas in teaching research methods
courses in social and behavioral sciences: a US perspective. International Journal of Social
Research Methodology 6(1): 61–77.
Thompson, B. & Snyder, P. A. (1998). Statistical significance and reliability analyses in recent
JCD research articles. Journal of Counseling and Development 76: 436–441.
Thompson, B. & Vacha-Haase, T. (2000). Psychometrics is datametrics: The test is not reli-
able. Educational and Psychological Measurement 60: 174–195.
Vacha-Haase, T., Ness, C., Nilsson, J. & Reetz, D. (1999). Practices regarding reporting of
reliability coefficients. A review of three journals. The Journal of Experimental Education
67: 335–341.
Wallen, N. E. & Fraenkel, J. R. (2001). Educational Research: A Guide to the Process.
Mahwah, NJ: Lawrence Erlbaum.
Wilkinson, L. & the Task Force on Statistical Inference. (1999). Statistical methods in psy-
chology journals: Guidelines and explanations. American Psychologist 54: 594–604.
Willems, E. P. & Raush, H. L. (1969). Naturalistic Viewpoints in Psychological Research. New
York: Holt, Rinehart, & Winston.
Willson, V. L. (1980). Research techniques in AERJ articles: 1969 to 1978. Educational
Researcher 9(6): 5–10.
Witcher, A. E., Onwuegbuzie, A. J. & Minor, L. C. (2001). Characteristics of effective
teachers: Perceptions of preservice teachers. Research in the Schools 8: 45–57.