Dialogue
Influences on Evaluation Quality

Leslie J. Cooksy, University of Delaware, Newark, DE, USA
Melvin M. Mark, The Pennsylvania State University, University Park, PA, USA

American Journal of Evaluation, 33(1), 79-87. DOI: 10.1177/1098214011426470
Abstract
Attention to evaluation quality is commonplace, even if sometimes implicit. Drawing on her 2010
Presidential Address to the American Evaluation Association, Leslie Cooksy suggests that evaluation
quality depends, at least in part, on the intersection of three factors: (a) evaluator competency, (b)
aspects of the evaluation environment or context, and (c) the level and nature of supportive
resources that are available in the evaluation community. In a brief reaction to her views, Mel Mark
discusses selected implications of Cooksy's approach and comments on the notion of evaluation
quality itself.
Keywords
competency, context, evaluation quality, professional development
Criteria for judging evaluation quality change with different evaluation paradigms and different
evaluands (Cooksy & Caracelli, 2009). The Program Evaluation Standards (Yarbrough, Shulha,
Hopson, & Caruthers, 2011) and the American Evaluation Association (AEA, 2004) guiding prin-
ciples are touchstones for many evaluators. While they offer a comprehensive vision of evaluation
quality, I use a short list of necessary (but not sufficient) technical criteria as a starting point for
assessing the quality of my own evaluations and those of others. This short list includes whether the
right methods were used for the evaluation objectives and whether sufficient data were collected
with appropriate rigor. Sufficiency of evidence is defined by the Government Auditing Standards
as the quantity of the evidence used to support the findings and conclusions (U.S. Government
Accountability Office [GAO], 2007). Rigor, as I define it, is a multidimensional construct, subsuming
the ideas of systematic inquiry (AEA, 2004; Stevahn, King, Ghere, & Minnema, 2005); accuracy, in
which we are striving to portray and interpret the evidence, whatever its nature, without distortion;
and principled action, particularly in the sense communicated by the propriety standards (avoiding
conflicts of interest and treating evaluation participants with respect, e.g., Yarbrough et al., 2011). This
is a core set of criteria that I use as a starting point; it is neither a complete list nor necessarily where a
client or other stakeholder would start.
Whether using a short list or a comprehensive set of criteria to define evaluation quality, meeting
standards of quality is often challenging. Achieving quality is, at least in part, an outcome of the
intersection of practitioner competencies, evaluation context, and supportive resources that evalua-
tors can access through participation in a professional community. In an ideal world, all evaluators
would have a high degree of competency, an environment that provides opportunities for good work,
and a range of resources and supports provided by a vibrant community of evaluation theorists,
researchers, and practitioners. This article discusses some of the characteristics of and constraints
on each of these influences on quality, drawing on my own experience as an evaluator in a variety
of settings and program areas.
Evaluator Competency
The competence principle of the AEA (2004) guiding principles for evaluators refers to our edu-
cation, abilities, skills, and experience. Evaluators have been discussing what competencies are
essential for conducting quality evaluation and defining oneself as an evaluator for some time. In
1996, Scriven identified 10 necessary competencies that an evaluator needs to know and be able
to apply:
1. Basic qualitative and quantitative methodologies.
2. Validity theory, generalizability theory, meta-analysis.
3. Legal constraints on data control and access, funds use, and personnel treatment (including the
rights of human subjects).
4. Personnel evaluation.
5. Ethical analysis.
6. Needs assessment.
7. Cost analysis.
8. Internal synthesis models and skills.
9. Conceptual geography.
10. Evaluation-specific report design, construction, and presentation.
Scriven concluded that someone who "can't competently do technically challenging evaluation tasks is lexically excluded from professional status as an evaluator, even if they can and regularly do perform the ancillary tasks of a professional" (p. 159); however, he tempered this by acknowledging
that teams are generally involved in evaluation.
More recently, Stevahn's research with King and colleagues (2005) identified a set of competen-
cies in six categories:
1. Professional practice includes such competencies as the application of professional evaluation
standards, respect for stakeholders, and contributions to the knowledge base of evaluation.
2. Systematic inquiry includes competencies in quantitative, qualitative, and mixed methods;
developing questions; making judgments; reporting; and conducting meta-evaluation.
3. Situational analysis refers to describing the program, addressing conflicts, attending to issues of
evaluation use, and organizational change.
4. Project management includes competencies like the ability to respond to requests for proposals,
writing formal agreements, preparing budgets, and so on.
5. Reflective practice has items such as awareness of self, reflection on personal evaluation practice,
and pursuit of professional development.
6. Interpersonal competence is comprised of skills in communication, conflict resolution,
cross-cultural competence, and others.
Altogether there are 41 competencies; almost half (20) are in systematic inquiry.
In addition to competencies, our guiding principles indicate that a good evaluator, an evaluator
capable of conducting a high-quality evaluation, must also have certain ways of being in the world,
including being honest, having integrity, and respecting people (AEA, 2004). Schwandt (2008)
refers to these as dispositions and includes the dispositions to be judicious in one's claims,
truth-seeking, open-minded, and intellectually humble.
How many of us have all of these competencies and dispositions? When I reflect on my own prac-
tice, I see some evidence that I have the dispositions identified by Schwandt (2008) and competen-
cies specified by Stevahn, King, Ghere, and Minnema (2005) that relate to frameworks and
sensitivities that one brings to an evaluation. But I feel very weak in some of the things that I believe
are important technical skills. These can mostly be found on Scriven's list (1996): Cost analysis is
just one example. Although I have the competence and disposition to have a pretty good sense of
what I don't know and the contact information and goodwill of some very talented people, I have
an acute sense of the gap between what I know and what I could or should know. Moreover, I feel
the gap widening as the number of resources and information about new ways of thinking and new
methodologies increase exponentially, from AEA's 365 Tip-a-Day e-mail alerts to the online
and print texts, journals, and other materials. These resources are available to help us practice our
craft more thoughtfully and more skillfully, but the sheer volume of information can be daunting.
Perhaps a subcompetency should be added to Stevahn et al.'s competency on professional development: the ability to quickly identify and absorb the most relevant and useful resources on a topic
(methods, models, program areas, etc.).
Evaluation Environment
Evaluation quality is also influenced by the context in which we work. Evaluation policies, one
important contextual variable, include decisions about everything related to evaluation: funding, objectives, uses, audiences, and so on (Trochim, 2009). The policies may be implicit, as when insuf-
ficient funds are allocated to evaluation, or explicit, as when policies favor one approach or another.
For example, in July 2010, Institute of Education Sciences (IES) Director John Easton said that IES
would be expanding its definition of rigorous methods and ensuring that evaluation questions drove
methods choices, an explicit change in IES evaluation policy toward a more open definition of rigor.
Not surprisingly, with the short list of evaluation quality criteria that I provided at the beginning of
this article, I consider this a move toward a more supportive environment. Evaluation policies are
not necessarily either supportive or constraining: They can have elements of both. For example, the
Program Assessment Rating Tool used by the Office of Management and Budget in the Bush administration encouraged agency attention to evaluation (U.S. GAO, 2005), but was also considered by
some to define evaluation quality too narrowly (AEA Evaluation Policy Task Force, 2008). (The
original guidance for Office of Management and Budget (OMB) examiners emphasized rando-
mized controlled trials as evidence of quality, but this was later revised to acknowledge that the
design is not suitable or feasible for every program or purpose.)
Evaluation policies that support evaluation quality also include internal organizational strategies for
quality control. For example, the U.S. Government Accountability Office (GAO), an independent
agency of the U.S. Congress that performs audits and investigations of federal programs and activities,
routinely employs the evaluation quality control strategies of member checks with stakeholders and
indexing and referencing, a process similar to a confirmability audit (Lincoln & Guba, 1985;
Schwandt & Halpern, 1988). Indexing involves linking every claim of knowledge in the report to source
documents, whether these are interview write-ups, the results of statistical tests, or other data. Refer-
encing consists of an independent meta-evaluator (also a GAO employee but not involved in the
project) using the index to examine the confirmability of every one of the statements. Other quality
control strategies include checklists applied either by the evaluator or his or her supervisor, indepen-
dent entities that conduct critical reviews, and funding for external meta-evaluation.
The political context affects evaluation quality in ways other than evaluation policy (Chelimsky,
1987, 2009; Cronbach & Associates, 1980; Palumbo, 1987). A simple example of how politics can
influence quality, drawn from my own experience, is when a change in political leadership ends a
program in the middle of its evaluation. When the decision makers lost interest, the evaluation was
wrapped up quickly, failing to gather the sufficiency of evidence needed to warrant a conclusion
about the program's merit or worth. (Fortunately, because of other contextual issues, I was able
to provide some useful information to other stakeholders). In a more consequential example, House
(2008) identifies several ways in which political and economic interests can undermine the quality of
drug trials, concluding that evaluation rigor can be compromised by conflict of interest.
Economic considerations of supporting oneself and others can also play a role in the kind of evalua-
tions one chooses to do, which in turn can affect the extent to which they meet criteria of quality. For
example, many of the positions in the Delaware Education Research and Development Center, where I
work, are dependent on grants and contracts. As a result, I routinely take on evaluations that meet my
core evaluation quality criteria but would not meet some utility standards (Yarbrough et al., 2011). Spe-
cifically, many of the evaluations that I have been a part of involve program developers who are not
interested in evaluation except to the extent that it gets them funded and funding agencies that are
only seeking evidence that evaluation was conducted, without interest in how it was conducted or
what it found. Are there sometimes opportunities to be useful even in an inhospitable environment?
Certainly, and evaluations done purely for the sake of a sort of superficial accountability without
connection to a specific action are not inherently inconsequential (Mark, 2006). But consequential
and good are, to me, two different things. In my context, I cannot limit my practice to projects that
provide opportunities for excellence. I know I am not alone, but ideally, we would have many oppor-
tunities to work for clients and in organizations genuinely supportive of good evaluation practice.
Support From the Evaluation Community
While evaluation policy and other contextual factors may not always encourage evaluations of the
highest quality, evaluation centers and associations are working to provide resources and support
policies intended to improve evaluation quality. For example, Western Michigan University's
Evaluation Center website provides a variety of checklists that can be used for guiding and assessing
evaluations. In addition to checklists for the Program Evaluation Standards (Stufflebeam, 1999) and
Guiding Principles (Stufflebeam et al., 2005), the Western Michigan site also includes checklists on
the hallmarks of high-quality constructivist evaluation (Guba & Lincoln, 2001), deliberative dem-
ocratic evaluation (House & Howe, 2000), utilization-focused evaluation (Patton, 2008), and
other approaches. While these checklists may not fit on one page or use simple language, as recom-
mended in The Checklist Manifesto (Gawande, 2009), they distill and present both general standards
and standards for some specific approaches in a form that is easy to use. Other evaluation centers
support evaluation practitioners with tipsheets, guides, templates, and other resources.
Evaluation associations have a variety of approaches for encouraging evaluation quality. Through its
Evaluation Policy Task Force, AEA seeks to create a policy environment that supports evaluation quality by communicating the nature and utility of good evaluation work to policymakers (AEA, 2010). With its professional designations program, the Canadian Evaluation Society (CES) is using a different strategy. To become a CES credentialed evaluator, one needs to provide evidence of a graduate-level degree or certificate, at least 2 years of experience, and evidence demonstrating education and/or experience in the five domains of competencies for Canadian Evaluation Practice (Canadian Evaluation Society [CES],
2010). (These domains draw heavily from Stevahn et al., 2005.) The CES program serves as a way to
educate those who contract for evaluation about what competencies a skilled evaluator should have.
CES also promotes the ongoing development of evaluator quality through continuing education
requirements. Similarly, AEA and other professional associations focus on building skills and
strengthening appropriate dispositions through conferences, workshops, institutes, webinars, and
other instructional activities. For example, in addition to the guiding principles for evaluators, AEA
provides training materials on the guiding principles on its website (AEA, 2007). Support for the
program evaluation standards (Yarbrough et al., 2011) and public statements on such topics as the
importance of culturally competent evaluation are other AEA strategies for increasing competence
and creating a more supportive environment for high-quality evaluation.
These resources are not necessarily without controversy. For example, when another evaluation
group, the Network of Networks on Impact Evaluation, published its guidance on impact evaluation,
the attention given to causal attribution and the use of counterfactuals created both debate and
efforts at reconciliation (Chambers, Karlan, Ravallion, & Rogers, 2009; Leeuw & Vaessen,
2009). Because these controversies challenge our thinking and require us to consider our stance
on criteria of quality, they can reinforce the desirable dispositions of being open-minded and intel-
lectually humble, and thus support evaluation quality both directly through the resources provided
and indirectly through the discussions that follow.
Conclusion
The three influences on evaluation quality discussed above are interdependent. Competent evaluators
with a comprehensive sense of the field should be influencing evaluation policy; otherwise, the policies may undermine quality. Supportive policies should be in place to ensure that competence is rewarded and resources are available; otherwise, poor-quality evaluations may become the norm. It is easy to say that we need competent practitioners and supportive policies, but challenging to achieve, in part because evaluators do not always agree on the core characteristics of high-quality evaluation. We come from dif-
ferent disciplines, work in different contexts, and have different beliefs and values about what is
important in our work. Despite that, we share a common commitment to evaluation quality. To carry
this commitment out, each of us should identify our own set of key criteria for evaluation quality and
take time during and at the end of evaluations to assess the extent to which we are likely to, or have,
met them; in other words, we should do meta-evaluation. In addition, considering and identifying
the gaps in our knowledge, the ways in which policies have constrained us, and the resources that we
lack can lead us to actions such as seeking professional development, advocating for better evalua-
tion policies, and working with evaluation centers and associations to develop new resources.
Evaluation quality is never completely achieved. Our key criteria evolve as we gain experience and
knowledge. The evaluation context shifts: skills that were essential in one evaluation may not be relevant in the next, or a policy that was supportive in one instance may feel constraining in another.
As criteria evolve and contexts change, evaluators must continue to strive for the right combination
of competency, policies, and resources in order to achieve the elusive goal of evaluation quality.
References
American Evaluation Association. (2004, July). Guiding principles for evaluators. Retrieved from http://www.
eval.org/Publications/GuidingPrinciples.asp
American Evaluation Association. (2007). Guiding principles training package. Retrieved from http://www.
eval.org/GPTraining/GPTrainingOverview.asp
American Evaluation Association. (2010, October). An evaluation roadmap for a more effective government.
Retrieved from http://www.eval.org/EPTF/aea10.roadmap.101910.pdf
American Evaluation Association Evaluation Policy Task Force. (2008, March). Comments on what constitutes
strong evidence of a programs effectiveness? Retrieved from http://www.eval.org/aea08.omb.guidance.
responseF.pdf
Canadian Evaluation Society. (2010, April). Competencies for Canadian Evaluation Practice. Retrieved from
http://www.evaluationcanada.ca/txt/2_competencies_cdn_evaluation_practice.pdf
Chambers, R., Karlan, D., Ravallion, M., & Rogers, P. (2009). Designing impact evaluations: Different
perspectives. Working Paper 4. New Delhi, India: International Initiative for Impact Evaluation. Retrieved
from http://www.monitoreoyevaluacion.info/biblioteca/MVI_115.pdf
Chelimsky, E. (1987). The politics of evaluation. Society, 25, 24-32. doi:10.1007/BF02695393
Chelimsky, E. (2009). Integrating evaluation units into the political environment of government: The role of
evaluation policy. In W. M. K. Trochim, M. M. Mark, & L. J. Cooksy (Eds.), Evaluation policy and evalua-
tion practice. New Directions for Evaluation (Vol. 123, pp. 51-66). doi:10.1002/ev.305
Cooksy, L. J., & Caracelli, V. J. (2009). Metaevaluation in practice: Selection and application of criteria.
Journal of MultiDisciplinary Evaluation, 6, 1-15.
Cronbach, L. J., & Associates. (1980). Toward reform of program evaluation. San Francisco, CA: Jossey-Bass.
Easton, J. (2010). New research initiatives for IES. IES Research Conference Keynote. Retrieved June 29, 2010,
from http://ies.ed.gov/director/speeches2010/2010_06_29.asp
Gawande, A. (2009). The checklist manifesto: How to get things right. New York, NY: Metropolitan Books.
Guba, E. G., & Lincoln, Y. S. (2001). Guidelines and checklist for constructivist (a.k.a. Fourth Generation)
evaluation. Retrieved from http://www.wmich.edu/evalctr/archive_checklists/constructivisteval.pdf
House, E. R. (2008). Blowback: Consequences of evaluation for evaluation. American Journal of Evaluation,
29, 416-426. doi:10.1177/1098214008322640
House, E. R., & Howe, K. R. (2000). Deliberative democratic evaluation checklist. Retrieved from http://
www.wmich.edu/evalctr/archive_checklists/dd_checklist.PDF
Leeuw, F., & Vaessen, J. (2009). Impact evaluations and development: NONIE guidance on impact evaluation.
Washington, DC: Network of Networks on Impact Evaluation.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Thousand Oaks, CA: Sage.
Mark, M. M. (2006). The consequences of evaluation: Theory, research, and practice. Presidential address presented at the annual meeting of the American Evaluation Association, St. Louis, MO.
Palumbo, D. (Ed.). (1987). The politics of program evaluation. Thousand Oaks, CA: Sage.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
Schwandt, T. A. (2008). Educating for intelligent belief in evaluation. American Journal of Evaluation, 29,
139-150. doi:10.1177/1098214008316889
Schwandt, T. A., & Halpern, E. S. (1988). Linking auditing and metaevaluation: Enhancing quality in applied
research. Applied Social Research Methods Series, Volume 11. Thousand Oaks, CA: Sage.
Scriven, M. (1996). Types of evaluation and types of evaluator. American Journal of Evaluation, 17, 151-161.
doi:10.1177/109821409601700207
Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program
evaluators. American Journal of Evaluation, 26, 43-59. doi:10.1177/1098214004273180
Stufflebeam, D. L. (1999). Program evaluation metaevaluation checklist. Retrieved from http://www.wmich.
edu/evalctr/checklists/program_metaeval.pdf
Stufflebeam, D. L., Goodyear, L., Marquart, J., & Johnson, E. (2005). Guiding principles checklist for evaluating
evaluations. Retrieved from http://www.wmich.edu/evalctr/archive_checklists/guidingprinciples2005.pdf
Trochim, W. M. K. (2009). Evaluation policy and evaluation practice. New Directions for Evaluation, 123, 13-32.
doi:10.1002/ev.303
U.S. Government Accountability Office. (2005). Performance budgeting: PART focuses attention on program
performance, but more can be done to engage Congress (GAO-06-28). Washington, DC: Author.
U.S. Government Accountability Office. (2007). Government auditing standards (GAO-07-162G). Washington,
DC: Author.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards:
A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.
Influences on Evaluation Quality: Reflections and Elaborations
Melvin M. Mark
Cooksy (2011) calls evaluators' attention to the multifaceted attribute of evaluation quality. In one
sense, the issue of evaluation quality is simple and straightforward: Surely, we would all prefer to have
conducted (or otherwise be associated with) an evaluation that is widely judged to be of high quality,
rather than one of dubious or poor quality. In another sense, the issue of evaluation quality is complex:
Evaluation quality is multifaceted, dependent on context, potentially dependent also on one's perspec-
tive about evaluation, and thus perhaps difficult to assess. These complications, however, do not justify
ignoring evaluation quality. To the contrary, high evaluation quality remains our aspiration.
In a valuable article based on her 2010 American Evaluation Association Presidential Address,
Cooksy offers a concise and thoughtful overview of evaluation quality. In summary, Cooksy notes
that (a) there are standards (Yarbrough et al., 2011) and guiding principles (American Evaluation
Association [AEA], 2004) which offer a comprehensive vision of evaluation quality (p. 3), but
(b) she uses a shorter list of criteria as a starting point in judging quality: whether the right methods
were used for the evaluation objectives and whether sufficient data were collected with appropriate
rigor (p. 3). Cooksy further notes (c) that achieving evaluation quality depends at least in part on
three factors: (a) evaluator competencies, (b) the context in which evaluation takes place, and (c)
supportive resources from within the evaluation community. Finally, (d) Cooksy discusses important
aspects of each of these three factors.
In discussing evaluator competency, Cooksy notes a list of 10 competencies from Scriven (1996),
a list of 41 competencies organized in six categories by Stevahn, King, Ghere, and Minnema (2005),
and four attributes that Schwandt (2008) includes on a list of evaluator dispositions. Cooksy expli-
citly adds to the list the ability to climb a learning curve quickly, and she implicitly suggests adding
the ability of an evaluator to know what it is that he or she does not know. I would highlight three
additional evaluator competencies. First is the ability to make defensible judgments about what the
evaluation objectives should be in a given instance, and to contribute to relevant decision processes
in which evaluation objectives are selected and/or refined. A second is adaptability, the ability to
make changes in the face of shifting circumstances. Often this skill will be exhibited when chal-
lenges arise in practice, requiring skillful redeployment of evaluation resources (Fitzpatrick,
2008). A third is the ability to identify whose game you are in, for example, to assess early on
whether a funder is interested more in appearing interested in evidence-based decision making or
in getting a waiver from standard practices, rather than in learning from the evaluation. By seeing
that in a particular case, for instance, evaluation is a ritualistic exercise to the funder and program
leaders, the evaluator can exercise adaptability and reformulate the emphasis of the evaluation in
ways that might bring positive consequences nevertheless (a topic addressed subsequently). These
three competencies are to some extent present or implied in the lists Cooksy cites, but in my expe-
rience may deserve more attention than they receive.
In discussing contextual variables that affect evaluation quality, Cooksy highlights evaluation
policies, organizational protocols for evaluation quality control, the broader political context, and
economic considerations. With regard to economic considerations, Cooksy describes her situation
in a university-based Center where many people's jobs depend on a continuing stream of grants and
contracts. As a result, Cooksy explains, she routinely takes on projects that allow her to meet her
core evaluation quality criteria, but would not meet some utility standards of the Joint Committee
(Yarbrough, Shulha, Hopson, & Caruthers, 2011). In these projects, program developers and funders
see evaluation as a formal requirement to be met but have little if any interest in actual evaluation
use. Cooksy's description of such circumstances, which undoubtedly are familiar to many evalua-
tors, leads me to several observations.
First, the utilization-focused evaluator (Patton, 2008) would have the evaluator strive mightily to
find a happy intersection of evaluation quality and potential use. This might involve, for instance,
a more formative evaluation stance or the formative-on-steroids approach of developmental evalua-
tion (Patton, 2011). Second, an alternative approach, when direct use seems unlikely, is to focus the
evaluation on knowledge development of some kind (Mark, Henry, & Julnes, 2000). Any resulting
contribution of the evaluation to the knowledge base would mean that the evaluation had desirable
effects, even if the program developer and funder are not interested in use. Third, in my experience
funders do sometimes have potential uses for evaluation, even in circumstances in which they and
especially their grantees appear to be ritualistic in their approach to evaluation. In particular, a state
or federal agency may need to be ready to report to legislators about what type of clients received
which kind of services under the funding stream in question. The funding agency may also need to
be able to illustrate for legislators the benefits the funding brings. A utility-oriented evaluator may find
solace in identifying such potential uses and ensuring that the evaluation will satisfy them. Fourth,
there may be important consequences of evaluation that do not conform to more common, traditional
ideas about evaluation use. In many cases, the mere existence of an evaluation may help greatly in
terms of the quality of program implementation. The presence of an evaluator, the knowledge that
measures will be taken and observations made, the anticipation of evaluation reports being produced: these and related factors may have several beneficial consequences. They may, for example,
help keep program developers and managers from ignoring the ongoing implementation challenges of
the program while they deal with other demands. The existence of an evaluation may also increase the
attention of program staff to implementation, including their efforts to implement in ways that are con-
sistent with the program model. At the extreme, fraud and other inappropriate behaviors may be less
likely if an evaluation is taking place and could uncover them. Fifth, the consequences of evaluation
can go beyond the program being evaluated and even beyond the organization in which that program
exists. In particular, for a Center of the kind Cooksy describes, an adequate portfolio of projects main-
tains and builds evaluation capacity. Capacity building or maintenance for a center includes increasing
the individual and collective skills of staff members, and also growing or maintaining a suitable staff-
ing level to ensure the ability to do high-quality evaluation in the future. In short, while economic con-
siderations can of course require trade-offs with respect to evaluation quality, in circumstances of the
type Cooksy describes, the overall picture may be more positive than would be suggested by attention
only to direct, instrumental use. More generally, as Cooksy reminds us, judgments of evaluation qual-
ity may be contextual, and our standards for high quality may be different for an evaluation that takes
place in a context in which prime evaluation partners are not interested in use.
Cooksy also addresses the kinds of supports that are available from within the evaluation
community. I offer three brief observations related to these supports. First, despite the existence
of training programs that specialize in evaluation, it seems clear that far more people come to eva-
luation from various indirect, back door pathways. This increases the need for supports such as
professional development workshops about evaluation. Second, the list of supports also includes tra-
ditional means of conveying information, such as journals and books. Third, Cooksy's presidential
address and resulting article are themselves an example of the worthwhile supports available for
those engaged in the enterprise of evaluation.
Finally, I return to the question of what constitutes evaluation quality. Perhaps a Triple A
rating of evaluation quality should be reserved for those evaluations that not only employ
appropriate methods with rigor and lead to use but also have a certain kind of conse-
quence. Specifically, perhaps the highest, AAA rating of evaluation quality should be reserved for
an evaluation that actually benefits its intended beneficiaries. And in this regard, the key (as opposed
to the secondary or tertiary) intended beneficiaries of an evaluation should be the intended benefi-
ciaries of the program, policy, or practice being evaluated. For example, when evaluating an evalua-
tion of preschool programs, what does evaluation contribute to the young children who are the
current and future preschool attendees? Such benefits, admittedly in conjunction with other criteria
such as technical merits, may be the ultimate dimension of evaluation quality.
References
American Evaluation Association. (2004, July). Guiding principles for evaluators. Retrieved from http://www.
eval.org/Publications/GuidingPrinciples.asp
Cooksy, L. J. (2011). Influences on evaluation quality. American Journal of Evaluation. Current issue.
Fitzpatrick, J. (2008). Exemplars' choices: What do these cases tell us about practice? In J. Fitzpatrick, M. M. Mark, & C. A. Christie (Eds.), Evaluation in action: Interviews with expert evaluators. Thousand Oaks, CA: Sage.
Mark, M. M., Henry, G. T., & Julnes, G. (2000). Evaluation: An integrated framework for understanding, guid-
ing, and improving policies and programs. San Francisco, CA: Jossey-Bass.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use.
New York, NY: Guilford.
Schwandt, T. A. (2008). Educating for intelligent belief in evaluation. American Journal of Evaluation, 29,
139-150. doi:10.1177/1098214008316889
Scriven, M. (1996). Types of evaluation and types of evaluator. American Journal of Evaluation, 17, 151-161.
doi:10.1177/109821409601700207
Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program
evaluators. American Journal of Evaluation, 26, 43-59. doi:10.1177/1098214004273180
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards:
A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.