Working with Assumptions in International Development Program Evaluation: With a Foreword by Michael Bamberger

Ebook · 465 pages · 4 hours
About this ebook

This book discusses the crucial place that assumptions hold in conceptualizing, implementing, and evaluating development programs. It suggests simple ways for stakeholders and evaluators to 1) examine their assumptions about program theory and environmental conditions and 2) develop and carry out effective program monitoring and evaluation in light of those assumptions. A survey of evaluators from an international development agency reviewed the state of practice on assumptions-aware evaluation. This 2nd edition has been updated with further illustrations, case studies, and frameworks that have been researched and tested in the years since the first edition.

Regardless of geography or goal, development programs and policies are fueled by a complex network of implicit ideas. Stakeholders may hold assumptions about purposes, outcomes, methodology, and the value of project evaluation and evaluators, assumptions that may or may not be shared by the evaluators. A major barrier to viable program evaluations is that development programs are based on assumptions that often are not well articulated. In designing programs, stakeholders often lack clear outlines for how implemented interventions will bring about desired changes. This lack of clarity masks critical risks to program success and makes such programs challenging to evaluate. Methods that attempt to address this dilemma have been popularized as theory of change or other theory-based approaches. Often, however, theory-based methods do not sufficiently clarify how program managers or evaluators should work with the assumptions inherent in the connections between the steps. The critical examination of assumptions is essential for effective evaluation and evaluative thinking.

"How does one think evaluatively? It all begins with assumptions. Systematically articulating, examining, and testing assumptions is the foundation of evaluative thinking…  This book, more than any other, explains how to build a strong foundation for effective interventions and useful evaluation by rigorously working with assumptions." 
Michael Quinn Patton, PhD. Author of Utilization-Focused Evaluation and co-editor of THOUGHTWORK: Thinking, Action, and the Fate of the World, USA.

"This updated edition presents us with a new opportunity to delve into both the theoretical and practical aspects of paradigmatic, prescriptive, and causal assumptions. We need to learn, and apply these insights with the deep attention they deserve."  
Zenda OfirPhD. Independent Evaluator, Richard von Weizsäcker Fellow, Robert Bosch Academy,    Berlin, Germany. Honorary Professor, School of Public Leadership, Stellenbosch University, South Africa.

“This thought-provoking book explains why assumptions are an essential condition within the theories and methodologies of evaluation, and how assumptions influence the ways that evaluators approach their work… It will enrich the ways that evaluators develop their models, devise their methodologies, interpret their data, and interact with their stakeholders.”
Jonny Morell, Ph.D., President, 4.669… Evaluation and Planning, Editor Emeritus, Evaluation and Program Planning

Language: English
Publisher: Springer
Release date: Nov 27, 2019
ISBN: 9783030330040


    Book preview

    Working with Assumptions in International Development Program Evaluation - Apollo M. Nkwake

    © Springer Nature Switzerland AG 2020

    A. M. Nkwake, Working with Assumptions in International Development Program Evaluation, https://doi.org/10.1007/978-3-030-33004-0_1

    1. Working with Assumptions: An Overview

    Apollo M. Nkwake¹

    (1) Questions LLC, Maryland, USA

    Most policymakers are unaware of the fact that much of their action rests on assumptions and, moreover, they are unaware of the particular set of assumptions they hold. Even worse, policy makers are generally unaware of any methods that can help them in examining and assessing the strengths of their assumptions. Unfortunately, despite this need, few academicians have shown interest in developing methods to help policymakers examine their assumptions.

    – Mason and Mitroff (1981, p. 18)

    Abstract

    This chapter outlines the key discussions in this book. The first two parts of the book are theoretical, intended to review literature on program evaluation themes that are most closely related to assumptions. The last two parts focus on more practical discussions about how to explicate and evaluate program assumptions, including findings from a survey on assumptions-aware evaluation practice.

    Keywords

    Program evaluation · Evaluation profession · Assumptions ubiquity · Assumptions inevitability · Simplification · Complexity

    Challenges of a Growing Profession

    I don’t always find it easy to explain my job to people in other professions. It’s not that I don’t know or I don’t want to say what I do. It’s just that when I tell them whatever I tell them, they don’t seem to get it. Once, when I returned to the USA, an immigration official asked what I did for a living. “I evaluate development programs,” I responded. The official replied, “So your company builds roads in communities and your work is to check how well it has happened.” That’s a bit off target, but a lot closer than many guesses I’ve heard over the years. My wife never has to explain what she does when she tells people that she is a pharmacist. But she once told me, “People ask me what you do, and I don’t know what to tell them.” She knows what I do. She just finds the same problem I find in explaining what it actually is.

    I suspect that many evaluators have encountered this challenge. In January 2012, evaluator Nora Murphy asked the evaluation community to share its elevator speeches. The elevator speech is a well-known exercise: sum up in a minute what you do to earn your living. More than 30 evaluators responded. Many agreed that they struggled with explaining evaluation. One said she had distributed basic evaluation books to family and friends. It seemed to help; she now gets fewer questions about her work. Some other elevator speeches:

    I work with people who are passionate about what they do to help assess three things: Did we do the right things? Did we do things right? What could we do better?

    I work with people who collect the data that informs the decisions that make the world a better place.

    The profession of an evaluator is thus revealing various expectations, collecting multi-dimensional evidence, and then comparing and contrasting the similarity and differences for the intended uses.

    The final example may represent part of the communication gap. Even my wife might have to think twice to get the point of that statement.

    While the views in this discussion cannot be generalized, they do show that evaluators are passionate about their work and see their jobs as important. It must be frustrating indeed for such passionate professionals to find that people don’t know much about their discipline. Fortunately, most evaluators keep moving forward and doing good work. And we should not assume that their profession is a recent one.

    Evaluation is not a young discipline. Evaluation practice can be traced as far back as the late 1940s and early 1950s, mostly within such US-based organizations as the World Bank, United Nations, and US Agency for International Development (USAID). In the early days, the focus was more on appraisal than on evaluation. By 1986, the discipline was rapidly expanding and creating new frontiers (Scriven, 1986). In 2005, Vestiman wrote of evaluation as a discipline in its adulthood. Among the major catalysts for the growth of the evaluation profession was the increased interest in and demand for development effectiveness (Hughes & Hutchings, 2011; Segone, 2006). The demand for documented and preferably positive results of development activities has increased markedly over the past 30 years. These programs must demonstrate that they are not only well intended but also effective in advancing the economic, health, welfare, and/or social conditions of the recipient population.

    The increasing importance of demonstrated effectiveness was documented in the Paris Declaration on Aid Effectiveness, hosted by the French government on 2 March 2005 and signed by more than 100 donor and developing countries. Parties resolved to reform aid to increase its effectiveness in combating global poverty. One of the declaration’s principles, Managing for Results, focuses on the obligation of both donor and recipient countries to ensure that resource management and program decision-making concentrate on producing the expected results. Donors agreed to fully support developing countries’ efforts to implement performance assessment frameworks to help track progress toward key development goals (OECD, 2006). This is just one example of how development actors are concerned not only with effectiveness but also with the measurement and documentation of that effectiveness.

    Measuring the level and nature of effectiveness is increasingly understood as an essential part of being effective in both current and future efforts. It’s little wonder that such mantras as “you cannot manage what you cannot measure” and “what gets measured gets done” continue to intrigue us. It’s also true that effectiveness and the measurement of effectiveness have sometimes been mistaken for synonyms. As Forss, Marra, and Schwartz (2011, p. 6) note:

    Accountability fever, results-based management and the evidence-based policy movement contribute to a sense that everything can and should be evaluated. In many jurisdictions, evaluation is a standard operating procedure—something that should please evaluators as there is an abundance of work to be done.

    The recognition that evaluation is a critical element of identifying and measuring development effectiveness (or the lack thereof) has been accompanied by significant investment in development evaluation (Bamberger, 2006). There is much work to do, and there are many questions about how best to do it. Debates abound about the best methods and tools to use in what situations and the best time to use them. There is almost always tension between what stakeholders want to know about development programs and what evaluations can tell them within the programs’ funding, design, and time constraints.

    Assumptions in Responding to Evaluation’s Challenges

    How have evaluators addressed these challenges? Sometimes they have adjusted their tools and methods to make them better suited to unexpected complications. Sometimes the response has been to measure what is easy (or possible) to measure. Patton (1997, 2010) has illustrated this behavior with a clever analogy. A person was seen looking for his watch under a light. He was asked where he could have dropped it. “In the grass,” he replied. He was then asked why he was not looking for the watch in the grass. He responded, “Because there is no light there.” Often when we need to look for answers in dark (complex) places, we look where the light is, instead of getting a flashlight to create light where we need it. Perhaps we can always find something where the light is, even though it’s not what we’re looking for. The logical action, fetching a flashlight to look for what we want to find, is frequently off the table, constrained or ruled out by a shortage of resources, lack of time, and other contextual circumstances.

    What if the watch were dropped only where there was light? It would be much easier to find. Dropping the watch in a lighted area may feel like a stretch for this analogy, but programs are designed with some (and generally a lot of) intentionality. Their goals, objectives, and timeframes (short, intermediate, or long term) provide some structure and organization. But in too many cases, the design leaves much of the ground to be covered in the dark.

    Weiss (1995) argued that a major reason that complex programs are so difficult to evaluate is that the assumptions that inspire them are poorly articulated. Stakeholders are often unclear about how the change process will unfold. In western Uganda, a mother taught her young daughter never to eat food in the saucepan, but to put it on a plate. To ensure that the daughter obeyed, the mother told her that if she ever did otherwise, her stomach would bulge. The daughter kept this in mind. One day, when the two visited a local health center, they sat next to a pregnant woman in the waiting room. The girl pointed at the pregnant woman’s bulging belly and announced, “I know what you did!” The pregnant woman was not pleased, the girl’s mother was embarrassed, and the girl was puzzled about why she was getting stares from the adults in the room. The problem was that the pregnant woman and the girl had two different theories about the cause of bulging stomachs.

    The same dilemma surfaces in evaluating development programs, especially complex programs. All too often, an evaluation has many stakeholders with widely varied perspectives. Each may have a completely different theory about how the program intervention(s) will cause the desired change (or even about what change the stakeholder seeks). There is seldom a process to help all stakeholders clarify their theories of change and their expectations. These are seldom made explicit. As a result, critical information that could help explain why the interventions succeeded or failed is not captured, and some stakeholders are left with a sense that the intervention failed because it did not meet their personal expectations.

    Stakeholders involved in program design and operation are often unclear about how the change process will unfold. They often gloss over the early and midterm changes that are needed to bring about longer-term changes (Weiss, 1995). This lack of clarity about the interim steps needed to achieve the eventual outcome makes the task of evaluating a complex initiative challenging and reduces the likelihood of a useful outcome in understanding program successes as well as failures. Equally important, it reduces the likelihood of successful program replication.

    Assumptions are not only inevitable but also necessary, because evaluators use models or frames to simplify complex reality in order to recognize and prioritize the relationships among elements that matter. With regard to the inevitability of assumptions, Mason and Mitroff (1981, p. 18) have correctly stated that:

    Complex problems depend on assumptions because it is not humanly possible to know everything of importance about a problem of organized complexity prior to the taking of action. If the policy maker deferred engaging in action before everything of critical importance was known with complete assurance, action would be postponed indefinitely. No action would ever be taken. Policy makers cannot afford this luxury. Of necessity, a policy maker must take some action and so he or she must make a host of major and minor assumptions about a problem situation.

    Assumptions exist in the elements of reality that are excluded from the model. Assumptions have consequences. Making assumptions is inevitable, and the consequences of assumptions are also inevitable. Unexamined assumptions can be a risk to program success and useful evaluation. Nonetheless, it is important to use caution in examining assumptions and prioritize the most important ones. Assumptions-aware evaluation practice starts with understanding what assumptions are and which categories are worth examining.

    Both the ubiquity of assumptions and the importance of examining them are illustrated in the work of Richard Thaler, a professor in the University of Chicago’s Booth School of Business. Thaler was awarded the 2017 Nobel Prize in economics for his groundbreaking work in establishing the field of behavioral economics, which blends psychology with economics to better understand human decision-making. He questions the traditional assumption that markets act rationally, and he takes human nature into account to challenge what he calls a distorted view of human behavior. Economists assume that human beings are rational maximizers who use their large brains to calculate every decision as they strive to reach concrete objectives. But, as Thaler noted in a 2017 radio interview:

    After the ’87 crash, when the market fell 20 percent in a day, and the internet bubble, when the Nasdaq went from 5000 to 1400, and then the real estate bubble, which led to the financial crisis [of 2008] … the idea that markets work perfectly is no longer tenable. [The interview mentions the concept of sunk costs, time and money already spent, and notes that doctrinaire economists assume that everyone knows when to quit, cut their losses, and move on.] Let’s suppose you bought tickets to go to a concert … at 40 bucks each. And the group is OK, but then it starts to rain. How long do you think you’re going to stick around at this concert? … Not much … [Asked whether the decision would be different if the tickets had been more expensive] … Well, economists would say how much you paid for the ticket, tough luck, if it’s $40 or $500. You should just decide whether the music is worth the annoyance of the rain.

    [The interviewer notes that Thaler has been honored for recognizing that people don’t always act rationally when making economic decisions.] Well, yes, although pointing that out is obvious to everybody, except economists. So in some ways, it’s pointing out the obvious. But I think the contribution that I have made, and the young economists following in my footsteps have made, is saying, OK, what follows from there? How should we do things differently if people aren’t perfect? … people are nicer than economists give us credit for. We’re more likely to contribute to charity. Or look at all the volunteers in hurricanes and other natural disasters. Economists have no explanation for why people would work for days trying to clear rubble in an earthquake. So that’s the nature of humans. I guess we call it human nature. And incorporating human nature into economics is what I’ve been trying to do for 40 years.¹
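    Thaler’s concert example can be made concrete with a little arithmetic. The sketch below is a hypothetical illustration, not drawn from the book or the interview: a “rational” decision rule ignores the sunk ticket price and leaves the rainy concert whether the ticket cost $40 or $500, while a sunk-cost rule, with an invented weight on the amount already paid, leaves after a cheap ticket but stays after an expensive one. All utility numbers and the weight are made up for illustration.

```python
# Hypothetical sketch of the sunk-cost reasoning Thaler describes. The ticket
# price is already spent, so a "rational" rule compares only the future
# benefit (the music) with the future cost (the rain); the sunk-cost fallacy
# lets the amount already paid sway the decision to stay.

def stay_rational(enjoyment: float, rain_annoyance: float) -> bool:
    """Ignore the sunk ticket price: stay only if the music beats the rain."""
    return enjoyment > rain_annoyance

def stay_with_fallacy(enjoyment: float, rain_annoyance: float,
                      ticket_price: float) -> bool:
    """Let the sunk cost count: the more we paid, the harder it is to leave.
    The 0.1 weight on the sunk amount is an invented illustration."""
    return enjoyment + 0.1 * ticket_price > rain_annoyance

# Invented utilities: the annoyance of the rain outweighs the music.
enjoyment, rain_annoyance = 30.0, 50.0

for price in (40.0, 500.0):
    print(f"${price:>5.0f} ticket: "
          f"rational stays? {stay_rational(enjoyment, rain_annoyance)}, "
          f"fallacy stays? {stay_with_fallacy(enjoyment, rain_annoyance, price)}")
```

    Running the sketch, the rational rule leaves in both cases (its answer cannot depend on the price), while the sunk-cost rule flips from leaving at $40 to staying at $500, which is exactly the behavior Thaler says economists cannot explain away.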

    Assumptions are what we believe to be true (Vogel, 2012). They may be implicit or explicit, examined or unexamined. It is difficult to exclude assumptions from what we think we know. Knowledge often rests on assumptions that you are justified in making even though you do not know those assumptions to be true. You may know things that are in some sense supported by things you merely take for granted. For example, you may know that your car is outside in front of your house. Your knowledge rests on various assumptions that you do not know but justifiably take for granted, such as the fact that no one has taken your car away since you parked it in front of your house an hour ago (Sherman & Harman, 2011).

    Assumptions-aware evaluation often involves explicating tacit assumptions.

    A tacit assumption or implicit assumption is an assumption that includes the underlying agreements or statements made in the development of a logical argument, course of action, decision, or judgment that are not explicitly voiced nor necessarily understood by the decision maker or judge. Often, these assumptions are made based on personal life experiences, and are not consciously apparent in the decision-making environment. These assumptions can be the source of apparent paradoxes, misunderstandings and resistance to change in human organizational behavior (Wikipedia, n.d.).²

    The Focus of This Book

    Building on the growing volume of work on theory-driven evaluation, this book discusses the crucial place that assumptions hold in conceptualizing, implementing, and evaluating development programs. It suggests simple ways for stakeholders and evaluators to (1) make explicit their assumptions about program theory and environmental conditions and (2) develop and carry out effective program monitoring and evaluation in light of those assumptions.

    The concept of assumptions is a sizable but relatively unexplored field. In no way do I pretend to discuss it comprehensively. We live a great deal of our lives based on assumptions, and we do not behave any differently in designing or implementing development programs. This book attempts to demonstrate how vital it is to recognize when we make assumptions and what assumptions we make. When we plan, we should not assume that we haven’t assumed.

    The first six chapters set the context for discussing tacit program assumptions. Chapter 2 covers the complexity of development programs and the types of complexity. Chapter 3 considers frameworks for designing programs in complex contexts, and Chap. 4 reviews methodologies for evaluating complex programs. Chapter 5 locates assumptions within theory. The building blocks for theory (including a program theory) are concepts, and assumptions are the glue that holds the blocks together. Chapter 6 defines assumptions, and Chap. 7 outlines their major roles.

    Assumptions may be understood as beliefs that we take for granted about how the world works. They may seem too obvious to require explication (Brookfield, 1995). Stakeholders often take for granted their beliefs or expectations about how a program should (or will) work. These taken-for-granted assumptions are the focus of this book. They may even be tacit assumptions, unarticulated by any of the stakeholders. Chapters 8, 9, and 10 outline the categories of tacit assumptions—normative, diagnostic, prescriptive, and transformational (causal). These tacit assumptions need to be explicated during program design. Chapter 11 points out the need to ensure that more overtly identified assumptions are examined, refined, and assessed for both implementation and evaluation.
