
Missing Links in Evidence-Based Practice for Macro Social Work

Richard Hoefer Catheleen Jordan

ABSTRACT. The paradigm of evidence-based practice includes a process for searching, appraising, and synthesizing evidence to answer a question. Two important elements related to macro social work practice must be addressed in this process. One missing link relates to the imperative to foster client self-determination and empowerment by allowing clients to choose among equally salient interventions. The second missing link is to ensure that the intervention is implemented with fidelity to the original model. This article describes the need for incorporating these elements in evidence-based macro practice. Guidelines are provided for implementing the elements, along with their implications for practice, research, and policy.

KEYWORDS. Client self-determination, evidence-based practice, fidelity, implementation, macro social work practice

INTRODUCTION

Attempting to incorporate the paradigm of evidence-based practice (EBP) in macro social work practice is a challenge.
Richard Hoefer, PhD, and Catheleen Jordan, PhD, are professors in the School of Social Work, University of Texas at Arlington. Address correspondence to Richard Hoefer, School of Social Work, University of Texas at Arlington, Arlington, TX, 76019 (E-mail: rhoefer@uta.edu).

Journal of Evidence-Based Social Work, Vol. 5(3-4) 2008. http://www.haworthpress.com/web/JEBSW. © 2008 by The Haworth Press, Inc. All rights reserved. doi:10.1080/15433710802084292


Since the concept of EBP is fairly new in the social work profession, it has not been fully accepted in macro practice (or even in micro practice, where the idea was first imported from medicine) and, problematically, the process of appraising and using evidence has not been adjusted for the unique elements of macro practice. In fact, as recently as 2004, McNeece and Thyer stated, "Relatively little has been written about evidence-based macro practice" (p. 15).

This article defines EBP and refines the approach by incorporating two important elements that can become missing links. The first link relates to client self-determination and empowerment. The second concerns a significant but often ignored process of assessing the fidelity of the intervention in its implementation, which must be conducted before evaluating outcomes. The process of EBP is amended for application to social work macro practice. Techniques for assessing implementation fidelity are described. Implications for practice, research, and policy are also discussed.

RE-DEFINING THE PROCESS OF EVIDENCE-BASED PRACTICE

Gambrill has written extensively defining evidence-based practice in social work (e.g., see Gambrill, 1990, 1999; Gibbs & Gambrill, 2002). According to her formulation, EBP is not an end state in which one can be an evidence-based practitioner simply on the basis of information one has amassed. Sackett, Richardson, Rosenberg, and Haynes (1997) describe EBP as a problem-solving process consisting of five steps:

1. Convert information needs into answerable questions.
2. Track down, with maximum efficiency, the best evidence with which to answer these questions.
3. Critically appraise the evidence for its validity and usefulness.
4. Apply the results of this appraisal to policy/practice decisions.
5. Evaluate the outcome (p. 3).

Missing Link Number 1: Efficacy of Intervention

According to McNeece and Thyer (2004), EBP also involves providing the client (whether a person, a community, or an organization)


TABLE 1. Evidence-Based Macro Practice Process

Step 1: Convert information needs into a relevant question for practice in a community and/or organizational context.
Step 2: Track down with maximum efficiency the best evidence to answer the question.
Step 3: Critically appraise the evidence for its validity and usefulness.
Step 4: Provide clients with appropriate information about the efficacy of different interventions and collaborate with them in making the final decision in selecting the best practice.
Step 5: Apply the results of this appraisal in making policy/practice decisions that affect organizational and/or community change.
Step 6: Assess the fidelity of implementation of the macro practice intervention.
Step 7: Evaluate service outcomes from implementing the best practice.

Note. Adapted from Sackett et al. (1997).

with appropriate information about the efficacy of different interventions and allowing the client to make the final decision. This step becomes Step 4 in the process of evidence-based macro practice as depicted in Table 1. This insight is important for macro social workers to consider in using this practice paradigm. Allowing client communities and organizations to choose from among the best options available ensures that client self-determination is respected. Because self-determination and empowerment are core values within the National Association of Social Workers' (1999) Code of Ethics, promoting them within macro practice is essential.

When social workers are able to address these ethical hallmarks by providing information about macro interventions that are not only well suited to the situation but also have been shown to be efficacious, many benefits may accrue. Among these are procedural benefits (greater community and organizational backing for the proposed program or practice, more support for overcoming any initial difficulties that may arise, and a greater willingness to take responsibility for institutionalizing the approach) and substantive benefits (a greater likelihood, from the use of an empirically supported intervention, of positive results for the community or organization and the people who participate in them).

Missing Link Number 2: Fidelity of Implementation

Gibbs and Gambrill (2002) make a heroic assumption that the intervention (whether at a micro level or macro level) is implemented


completely and properly. Once knowledge has been applied, they believe that it will stay applied, completely, properly, and consistently, so that one may then easily evaluate the intervention for the outcomes that are produced. This, however, is not necessarily the case. The problem of a lack of implementation, or of implementing an evidence-based practice only partially, while unfortunate in micro social work, is perhaps even more problematic in macro practice. The second missing link becomes Step 6 in the process of implementing an evidence-based macro practice intervention, as shown in Table 1.

Complete and accurate implementation is not usually the case in real life. We want to avoid what has been called a type III error: evaluating a program for its outcomes when that program has not been implemented properly (Hoefer, 1994). Pressman and Wildavsky (1973) noted over three decades ago that programs launched with the best of intentions frequently fail to live up to expectations because they were not implemented as planned. Choosing an empirically validated program because it has been shown to work well and produce positive effects on clients is consistent with EBP at the macro level. But expecting the program to achieve those effects after it has been only partially or poorly implemented is overly optimistic and may produce negative outcomes suggesting that the program does not really work. Verifying that an intervention has been implemented as designed is crucial to evaluating the effectiveness of a macro practice intervention.

WHAT ARE EVIDENCE-BASED INTERVENTIONS?

Evidence-based practice consists of defining a problem, finding the best possible answer to solving that problem (based on research and client preferences), operationalizing that answer, assessing whether the answer was actually put into effect, and then evaluating how well that answer worked in fixing the problem it was supposed to fix. Evidence-based interventions are those "answers to problems" that have been shown, whenever possible through some scientific process, to work. When scientific research is not available or has not been conducted, other sorts of evidence to support the intervention as being efficacious may be used.

In this paradigm, not all evidence is equal, so the use of EBP implies being able to evaluate the research base put forward to support each possible intervention. McNeece and Thyer (2004, p. 10) indicate


the following forms of evidence and their strength (from strongest to weakest):

Systematic reviews/meta-analyses
Randomized controlled trials
Quasi-experimental studies
Case-control and cohort studies
Pre-experimental group studies
Surveys
Qualitative studies

Still, even if an intervention has credible scientific evidence to support its efficacy (at whatever level of social work we wish to examine), if the intervention is not actually implemented completely and correctly, we cannot suggest that the intervention as designed is being implemented. Carrilio (2006) argues that examining program implementation is important in understanding program effectiveness. Thus, choosing an "evidence-based intervention," in and of itself, should not be enough for any practitioner. The intervention model must be put through implementation fidelity testing so that the practitioner can be assured that the techniques of the model are being applied correctly.

Intervention planning for a particular locale may include modifications of non-essential elements of the program to have it better fit the situation in that locale. The Centers for Disease Control (CDC), for example, have information on a number of different programs regarding HIV/AIDS prevention efforts. Each program described has core elements that are required and may not be altered. Core elements are based on behavioral theory and "are thought to be responsible for the intervention's effectiveness" (Centers for Disease Control, 2006, April, p. 15). The CDC (p. 16) suggests that "key characteristics" of an intervention can be altered to fit the organization and clients. CDC guidelines include procedures to follow during implementation as well as resource requirements. The guidelines promulgated by the CDC are helpful in showing the importance of having a clear understanding of what a community-based intervention is, what is vital to replicate, and what is adaptable.

The Substance Abuse and Mental Health Services Administration (SAMHSA) within the federal government has taken a few additional steps in specifying the need for implementation monitoring. It uses the term "fidelity assessment" to describe this process. According to SAMHSA (2006), fidelity is defined as follows:


Fidelity is the degree to which a specific implementation of a program or practice resembles, adheres to, or is faithful to the evidence-based model on which it is based. Fidelity is formally assessed using rating scales of the major elements of the evidence-based model.
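To make the idea of a rating-scale fidelity assessment concrete, the following Python sketch tallies an overall fidelity score from ratings of a program's major elements. It is a minimal illustration, not part of the SAMHSA definition: the element names, the 1-5 scale, and the 80% adherence threshold are all assumed for the example.

```python
# Minimal, hypothetical sketch of tallying a rating-scale fidelity assessment.
# Element names, the 1-5 scale, and the 80% threshold are illustrative assumptions.

from statistics import mean

# Ratings of major model elements, 1 (not implemented) to 5 (fully implemented)
ratings = {
    "core_curriculum_delivered": 5,
    "required_staff_training": 4,
    "intended_session_dosage": 3,
    "target_population_reached": 4,
}

def fidelity_score(element_ratings, scale_max=5):
    """Return the mean element rating as a proportion of the scale maximum."""
    return mean(element_ratings.values()) / scale_max

score = fidelity_score(ratings)
print(f"Overall fidelity: {score:.0%}")  # Overall fidelity: 80%
print("Adequate adherence" if score >= 0.80 else "Review implementation before outcome evaluation")
```

A real fidelity scale would, of course, anchor each rating behaviorally and be validated against the model developers' criteria, as discussed later in this article.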

CAN MACRO PRACTICE BE EVIDENCE-BASED?


The most vexing issue for EBP in the macro arena is the extent to which community and administrative practice can be theoretically based, tested, generalized, manualized, implemented, and declared successful. We suggest that the most pressing need for social work macro practice is to more clearly specify what its interventions actually are. This specification process requires developing a model, either deductively or inductively, to address a macro issue, propose a course of action, and specify tasks to be completed in implementing the model. Netting, Kettner, and McMurtry (2004) define macro practice as "professionally guided intervention designed to bring about planned change in organizations and communities" (p. 4). In order for macro social work practice to be evidence-based, then, interventions must be systematically developed from theory, tested in practice, revised as needed for greater effectiveness and for different conditions, standardized as much as possible, and communicated to others.

In theory, macro social work practice can be evidence-based as much as micro social work can be (Thyer, 2001), simply by following the steps illustrated in Table 1. Still, many arguments have been advanced as to why evidence-based macro practice is not possible or is unwise. Hewing closely to the logic of Gibbs and Gambrill (2002), let us look at these objections and how they may be countered.

ARGUMENTS AGAINST AND COUNTER-ARGUMENTS FOR EVIDENCE-BASED MACRO PRACTICE

EBP Ideas Only Apply to Clinical Practice, Not Macro Practice
Some may argue that because EBP was initially implemented in clinical medical settings, it is difficult or impossible to have the


concepts apply to macro social work with a focus on communities and organizations. The argument is that clinical social work concepts just don't apply to administration and community practice. We consider this a misunderstanding of what EBP is. To say that one cannot apply the process involved in EBP because of its origin in a clinical setting is akin to saying that one cannot become a community organizer or administrator if one has begun a career in clinical social work. With some re-orientation and effort, we believe the concepts of EBP can successfully be transferred to macro social work practice.

There Is Not Enough Credible or Useful Evidence to Support the Use of EBP in Macro Practice

This may be the most popular critique of EBP in general, not just in the macro social work arena. We must recall an important point about macro practice, though: "In today's world, macro practice is rarely the domain of one profession. Rather, it involves the skills of many disciplines and professionals in interaction" (Netting et al., 2004, p. 7). We must therefore remember to search for evidence beyond the social work literature. Considerable evidence is available in related disciplines and fields such as sociology, political science, community psychology, public health, nonprofit administration, business administration, public administration, and so on, in both American and other sources. Of course, this evidence has to be evaluated and perhaps adapted to fit the social work context, but that is all part of the process of EBP. Even if there is little or no research-based evidence with higher levels of credibility, evidence based on practice wisdom or anecdote is still considered better than a random guess. Also, literature exists in other countries where social work takes on a more macro orientation; see, for example, the Swedish Institute for Evidence-Based Social Work Practice (English version available at http://www.socialstyrelsen.se/en/about/IMS/index.htm).

EBP Ignores the Practice Wisdom of Macro Practice

Because so much of what happens in macro practice seems idiosyncratic to the situation, generalizations are few and far between. This argument is similar to the previous one. Practice wisdom is not ignored, but it is relegated to a lower status of evidence when assessing which intervention should be chosen.


EBP Ignores the Values of the Client Community or Organization by Being One Size Fits All

This argument completely ignores the process used to choose an intervention as described above. The practice principle emphasized by McNeece and Thyer (2004) allows the client, in the context of a community or organization, to choose the intervention once all the information is collected, appraised, and synthesized. EBP clearly does not ignore the values of the client system in macro practice.

DO ANY MACRO INTERVENTIONS FIT THE EBP CRITERIA?

Given macro social work's need for clearly delineated interventions, we would like to assure those interested in using EBP in the macro arena that numerous examples of clearly specified community and administrative interventions exist. Assessing how well other settings have implemented these programs is made much easier because of the programs' detailed specifications.

Community Practice Examples

Ohmer and Korr (2006) reviewed research to assess the level of evidence available to guide community practice. Their review, covering 269 articles published over a 16-year period, included nine that used some type of experimental controls in the research. The Centers for Disease Control (2006) provide a dozen examples of programs that have been implemented to diffuse effective behavioral interventions to prevent additional HIV infections. Each of these programs is presented with a clear differentiation between core elements, key characteristics, procedures, resource requirements, policies and standards, quality assurance plans, and monitoring and evaluation efforts. The same document by the CDC discusses other activities, services, and strategies that are supported by research. Examples of these strategies include comprehensive risk counseling and services; HIV counseling, testing, and referral; and incorporating HIV prevention into the medical care of persons living with HIV.

The Substance Abuse and Mental Health Services Administration (2006) also has considerable information available describing


programs and practices that have been researched and found effective. It disseminates this information through its National Registry of Evidence-based Programs and Practices (NREPP), available at www.nrepp.samhsa.gov. All programs in this registry are considered "evidence-based," meaning that they are conceptually sound and internally consistent, have program activities that are related to their conceptualization, and have been reasonably well implemented and evaluated. The programs on the registry are divided into three categories, depending on the strength of the research that supports them: Promising (some positive outcomes), Effective (consistently positive outcomes, strongly implemented and evaluated), and Model ("effective" programs available for dissemination and with technical assistance available from the program developers).

Administrative Practice Examples

One of the more useful articles in the area of evidence-based interventions is an analytic literature review of the topic. Many of the social work texts related to administrative practice do not appear to be evidence-based. These authority-based information products are primarily descriptive and prescriptive, a failure on the part of their authors.

The lack of evidence-based practices in social work texts is not due to a paucity of research. For example, a considerable amount of interesting research on nonprofit organization effectiveness and leadership was conducted in the mid to late 1990s by Herman and Renz (1998, 1999) and others, such as Rojas (2000) and Smith (1999). Little of this has filtered into social work management texts but could be incorporated into research and practice with only minor translation (at most) into social work language.

In another area of administrative practice, the organization of case management services, Ziguras and Stuart (2000) conducted a meta-analysis of the effectiveness of different types of case management. Based on their analysis of 44 studies, they argue that assertive case management has some demonstrable advantages over clinical case management in terms of positive outcomes for clients, although both types of case management were stronger than the "usual" treatment approach. This evidence, if disseminated properly within social work, should encourage macro social workers to abandon "usual" treatment approaches for evidence-based practices and interventions.


Policy Practice Examples

In this case, an evidence-based macro worker would be looking less for details on the program being implemented and more for evidence on the process of trying to affect policy. Political science books authored by Birkland (2005), Gerston (1997), and Sabatier (1999) present analyses of how best to affect public policy. The leading proponent of policy as a practice field in social work is Jansson (2002), who has consistently incorporated recent research from political science and other fields into his policy practice-related textbooks. Hoefer (2006), in explaining how to conduct advocacy practice, relies heavily on empirical research to make recommendations on how best to conduct advocacy. One particular empirically based area of focus within the book is how to be influential, a topic that draws heavily on the work of Cialdini (2000). Hoefer explores communications theory and other empirically supported research as they relate to advocacy and policy practice. Hoefer also has a series of journal articles in which he addresses questions regarding the effectiveness of interest groups trying to affect social policy (Hoefer, 2000, 2001, 2002, 2005; Hoefer & Ferguson, 2007). His accumulated findings provide empirical support for interest in the paradigm of evidence-based practice.

Beyond what is described here, several web-based sources can also be consulted:

The Campbell Collaboration, particularly its C2-RIPE (Campbell Collaboration Reviews of Interventions and Policy Evaluations) social welfare database, which had 41 listings as of October 13, 2006, at www.campbellcollaboration.org.
The Centers for Disease Control Diffusion of Effective Behavioral Interventions (DEBI) project, which can be found at www.effectiveinterventions.org.
The Centers for Disease Control Replicating Effective Programs (REP) project, available at www.cdc.gov/hiv/projects/rep/default.htm.
The Coalition for Evidence-Based Social Policy (social programs that work), which can be found at www.evidencebasedprograms.org.
The Cochrane Collaboration, which focuses on health care, including mental health, and is available at www.cochrane.org.
Columbia University's Evidence-Based Practice and Policy Online Resource Training Center, available at http://www.columbia.edu/cu/musher/Website/Website/EBP_Resources_WebEBPP.htm.


The Swedish Institute for Evidence-Based Social Work Practice's compilation of effective practices in the areas of substance abuse, child and adolescent welfare, economic aid (social assistance), ethnicity, migration, and social work theory and practice of evaluation, available at http://www.socialstyrelsen.se/en/about/IMS/index.htm.

PROCESSES FOR EVALUATING IMPLEMENTATION

Recognizing that there is a need for assessing or evaluating the implementation process still leaves us with the question of how to conduct that assessment. Werner (2004) describes some of the challenges in designing and conducting a study to document the implementation of a program:

1. Developing an initial idea of what to observe, whom to interview, and what to ask;
2. Sorting through conflicting or even contradictory descriptions and assessments of a program;
3. Dealing with variations over time and across program sites; and
4. Combining quantitative and qualitative data in cogent and useful ways (p. 8).

Qualitative data are important to the implementation fidelity process and may be collected by interviewing program participants: staff, consumers, and other stakeholders. Questions that are appropriate for implementation research include the following:

Are program goals, concepts, and design based on sound theory and practice? If not, in what respects are they not?
What types and levels of resources are needed to implement the program as planned? Are the needed resources in place? If not, how and why not?
Are program processes and systems operating as planned? If not, how and why?
Is the program reaching the intended target population with the appropriate services at the planned rate and dosage? If not, in what respects is it not, and why? (adapted from Werner, 2004, pp. 15-19)
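As a rough sketch of how such interview material might be kept organized for later comparison with quantitative fidelity data, the snippet below groups Werner-style questions by domain and records answers together with their "if not, why?" follow-ups. The domain labels and the recorded example are hypothetical; the question wording paraphrases the list above.

```python
# Illustrative sketch only: grouping Werner-style implementation questions by domain
# so qualitative answers can later be compared with quantitative fidelity data.
# The domain labels are assumptions; the question text paraphrases Werner (2004).

implementation_protocol = {
    "theory_and_design": [
        "Are program goals, concepts, and design based on sound theory and practice?",
    ],
    "resources": [
        "What types and levels of resources are needed to implement the program as planned?",
        "Are the needed resources in place?",
    ],
    "processes_and_systems": [
        "Are program processes and systems operating as planned?",
    ],
    "reach_and_dosage": [
        "Is the program reaching the intended target population at the planned rate and dosage?",
    ],
}

def record_response(findings, domain, question, answer, follow_up=""):
    """Store an interview answer (and any 'if not, why?' follow-up) under its domain."""
    findings.setdefault(domain, []).append(
        {"question": question, "answer": answer, "follow_up": follow_up}
    )
    return findings

# Hypothetical example of recording one stakeholder's answer
findings = {}
record_response(
    findings,
    "resources",
    implementation_protocol["resources"][1],
    answer="No",
    follow_up="Second case manager position is still unfilled.",
)
print(findings["resources"][0]["follow_up"])
```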


Qualitative data from such questions may then be verified by collecting and analyzing quantitative data, as described in the next section.

TECHNIQUES OF ASSESSING IMPLEMENTATION FIDELITY


Whenever possible, it is preferable to use a pre-developed implementation fidelity measure that accompanies a particular evidence-based intervention. Using a measure that has already been developed will ensure that you are assessing the implementation of the most important elements of the program and will allow you to compare the state of implementation with that of other program users. This will also allow your data to be added to the literature because of their comparability with other implementation fidelity studies. If this is not possible, however, you can develop your own implementation fidelity instrument. Bond et al. (2000) provide a 14-step approach, as depicted in Table 2, to developing an implementation fidelity scale for a program that does not have a ready-made or standardized fidelity measure. We will briefly describe this approach.

TABLE 2. Steps for Developing a Fidelity Measure

Stage 1
  Step 1: Define the purpose of the fidelity scale
  Step 2: Assess the degree of model development
  Step 3: Identify model dimensions
  Step 4: Determine if appropriate fidelity scales already exist
Stage 2
  Step 5: Formulate fidelity scale plan
  Step 6: Develop items
  Step 7: Develop response scale points
  Step 8: Choose data collection sources and methods
  Step 9: Determine item order
  Step 10: Develop data collection protocol
Stage 3
  Step 11: Train interviewers/raters
  Step 12: Pilot the scale
  Step 13: Assess psychometric properties
  Step 14: Determine scoring and weighting of items

Source. Bond et al. (2000).


In some organizations, many of the steps may require outside consultant assistance.

Stage 1: Preparing for Scale Development

Step 1: Define the Purpose of the Fidelity Scale. Depending on the purpose of the research effort, the scale will be more detailed or less, more expensive to use or less, and so on.

Step 2: Assess the Degree of Model Development. If the model is well developed (that is, it is clear what is supposed to be done, when, and with whom), one can use more detailed and quantitative research techniques. If the model is not well developed, it will be more important to assess the implementation of the program more heuristically and qualitatively.

Step 3: Identify Model Dimensions. Assuming a well-specified model, one must then identify which elements of the model are the most important and which differentiate it from other approaches to helping clients. These elements can be determined by experts in the use of the model or, more loosely, by people on your staff who are well versed or trained in the model.

Step 4: Determine if Appropriate Fidelity Scales Already Exist. This step consists of answering three questions: How closely is the staff trying to replicate an existing program model? How well defined is that model? How adequate are existing fidelity instruments? If there is a well-defined model with an adequate instrument, one should use the pre-existing measure. If the situation does not involve replicating an existing model, if the model being replicated is not well defined, or if the existing instrument is not well constructed, proceed to Step 5.

Stage 2: Scale Development

Step 5: Formulate Fidelity Scale Plan. According to Bond et al. (2000, p. 48), "A scale plan states the model dimensions, gives definitions, and outlines the number and possible content of items required to tap into those definitions." The plan is the road map guiding the instrument's creation; it ensures that items will be consistent with the model and that no elements will be left out, and it serves as a way for others to check whether the fidelity instrument is appropriate for them.
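For illustration, a scale plan can be represented as simple structured data before any items are written. In the sketch below, the dimension names, definitions, and planned item counts are hypothetical and would, in practice, come from Step 3 and from the model's developers.

```python
# Hypothetical sketch of a fidelity scale plan in the spirit of Bond et al. (2000):
# each model dimension gets a definition and a planned number of items.
# Dimension names, definitions, and item counts are illustrative assumptions only.

scale_plan = [
    {"dimension": "staffing",
     "definition": "Caseload size and team composition specified by the model",
     "planned_items": 3},
    {"dimension": "service_intensity",
     "definition": "Frequency and duration of client contacts",
     "planned_items": 4},
    {"dimension": "community_focus",
     "definition": "Proportion of services delivered outside the office",
     "planned_items": 2},
]

def check_plan(plan):
    """Run basic completeness checks before item writing begins; return total planned items."""
    for entry in plan:
        assert entry["definition"], f"Missing definition for {entry['dimension']}"
        assert entry["planned_items"] > 0, f"No items planned for {entry['dimension']}"
    return sum(entry["planned_items"] for entry in plan)

print(f"Total planned items: {check_plan(scale_plan)}")  # Total planned items: 9
```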


Step 6: Develop Items. Items must be constructed following these principles (Bond et al., 2000, p. 51):

Items should refer to the structure and activities of the program and the behaviors of the staff.
Items should refer to things under the control of the program staff and program administration.
Items should be written to fit the sociocultural context.
Items should be clear and specific.

Additional information on developing good items can be found in Bond et al. (2000, pp. 52-54) and most research textbooks.

Step 7: Develop Response Scale Points. Bond et al. (2000) recommend the following attributes for response scales (the item answers):

A standard number of scale points for every item (e.g., a 5-point scale).
Ordinal scale points approximating equal intervals between each point.
Points that are behaviorally anchored.
No gaps in the response alternatives.
No overlap in the scale points.
Scale points based on the empirical literature.

Step 8: Choose Data Collection Sources and Methods. This step entails choosing who or what will provide the information desired (staff, consumers, and administrators are possible "whos," while agency records are a possible "what"). It also means deciding how the information will be collected (orally, through surveys, by observation, etc.). Each source and each method has pluses and minuses and should be chosen to maximize the benefit for the overall process. Table 3 shows additional tips for developing the data collection strategy.

Step 9: Determine Item Order. Four principles of item ordering increase the chances of getting full and accurate information from respondents (Bond et al., 2000, pp. 62-63):

Ask easy items at the beginning. This helps the respondent feel at ease and willing to answer the questions.
Design the questions in a logical order. This lends coherence to the interview.


TABLE 3. Additional Tips for Developing the Data Collection Strategy

Choose data sources and methods that are congruent with the information needed.
Sources and methods may vary from item to item.
Use multiple sources and methods for each item whenever possible.
Train interviewers and observers.
Build in methods to check data quality.
Assess fidelity at more than one time point.
Assess reliability of ratings.

Source. Bond et al. (2000).

Group related questions together. This maximizes coherence and helps respondents recall information accurately.
Begin with general questions and move to more specific ones within each section.

Step 10: Develop Data Collection Protocol. A data collection protocol tells the evaluator exactly how to collect data. Having such protocols improves inter-rater reliability regarding the presence or absence of intervention elements.

Step 11: Train Interviewers/Raters. While it is obvious that interviewers need to be trained, it is not always so clear what to train them to do. Bond et al. (2000, p. 64) suggest that interviewers should learn:

How to contact respondents and introduce the scale to them.
How the questionnaire is laid out and how the interviewer should progress through it.
How to probe or follow up if initial responses are not on track.
How to code responses and place them on the response scale.
How to interact with the respondent.

Stage 3: Piloting the Scale

Step 12: Pilot the Scale. Piloting the scale is used to identify problems that were not noticed in the previous stage. Piloting is extremely useful for pinpointing where content may be difficult to collect and where different methods might need to be used.

Step 13: Assess Psychometric Properties. Analysis of information relating to the instrument's reliability and validity is done in this step.


As this type of analysis can be very technical, experts should be called in for consultation. The important information from this step tells us whether we can trust the information coming from the fidelity scale.

Step 14: Determine Scoring and Weighting of Items. At times, information from one source or item may be considered to have more importance, or weight, than information from other sources. If this is true in your situation, this is the step in which you take the different levels of importance into account.

Measuring implementation fidelity is a vital part of evidence-based practice for macro social workers. Without knowing the extent to which an evidence-based program has been implemented, it will be difficult, if not impossible, to say whether the program has been successfully transported to a different practice setting. Following the many steps described in this section may take a long time, but the results will be compelling.
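A brief sketch can illustrate the last two steps: a crude exact-agreement check between two raters as one basic indicator of reliability (Step 13) and a weighted fidelity score (Step 14). The items, weights, and ratings below are hypothetical, and a full psychometric analysis (internal consistency, validity evidence, and so on) would go well beyond this.

```python
# Hypothetical sketch of Steps 13-14: a crude exact-agreement check between two raters
# and a weighted fidelity score. Items, weights, and ratings are illustrative assumptions.

items = ["caseload_size", "contact_frequency", "in_vivo_services", "team_meetings"]
weights = {"caseload_size": 2.0, "contact_frequency": 1.0,
           "in_vivo_services": 1.5, "team_meetings": 1.0}

rater_a = {"caseload_size": 4, "contact_frequency": 3, "in_vivo_services": 5, "team_meetings": 4}
rater_b = {"caseload_size": 4, "contact_frequency": 4, "in_vivo_services": 5, "team_meetings": 4}

def exact_agreement(r1, r2):
    """Proportion of items on which two raters give identical ratings (a rough reliability check)."""
    return sum(1 for item in items if r1[item] == r2[item]) / len(items)

def weighted_score(ratings, scale_max=5):
    """Weighted mean rating expressed as a proportion of the scale maximum."""
    total_weight = sum(weights.values())
    weighted_sum = sum(weights[item] * ratings[item] for item in items)
    return weighted_sum / (total_weight * scale_max)

print(f"Exact agreement between raters: {exact_agreement(rater_a, rater_b):.0%}")  # 75%
print(f"Weighted fidelity (rater A): {weighted_score(rater_a):.0%}")               # 82%
```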

IMPLICATIONS FOR THE FUTURE

Several implications emerge from the analysis in this article. They fall into three areas: practice, research, and policy.

Practice

If we truly believe that evidence-based practice is important for macro social workers, it follows that we must begin to adapt the current EBP paradigm in ways that show its vitality and usefulness, such as incorporating crucial elements that might otherwise be missing links. We added a step in the problem-solving process by including the opportunity for clients to choose between equally salient interventions. We also added another step related to assessing the implementation fidelity of the evidence-based intervention. One should not attempt to assess an intervention's outcomes without first determining that the intervention was properly implemented. Macro practitioners should document the steps they take to implement the intervention's model, differentiating between the theoretically derived program components that are essential and program components that are useful but may vary from one context to another.

In addition, as pointed out by the Center for Substance Abuse Treatment (2006), "most EBPs are not universally applicable to all


communities, treatment settings, and clients" (p. 4). Several macro-level issues identified by this center need to be addressed in using the paradigm of evidence-based practice:

Client population characteristics.
Staff attitudes and skills required by the EBP.
Facilities and resources required by the EBP.
Agency policies and administrative procedures needed to support the EBP.
Inter-agency linkages or networks to provide needed additional services.
State and local regulations.
Reimbursement for the specific services to be provided under the EBP (p. 4).

Research

Social workers in the macro arena who are searching for, appraising, and selecting evidence-based interventions want choices. Researchers must partner with front-line workers and managers in designing, implementing, and evaluating these interventions. Research on macro interventions (both community and administrative) has not been sufficiently conducted by social work researchers, and the research from other disciplines has not been sufficiently well integrated into the social work knowledge base to provide additional guidance. All types of research on macro interventions need additional support, ranging from more-or-less pure research that can be translated into practice terminology to more-or-less applied research with enough theoretical basis to be generalizable to additional settings. Research from international sources should be another important resource for macro social workers in the United States.

Policy

Social policy is often determined for reasons that are not empirically based. Shifts in political party control of legislatures and executive offices bring policy changes regardless of the evidence base to support that change. In recent years, social workers have been increasingly marginalized as an anti-government philosophy has been used in political campaigns. Financial resources for human services have decreased relative to need. Spending priorities have changed to support


faith-based organizations rather than secular agencies. Greater calls for accountability by social welfare agencies have been made by the public and its elected officials. The marginalization of social work-supported policy has been possible, in part, because of the lack of strong evidence to support the continuation of various programs and interventions. One of the important reasons to support EBP at all levels of social work is the hope that, at least in some political battles, facts and scientific information will play an important role in decision making. In addition, information to improve advocacy and policy practice is available but is often missing from the social work curriculum.

CONCLUSION

Thyer and Myers (2003) suggest that:

If social work is to continue to enjoy substantial amounts of financial support from local, state, and federal sources, it is imperative that we be able to demonstrate with legitimate data that are credible to others, that we are genuinely capable of helping the clients we serve. (p. 268)

There are crucial elements that are missing in the EBP process. First, clients must be allowed to choose between equally salient interventions. This practice principle adheres to the ethical hallmark of the social work profession related to client self-determination. Second, assessing the fidelity with which an intervention, as designed, is implemented is a crucial element for macro practitioners to analyze. Outcome research studies related to both efficacy and effectiveness need to be conducted in order to build a knowledge base about evidence-based macro practices and interventions.

REFERENCES
Birkland, T. (2005). An introduction to the policy process: Theories, concepts, and models of public policy making. Armonk, NY: M. E. Sharpe.
Bond, G., Williams, J., Evans, L., Salyers, M., Kim, H., & Sharpe, H. (2000). Psychiatric rehabilitation fidelity tool kit. Cambridge, MA: Human Services Research Institute. Retrieved July 31, 2006, from http://www.tecathsri.org/materials.asp.
Carrilio, T. (2006). Looking inside the "black box": A methodology for measuring program implementation and informing social services policy decisions. In R. Hoefer (Ed.), Cutting edge social policy research (pp. 1-17). New York: Haworth Press.


Center for Substance Abuse Treatment. (2006). Treatment, Volume 1: Understanding evidence-based practices for co-occurring disorders. COCE Overview Paper 6. DHHS Publication No. (SMA). Rockville, MD: Substance Abuse and Mental Health Services Administration and Center for Mental Health Services. Posted on the website June 8, 2006. Retrieved October 31, 2006, from http://www.coce.samhsa.gov/cod_resources/PDF/Evidence-basedPractices(OP6).pdf
Centers for Disease Control (CDC). (2006, April). Provisional procedural guidance for community-based organizations. Retrieved October 9, 2006, from http://www.cdc.gov/hiv/topics/prev_prog/AHP/resources/guidelines/pro_guidance.htm.
Cialdini, R. (2000). Influence: Science and practice (4th ed.). Boston, MA: Allyn & Bacon.
Gambrill, E. (1990). Critical thinking in clinical practice: Improving the accuracy of judgments and decisions about clients. San Francisco: Jossey-Bass.
Gambrill, E. (1999). Evidence-based practice: An alternative to authority-based practice. Families in Society, 80, 341-350.
Gerston, L. (1997). Public policy making: Processes and principles. Armonk, NY: M. E. Sharpe.
Gibbs, L., & Gambrill, E. (2002). Evidence-based practice: Counterarguments to objections. Research on Social Work Practice, 12(3), 452-476.
Herman, R., & Renz, D. (1998). Nonprofit organizational effectiveness: Contrasts between especially effective and less effective organizations. Nonprofit Management and Leadership, 9(1), 23-38.
Herman, R., & Renz, D. (1999). Theses on nonprofit organizational effectiveness. Nonprofit and Voluntary Sector Quarterly, 28(2), 107-126.
Hoefer, R. (1994). A good story, well told: Rules for evaluating human services programs. Social Work, 39(2), 233-236.
Hoefer, R. (2000). Making a difference: Human service interest group influence on social welfare program regulations. Journal of Sociology and Social Welfare, 27(3), 21-38.
Hoefer, R. (2001). Highly effective human services interest groups: Seven key practices. Journal of Community Practice, 9(3), 1-13.
Hoefer, R. (2002). Political advocacy in the 1980s: Comparing human services and defense interest groups. Social Policy Journal, 1(1), 99-112.
Hoefer, R. (2005). Altering state policy: Interest group effectiveness among state-level advocacy groups. Social Work, 50(3), 219-227.
Hoefer, R. (2006). Advocacy practice for social justice. Chicago: Lyceum Books.
Hoefer, R., & Ferguson, K. (2007). Moving the levers of power: How advocacy organizations affect the regulation-writing process. Journal of Sociology and Social Welfare, 34(1), 83-108.
Jansson, B. (2002). Becoming an effective policy advocate: From policy practice to social justice (4th ed.). Pacific Grove, CA: Brooks/Cole/Thomson Learning.
McNeece, C. A., & Thyer, B. (2004). Evidence-based practice and social work. Journal of Evidence-Based Social Work, 1(1), 7-24.
National Association of Social Workers (NASW). (1999). Code of ethics. Retrieved November 1, 2006, from http://www.naswdc.org/pubs/code/default.asp.


Netting, F. E., Kettner, P., & McMurtry, S. (2004). Social work macro practice (3rd ed.). Boston: Allyn & Bacon.
Ohmer, M., & Korr, W. (2006). The effectiveness of community-based interventions. Research on Social Work Practice, 16(2), 132-145.
Pressman, J., & Wildavsky, A. (1973). Implementation: How great expectations in Washington are dashed in Oakland. Berkeley, CA: University of California Press.
Rojas, R. (2000). A review of models for measuring organizational effectiveness among for-profit and nonprofit organizations. Nonprofit Management and Leadership, 11(1), 97-104.
Sabatier, P. (Ed.). (1999). Theories of the policy process. Denver, CO: Westview Press.
Sackett, D., Richardson, W., Rosenberg, W., & Haynes, R. (1997). Evidence-based medicine: How to practice and teach EBM. New York: Churchill Livingstone.
Smith, D. H. (1999). The effective grassroots association II: Organizational factors that produce external impact. Nonprofit Management and Leadership, 10(1), 103-116.
Substance Abuse and Mental Health Services Administration (SAMHSA). (2006). SAMHSA model programs. Retrieved October 9, 2006, from http://modelprograms.samhsa.gov/template_cf.cfm?page=model_list.
Thyer, B. (2001). Evidence-based approaches to community practice. In H. Briggs & K. Corcoran (Eds.), Social work practice: Treating common client problems (pp. 54-65). Chicago: Lyceum Books.
Thyer, B., & Myers, L. (2003). Linking assessment to outcome evaluation using single system and group research designs. In C. Jordan & C. Franklin (Eds.), Clinical assessment for social workers: Quantitative and qualitative methods (2nd ed., pp. 385-405). Chicago: Lyceum Books.
Werner, A. (2004). A guide to implementation research. Washington, DC: The Urban Institute.
Ziguras, S., & Stuart, G. (2000). A meta-analysis of the effectiveness of mental health case management over 20 years. Psychiatric Services, 51, 1410-1421.
