

Special issue article

Interventions in schools for children with autism spectrum disorder: Methods and recommendations

Autism
17(3): 254–267
© The Author(s) 2013
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/1362361312470496
aut.sagepub.com

Connie Kasari

University of California, Los Angeles, USA

Tristram Smith

University of Rochester, USA

Corresponding author:
Connie Kasari, Human Development and Psychology, University of California, Los Angeles, 760 Westwood Boulevard, 405 Hilgard Avenue, Los Angeles, CA 90095-1521, USA.
Email: kasari@gseis.ucla.edu

Abstract
Although researchers have identified many promising teaching strategies and intervention programs
for children with autism spectrum disorder, research on implementation of these interventions in
school settings has lagged. Barriers to implementation include incompletely developed interventions,
limited evidence of their utility in promoting long-term and meaningful change, and poor fit with
school environments. To overcome these barriers, interventions need to be detailed in manuals
that identify key components yet allow for flexibility, and studies need to evaluate long-term,
real-life outcomes. Innovative research strategies also may be important, particularly carrying out
research on new interventions in school settings from the outset, conducting partial effectiveness
trials in which study personnel administer interventions in school settings, using community-partnered participatory research approaches, and redesigning interventions in a modular format.
Keywords
autism, implementation science, intervention, school-based intervention, treatment effectiveness
evaluation

Introduction
Much progress has been made toward identifying efficacious intervention programs for preschool-age
children with autism spectrum disorder (ASD) (Interagency Autism Coordinating Committee,
2011), but, even with such intervention, ASD remains a lifelong condition for almost all affected
children (Howlin et al., 2004). Once children with ASD enter elementary school, the school
becomes the primary service provider, delivering large amounts of special education interventions
(Brookman-Frazee et al., 2009). The average cost of educating a student with ASD in the United
States was calculated at US$18,800 per year based on 1999–2000 data (United States Government
Accountability Office, 2005), equivalent to US$24,477 per year in 2012, adjusted for inflation
(Cost of Living Calculator, n.d.). Despite the high service utilization and associated costs of school-based interventions, many (perhaps most) individuals with ASD continue to require extensive supports as adults (Howlin et al., 2004).
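To make the inflation adjustment concrete, the brief sketch below applies the standard price-index ratio formula; the index values are illustrative assumptions chosen to approximate the roughly 30% rise implied by the two figures above, not the exact series used by the cited calculator.

# Minimal sketch of a price-index inflation adjustment (not the cited calculator itself).
# The index values are assumed for illustration; substitute the series actually used.
def adjust_for_inflation(amount: float, index_base: float, index_target: float) -> float:
    """Convert an amount from base-year dollars to target-year dollars."""
    return amount * (index_target / index_base)

cost_1999_2000 = 18_800.0              # reported annual cost per student with ASD
index_base, index_2012 = 100.0, 130.2  # assumed values implying a ~30% price rise
print(f"US${adjust_for_inflation(cost_1999_2000, index_base, index_2012):,.0f} in 2012 dollars")
# Prints roughly US$24,478, close to the US$24,477 figure cited above.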
These less than optimal adult outcomes imply that interventions children experience in school are
not effective enough to continue the trajectories launched in early intervention programs. Indeed,
parents and researchers lament the lack of evidence-based interventions in schools (Agency for
Healthcare Research and Quality (AHRQ), 2011; Interactive Autism Network, 2009). What seems
apparent is that research on efficacious interventions currently runs orthogonal to practices in the
schools. Thus, evidence-based, university-sponsored intervention research may not be evident in the
child's school or may be modified to the extent that it does not resemble the original intervention.
Below we describe some of researchers' main efforts to date to establish a strong evidence
base of effective interventions for children with ASD in school settings (using both single-case and
group designs), barriers to programmatic research in schools, and recommendations for next steps
to move effective interventions into school settings. In considering recommendations, we recognize there can be a disconnect between the evidence that experts consider as the 'gold standard'
for considering an intervention as evidence based (e.g. AHRQ, 2011) and criteria that individuals
in community organizations (e.g. schools) might apply to the same evidence. As noted by
Damschroder et al. (2009), 'It is the latter perceptions, socially constructed in the local setting,
which will affect implementation effectiveness'. In organizing our recommendations, we borrow
the structure from the Consolidated Framework for Advancing Implementation Science (CFIR)
(Damschroder et al., 2009). Recommendations are considered within three domains: intervention
characteristics, context, and process.

Current state of research on interventions in school settings

Single-subject experimental designs: most common approach to the evidence base
Research on interventions for school-age children with ASD consists largely of studies with single-subject experimental designs (SSEDs) (Machalicek et al., 2008). In SSEDs, each study participant
serves as his or her own control. Usually, the research strategy involves comparing a baseline phase
in which the individual receives no intervention to one or more intervention phases, with data collected continuously through all phases (Bailey and Burch, 2002). The goal is to identify interventions that produce an immediate change in participants' behavior. SSED studies have allowed
investigators to create and refine a wide array of procedures that can be used to teach specific skills
(National Autism Center (NAC), 2009). Examples of such procedures include reinforcement systems and schedules presented in a series of pictures.
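As an illustration of the baseline-versus-intervention logic of an SSED, the sketch below scores hypothetical session data with the percentage of nonoverlapping data (PND), a common single-case summary metric; the data and the choice of metric are assumptions for illustration and are not prescribed by the studies cited here.

# Illustrative sketch of the baseline-versus-intervention comparison in a simple AB design.
# The data and the summary metric (percentage of nonoverlapping data, PND) are assumptions.

def pnd(baseline: list[float], intervention: list[float]) -> float:
    """Percentage of intervention-phase points exceeding the highest baseline point."""
    ceiling = max(baseline)
    above = sum(1 for x in intervention if x > ceiling)
    return 100.0 * above / len(intervention)

# Hypothetical session-by-session counts of a target behavior (e.g. peer initiations).
baseline_phase = [1, 0, 2, 1, 1]
intervention_phase = [3, 4, 2, 5, 6, 5]
print(f"PND = {pnd(baseline_phase, intervention_phase):.0f}%")  # 83%: 5 of 6 points exceed the baseline maximum of 2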

Group designs: range of contexts, potential for moderator and mediator analyses
Group studies of ASD interventions, including randomized controlled trials (RCTs), are becoming much more common. Group designs have several potential advantages. Although SSED
studies are valuable in identifying strategies that can change behavior, group studies such as
RCTs are better suited to testing long-term outcomes; group studies enable researchers to compare individuals with ASD who complete an intervention package to similar individuals who
receive a different intervention (or no intervention) and follow both sets of individuals over time
(AHRQ, 2011). Group designs of sufficient size also allow for systematically testing why an
intervention may provide benefit by identifying mechanisms underlying the intervention (mediators) and reasons why some children respond or do not respond to an intervention (moderators).
While SSED studies can also begin to identify moderators and mediators (e.g. through systematic replications across different conditions), group studies are especially important for this purpose because they involve larger numbers of participants and longer term evaluations of outcome
(Smith et al., 2007).
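To illustrate what a moderator analysis looks like in practice, the sketch below fits an ordinary least squares model with a treatment-by-moderator interaction to simulated data; the variable names, effect sizes, and sample size are invented for illustration only.

# Illustrative sketch of a moderator analysis: does baseline language level change the
# treatment effect? Simulated data and variable names are assumptions, not study data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n)          # 1 = intervention, 0 = comparison condition
baseline_language = rng.normal(0, 1, n)    # standardized candidate moderator
# Simulate an outcome in which the treatment effect grows with baseline language level.
outcome = (0.3 * treatment + 0.2 * baseline_language
           + 0.4 * treatment * baseline_language + rng.normal(0, 1, n))

# Ordinary least squares with a treatment-by-moderator interaction term.
X = np.column_stack([np.ones(n), treatment, baseline_language, treatment * baseline_language])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("intercept, treatment, moderator, interaction:", np.round(coef, 2))
# A clearly nonzero interaction coefficient is the statistical signature of moderation.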

Efforts to solidify the research base


Although interventions using both SSED and group designs have shown efficacy with school-aged
children with ASD, there are debates in the literature as to what type of research constitutes legitimate evidence of efficacy (Smith et al., 2007). Reviews of intervention research often exclude one
type of research over another. Thus, reviews of specific teaching techniques (use of time delay or
peer-mediated strategies) often include only SSED studies (Odom et al., 2003), whereas reviews of
intervention packages (such as early intensive behavioral intervention or social skills groups)
include only group designs (e.g. AHRQ, 2011). However, some reviews have been directed toward
integrating findings from a variety of studies with the goal of making evidence-based treatment
recommendations as described below.
In one of the first efforts to synthesize findings on school-based interventions, Iovannone et al.
(2003) identified a set of six best practices revealed by SSED research: (a) individualized services
and supports (making use of the particular interests and learning styles of each child with ASD to
increase engagement in academic and social activities through interventions such as reinforcement
systems and incorporation of preferred activities into instruction), (b) systematic instruction
(selecting goals and instructional procedures based on an assessment of each student, monitoring
progress, and troubleshooting as needed), (c) comprehensible and structured environments (e.g.
using visual schedules to facilitate transitions and organized work spaces to facilitate task completion), (d) specific curriculum content to address the defining features of ASD, (e) functional
approach to problem behavior (assessing the function or purpose of the behavior and selecting
intervention strategies based on this assessment), and (f) family involvement to promote consistency between home and school, take advantage of the family's knowledge of the child with ASD,
and overcome difficulties that children with ASD are likely to have in conveying information from
one setting to another. Although the foregoing guiding principles are useful, they are too
general to support specific recommendations for research or practice.
An important next step was a report by the NAC (2009), which conducted a larger and more
systematic review of both SSED and group studies than did Iovannone et al. (2003) and which
identified a more specific set of best practices such as the use of visual schedules and self-management techniques. Finally, the National Professional Development Center (NPDC; Odom et al.,
2012) conducted a more recent literature review and identified a larger set of best practices than
did Iovannone et al. (2003) and the NAC (2009). They also made an important advance toward
operationalizing these practices by compiling step-by-step guides for setting up and implementing each practice, developing written materials and videos to teach educators about the practices,
and offering some general comments about the kinds of problems that each practice can address
(Odom et al., 2012).


Challenges facing adoption of current evidence-based interventions and recommendations
While the efforts of the NAC and the NPDC are important advances, they may fall short in increasing the use of evidence-based practices in schools for a number of reasons that are consistent with
the theoretical framework of implementation science as detailed by Damschroder et al. (2009). For
our purposes, we will focus on characteristics of the interventions themselves, the context in which
interventions are adopted, and the process of implementation. Below we address these particular
challenges and offer recommendations for increasing adoption of efficacious interventions in
schools.

Characteristics of the intervention


Identification of key components and acceptable variance. In moving an intervention tested in research
contexts into a practice setting, such as a school, the intervention itself must be considered. Often
interventions are considered to have key ingredients or core components. These key components
are the essential (and indispensable) aspects of the intervention that must be present for the intervention to work. By way of illustration, the NAC and NPDC have identified some key elements
of interventions, but these elements lack standardization. Thus, packages of procedures differ from
study to study. For example, two evidence-based practices identified by the NAC (2009) are 'antecedent packages', which involve modifying a range of environmental events that may precede a
problem behavior, and 'behavior packages', which involve a 'complex combination' (NAC, 2009:
45) of procedures intended to reduce a problem behavior and increase an alternative behavior.
Such packages are too variable for researchers to replicate in an RCT or for educators to use in their
classrooms as many details are missing that are necessary to guide implementation. Specifically,
there are no criteria for determining what package to use, how many elements to include, which
elements should always be included in the package and which can vary across individuals, in what
sequence to introduce them, or what recourse is available if the initial package is insufficient.
Therefore, it is difficult to pinpoint the active ingredient or critical elements of the intervention that
must be present.
However, it is probably not enough to identify an active ingredient without also identifying the
ways in which implementation of the ingredient can vary while maintaining its effectiveness. One
reason this may be difficult with most interventions identified by the NAC and NPDC is that the
active ingredient is often a strategy or a 'how to teach' method (e.g. time delay) rather than content-specific elements (e.g. a point to request). Thus, curricular areas (the content of interventions) are often left vague. Because much more attention is given to the approach used to teach an
intervention (the 'how') or to the dose of the intervention (e.g. hours per week) than to what is taught
(Kasari, 2002), the core areas of impairment may not be addressed for children with ASD. An
example of an area that is often neglected is the use of prelinguistic gestures such as pointing to
share and showing an object to someone else. These gestures represent joint attention skills; they
are commonly deficient in children with autism and have been shown to be very important to later
language acquisition (Mundy and Gomes, 1998).
Another reason for pinpointing both the active ingredient and the acceptable implementation
variance is that interventions often cannot fully address one of the most puzzling aspects of autism,
that of the inconsistency children demonstrate across and within domains of development. Frith
and Happé (1994) described this pattern of strengths and weaknesses in autism as 'fine cuts along
a hidden seam' (p. 116). Children can be quite good, for example, at showing 'active sociability'
but quite poor at 'interactive sociability' (Frith and Happé, 1994: 118), perhaps reflecting their
difficulty in determining when and how to apply skills that they know in real-life situations.
Targeting weaknesses that are subtly embedded within strengths requires defining intervention
ingredients precisely yet allowing for flexible implementation.
Importance of intervention manuals. The establishment of precise but flexible interventions likely
requires developing manuals that specify the key components of the intervention. Manuals can
provide strategies for deploying techniques effectively, efficiently, and credibly, even when confronted with real-world constraints such as resource limitations, variations in skill level and
enthusiasm of interventionists, and competition from alternative programs. Manuals standardize
interventions by giving step-by-step instructions for each stage of the intervention, along with
problems that the user is likely to encounter and possible solutions to these problems (Hibbs et al.,
1997). They also define the population or problem for whom the intervention is intended and the
qualifications that practitioners who implement the manual should possess (Hibbs et al., 1997). At
the same time, they allow 'constrained flexibility' (MacMahon, 2004) by delineating a limited set
of acceptable variations. For example, the intervention may be divided into modules (a small, targeted set of strategies aimed at a particular problem) with decision rules or assessment procedures
for selecting which modules to implement and under what circumstances. The manual may describe
different ways for practitioners to deliver an intervention (e.g. procedures for implementing the
intervention one-to-one or in groups), incorporate the individual's own interests into activities, and
collaborate with the individual to set goals (cf. Kendall and Chu, 2000; Kendall et al., 1998).
Importantly, practitioners often view manualized interventions as consisting of a regimen that is
inflexible and difficult to implement; thus, a manual must highlight opportunities for flexibility or
individualization in implementation for practitioners to buy into and adopt an evidence-based practice (Borntrager et al., 2009).
Meaningful change and significant outcomes. Related to the active ingredients of an intervention are
the outcomes that are expected to change as a result of the intervention. To this end, unless there is
movement on goals that are significant for the child and family, an intervention is unlikely to be
sustainable. For example, many SSED studies have demonstrated increases in child responding
that is directly prompted by an adult, not initiated by the child (see Kasari and Patterson, 2013), yet
this improvement may not be viewed as significant because it is the child's initiation of behavior
that is likely to be noticed by others and necessary for active engagement in school activities. Further, improvement on a treatment goal may be so trivial as to be missed by others or may occur in
an intervention target that is a strength rather than weakness for children with ASD. For example,
most research on reading focuses on sight-reading (Browder, 2006), but this is what children with
ASD tend to be good at even without specialized instruction and is only one of many skills required
for literacy. Research on peer interactions often focuses on increasing the rate of initiations with
peers using a verbal script, but ASD involves a qualitative (not necessarily quantitative) difference
in engagement behaviors with peers. Improving the quality of peer interactions (e.g. sustained
conversation, shared affect and interests) may be more important for making friends and belonging
to a larger social network (Kasari et al., 2013). Thus, an intervention must be potent enough to
move children on goals that are meaningful and significant.
While SSED studies are often limited by outcome measures that assess behaviors of uncertain
clinical relevance, the issue for group studies is that the measures often consist of ratings from
informants who have limited firsthand knowledge of the behaviors of interest. Parents may be
unable to report on their children's social behavior at school, as they are not present during recess
with the child. Teachers may be in a similar situation if playground assistants, rather than the teachers
themselves, are on the yard. The child may have insufficient communication skills to report on events at
school, leaving parents unsure of how the child is doing socially with peers or how happy the child
is generally. One recommendation is to collect multiple measures for behaviors that may be fleeting or inconsistent. Thus, children's behavior on the playground may best be observed, as well as
reported on, by classmates and by playground assistants. Because of the inconsistency of child
behaviors, repeated observations may be required (Kasari et al., 2012). These observations can
provide information on whether children are indeed demonstrating improved skills in the context
in which improvements are most needed.
Another recommendation is to develop measures that have relevance for children in their everyday lives and that can easily be gathered in authentic environments. For example, social network
measures provide useful whole-class information about social behaviors. Social network measures
are simple to collect; children are asked to identify who 'hangs out' with whom at school, or they
are asked to sort names or pictures of classmates into groups of children whom they like to play
with. The resulting data provide a wide range of information on children's peer interactions, including the size and salience of the child's social network, the number of nominated friends, the number of
received friend nominations, and whether friendship nominations are reciprocated. This information can be particularly useful in identifying children in need of a social intervention as well as
peers who might be good peer models (Locke et al., 2012).
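The sketch below illustrates the kind of indices that can be computed from peer-nomination data of this sort: nominations given, nominations received, and reciprocated friendships. The roster and nominations are invented, and published social network measures involve additional scoring (e.g. for network salience) not shown here.

# Illustrative indices from hypothetical peer-nomination data (invented class roster).
nominations = {
    "Ava": {"Ben", "Cam"},
    "Ben": {"Ava"},
    "Cam": {"Ben", "Dee"},
    "Dee": {"Cam"},
    "Eli": set(),          # a child who nominates no one
}

def received(child: str) -> int:
    """Number of classmates who named this child."""
    return sum(child in named for named in nominations.values())

def reciprocated(child: str) -> int:
    """Number of the child's nominations that are returned (mutual friendships)."""
    return sum(child in nominations.get(friend, set()) for friend in nominations[child])

for child in nominations:
    print(f"{child}: nominates {len(nominations[child])}, "
          f"received {received(child)}, reciprocated {reciprocated(child)}")
# A child with no received or reciprocated nominations may be a candidate for a social intervention.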
Focus on durable effects. SSED studies emphasize careful evaluation of an intervention with a small
number of participants. As such, they are a useful way to identify and refine promising intervention
procedures and provide the foundation for describing the procedures in detail in a manual. However, they are limited in determining the efficacy or effectiveness of interventions (Smith et al.,
2007) because they provide little information on whether intervention procedures improve long-term outcomes for children with ASD. For example, although SSED studies have shown that interventions such as visual schedules often lead to rapid improvement in a child's transition from one
activity to the next, it is unclear whether, over the long term, programs that implement such procedures enable individuals with ASD to increase their independence, become gainfully employed,
form a network of social relationships, join in community activities, or enhance their general quality of life. Single-subject studies are poorly suited for exploring these kinds of long-term, global
outcomes because they are designed to detect changes that occur as soon as an intervention begins.
Indeed, most SSED studies aim to analyze the immediate effects of a specific intervention technique on a particular target behavior. Thus, SSED studies may be an important first step in the
development of an intervention, but group studies are likely needed to determine the wide adoption
and the long-term effects of an intervention.
While group studies have potential to test long-term effects of an intervention, there are far too
many examples of group studies with no follow-up data. They can therefore be as limited as single-subject designs in demonstrating the stability of intervention effects over time or predicting long-term outcomes. Despite the current state of group-based intervention studies, they have more
potential for testing long-term effects for larger numbers of participants.

Contextual factors
Scarcity of school-based studies. Despite the potential of both SSED and group designs to inform
interventions for school-age children and youth with ASD across a range of academic and
social targets, most intervention studies are not actually carried out in school settings. Rather,
interventions are carried out in clinics or laboratories. An interesting paradox is that often the main
goal of an intervention is to affect the child's behavior at school despite the fact that the intervention is carried out in another context. This paradox is particularly evident for the large category of
social skills interventions. For example, both the University of California, Los Angeles (UCLA)
Friendship Training and the Program for the Education and Enrichment of Relational Skills
(PEERS) teach school-aged children social skills (social etiquette) in a clinical setting with parents as active participants in the treatment (Frankel et al., 2010; Laugeson et al., 2012). Posttests
usually favor the experimental group on measures of children's knowledge of social skills, demonstration of skills within the group, and social skill improvement reported by parents. A limitation
of these studies (and many other clinic-led social skills interventions) is that the child's social skills
are rarely observed in the environments where the child is expected to execute his or her skills, for
example, school (but see Frankel et al., 2011).
A barrier to implementing these social skills interventions is that they require highly trained
staff who are available to devote large amounts of time to deliver the intervention. One solution has
been to create an artificial but more natural environment in which to carry out an intervention
study. Examples are the analog classroom and summer camp (e.g. Corbett et al., 2011; Lopata et
al., 2008). An analog classroom or camp may be created specifically for a study, often located in a
clinic or laboratory. Such settings more closely approximate children's everyday environments
than does a clinic-based social skills group. Still, a classroom contains teachers and children who
enroll in a study and thus differs from the child's real-life classroom experience. Similarly, a summer camp may have an overbalance of certain types of children (e.g. a camp for children with ASD
so that only other ASD children are available for interactions), unlike a child's real-life experience,
for example, an inclusive school program. In both cases, the children encountered could be completely different from the children in their real-life classrooms. Thus, generalization of newly
learned skills to actual environments may still be challenging for children with ASD.
Moving research into schools. Interventions in schools can take place in many locations. They can be
in special education or inclusive classrooms, on the playground, or in after-school programs. Context matters for intervention and for measuring outcomes. For example, if the goal is to help the
child transition between classes during the school day, then teaching the child away from the
school context may be less effective. Similarly, if the focus is on social skills and the child's challenges are mostly on the playground, then the intervention may demonstrate greater gains for
children if the treatment takes place on the playground.
One factor that hinders implementing interventions into school settings is the prevailing beliefs
of researchers themselves. Researchers often follow a tradition of moving interventions into practice settings by first testing the intervention in a tightly controlled environment (as in a single-subject design, carefully measuring implementation and child response, or in a randomized trial
conducted in the clinic). The belief is that the intervention can be adequately tested only by carefully managing all elements of the study protocol. Once efficacy is established, then researchers
may move into an effectiveness trial in a practice setting while still testing the fidelity of implementation, with training and ongoing supervision by the research team. The foregoing process can
take years before an intervention is deemed ready to be deployed into the community.
Yet there is debate about the usefulness of this sequence. Some researchers believe this is the
only way to adequately test an intervention, while others believe that forgoing some steps and testing an intervention in an authentic environment from the beginning is more efficient (Weisz, 2000).
A potential solution is to use a research strategy that takes into account context and is a hybrid
between efficacy and effectiveness to test an intervention. These partial effectiveness trials often
involve a research team carrying out the intervention in the school without transferring the intervention to the school staff (e.g. Kasari et al., 2012; Weisz et al., 2012). In this way, the intervention
is directly tested in the context for which it was designed, and its efficacy can be evaluated. For some schools, this is a preferable choice since school staff may be unwilling to
change their practices until efficacy is demonstrated in the context of the school. One example of
this approach is a study by Kasari et al. (2012) in which an adult either worked one-to-one with children
with ASD, focusing on individualized and targeted playground challenges, or worked
through peers from the target child's classroom. The intervention took place at school and measured outcomes of engagement with peers on the playground and the connections of children with
ASD to other children in their class. Outcomes were measured primarily by research staff, thus not requiring extra time and effort from school staff.
Despite some advantages of partial effectiveness trials, they can be insufficient in sustaining the
intervention once the intervention team leaves. Full effectiveness trials require that the intervention
be carried out by practitioners, although the researchers may still have designed the
trial, trained the practitioners, and collected fidelity-of-implementation data (Howlin et al.,
2007). Even with effectiveness trials, sustainability of the intervention can be poor once the
researchers leave, often because of a mismatch between the intervention and the school context, as
discussed in the next section.

Implementation process
Alignment between research and school settings. As noted earlier, researchers and practitioners can
have discordant views about the same evidence, what is important, and the process needed to get
there (Damschroder et al., 2009). For instance, researchers often study interventions in a series of
systematic steps before deploying or widely disseminating interventions into community settings. This progression may begin with development and feasibility testing of an intervention using
single-subject designs and then proceed to an efficacy trial under highly controlled conditions
(Smith et al., 2007). An example is a social skills group that is carried out in a university setting
with strict inclusion and exclusion criteria. Only after efficacy has been established under these
controlled conditions is the intervention tested in natural contexts, such as in school, perhaps years
after initial efficacy testing. Thus, researchers often go through a long, painstaking series of studies
before they ever enter the school environment and start working with teachers. In so doing, they
may become committed to an intervention that is not well matched to the needs of children with
ASD and their teachers in the school.
As an example of a gap between research and school, many interventions are conducted in a
one-to-one therapy model (customary in an outpatient clinic or in homes with young children in the
traditional Early Intensive Behavioral Intervention (EIBI) model), rather than the whole group
instruction that is common in classrooms. These interventions will therefore need significant modification to be applicable in classroom settings. Without these modifications, and without input
from the teachers themselves, teacher buy-in may be low, reducing the success of the intervention.
Another issue can be the divergent priorities of researchers, who tend to be mainly interested in
alleviating core symptoms of ASD (especially in social communication), and teachers, whose primary responsibility is to teach academics. Even finding a way to schedule time to work on social
communication can be tricky with everything else in the schedule. Goals of researchers can also
clash with school staff's views of the needs of their children. While researchers want to see the child with
ASD engaged with his or her peers on the playground and working on friendship development and
social skills, teaching staff may see the need for the child to have a break and are not bothered by
the child's running the periphery of the yard all alone or self-isolating in the library. One solution
for interpreting a child's behavior is to ask the child about his or her friends and what he or she likes
to do on the yard. Sometimes gathering more information can lead teaching staff and researchers
to find a compromise, such as giving the child a 2-min break to run before the staff help the child
join a group activity with other children.
The interventions may also have limited applicability to the actual students in a given classroom. Weisz (2000) commented that despite increasing numbers of intervention studies, only a
fraction of the children identified with any specific condition have been involved in a controlled
research experiment. This observation certainly holds true for children with ASD, for whom few
RCTs exist, and the large majority of studies are of small size (20 to 30 students per condition).
Another issue is that research studies tend to include homogeneous samples and often exclude
children with ASD who test as lower functioning, who are nonverbal or non-English speaking, or who
have multiple disabilities. Therefore, the large majority of children served in public schools are not
represented in studies typifying the evidence base.
School personnel and also families may have concerns about the logistics of carrying out the
research. They may not be keen on designs that involve randomly assigning students to intervention or control conditions, given the legal mandate and ethical obligation for schools to provide
appropriate education for all students. In addition, they may worry about the possibility of suddenly being asked to learn and administer a new intervention, on top of the often-overwhelming
responsibilities of overseeing the day-to-day operations of classrooms and monitoring the progress
of each student.
Two potential randomized controlled research designs may be easier for school staff to accept.
One is to use a wait-list control group, thus ensuring that all children receive intervention, one
group just later than the other. Wait-list designs can be useful in adequately staffing interventions
and in comparing the intervention to practice as usual. A second design is to compare two potentially efficacious interventions, thus providing intervention to all children at the same time. Head-to-head comparisons are more stringent tests of the potency of a given intervention than merely
comparing an intervention to practice as usual. Indeed, we have few examples of these head-to-head comparisons. In the school-aged population, interventions have been compared to practice as
usual (Gordon et al., 2011; Strain and Bovey, 2011). Only one recent RCT has compared different
interventions head to head (Kasari et al., 2012). More confidence in the efficacy of an intervention
is gained when the outcomes are superior for one treatment versus another.
Research strategies to facilitate implementation. Implementation science involves a focus on the feasibility, acceptability, and adaptation of interventions for the setting in which they will take place.
Although the use of implementation science in ASD research is relatively new (Dingfelder and
Mandell, 2011), it is well established in intervention studies on other mental or physical health
issues (McHugh and Barlow, 2012). Here, we suggest two strategies derived from implementation
science that may be especially relevant to ASD research. First, to streamline the process of developing interventions and ensure alignment with the needs of individuals with ASD and professionals in community settings, some researchers now urge conducting efficacy trials in the natural
environment from the outset (Weisz, 2000). Given that, as we have previously noted, schools are
the primary providers for most children with ASD, this approach holds considerable promise for
creating interventions that can be deployed effectively in school settings. Second, to increase the
sustainability of interventions after the researchers leave, some researchers call for shared decision-making approaches wherein the investigators and schools collaborate to carry out the research in the
first place. These approaches, which are collectively called community-partnered participatory

Kasari and Smith

263

research (CPPR), follow a set of guiding principles based on coequal partnership in all phases of
research development, implementation, and dissemination (Wells and Jones, 2009). As such, funding of the research is also shared between community partners and university researchers. CPPR
initiatives have generally been conducted in partnerships with community health centers to address
mental or physical health issues. There are few if any examples of CPPR for behavioral interventions for ASD (but see Stahmer and Aarons, 2009). Thus, the potential of this research strategy is
largely untapped.
A caveat is that CPPR may be difficult to execute given varying beliefs of the researchers and
practitioners involved. For example, researchers may believe this approach is not appropriate for
initial efficacy studies, often citing the need for fidelity to the developed treatment manual and the
worry that too many elements may be lost when moving directly into a practice setting. Additionally,
because CPPR emphasizes developing an intervention that is endorsed by both researchers and
community providers, outcomes of children may be less of a focus than the implementation process itself. This emphasis may be misplaced when interventions are implemented without strong
evidence of effectiveness, or when the intervention is implemented poorly. Using CPPR to implement a well-researched, evidence-based intervention with known active ingredients provides
greater confidence that the intervention will be effective in achieving meaningful child outcomes.
Indeed, it is the knowledge of the active ingredients of any particular intervention that allows
implementation to focus on those active ingredients while allowing other aspects of a treatment
approach to vary, in order to fit well within a practice setting. Sustainability and success of the
treatment are likely greatest when the fit is good within the practice setting. To this end, in addition
to a strong evidence base, the intervention must have both community partner buy-in to the treatment and input into its design and implementation.
Redesign of interventions. Another roadblock to implementation of interventions in school settings
is that while newly developed interventions have proliferated, most therapists and teachers rarely
adhere to only one model or approach to intervention. Thus, a special education teacher may use a
variety of approaches, including discrete trial training and more incidental teaching strategies for
academic skills, didactic interventions as well as naturalistic strategies for social skills, and programs for whole class instruction of all children as well as specialized remedial programs. With
their propensity to pick and choose, teachers may be reluctant to give up their current practice and
adopt an entire new approach.
Even if teachers are willing, deploying efficacious interventions into community settings such
as schools poses many challenges. The interventions are complicated to deliver, intensive, and
potentially expensive in terms of time and materials needed for implementation. Moreover, schools
are much more variable than research sites because of the wide range of children served, diverse
backgrounds and theoretical orientations of educators, disparities in resources, and differing district and school rules that must be followed (Dingfelder and Mandell, 2011). Fidelity of implementation of an intervention can be difficult to achieve under these conditions. For example, Strain and
Bovey (2011) found that teachers required 2+ years to master the LEAP (Learning Experiences: An Alternative Program for Preschoolers and Parents) model in their classrooms. The study by Mandell et
al. (this volume) revealed wide variability in fidelity of intervention implementation. Future
studies need to tackle the issue of fidelity of implementation and consider innovative means of
improving fidelity and supporting long-term sustainability of interventions, such as web-based platforms or
other virtual environments for researcher-practitioner consultation.
Comprehensive treatments may be particularly problematic to deploy and evaluate in school
settings. Despite different theoretical orientations of teachers, many teachers share more
similarities than differences in what they do day to day. For example, most teachers of preschool
and primary-aged children begin the day with a morning meeting followed by a mix of instructional formats (small and whole group, along with independent seatwork). Teachers may also be
bound by federal and statewide standards that have commonalities across programs and classrooms. Perhaps due to these similarities, different models of intervention when applied in realworld settings may result in similar outcomes on children as found in a recent study comparing the
(Treatment and Education of Autistic and related Communication-handicapped Children) TEACCH
and LEAP models of early intervention to standard practice preschool classrooms (Boyd et al.,
2012). Thus, despite different theoretical orientations in the delivery of early intervention, neither
branded model of early intervention was superior over standard, eclectic preschool practice.
Another possible reason why comprehensive interventions are more alike than different from
each other and fail to produce differential outcomes on children may be that different interventions
converge over time (Kasari, 2002). For example, intervention programs that emphasize discrete
trial training include more incidental teaching than they did in the past and increasingly target goals
aimed at core deficits of ASD such as theory of mind (Gould et al., 2011). Another reason is that
comprehensive interventions are often composed of many component parts, not all of which have
been determined to be efficacious or active ingredients of the intervention. Thus, the active ingredient of a particular intervention may be buried in components that are not necessary, actually diluting the potency of the intervention.
Some have criticized the standard practice of mixing approaches and models as 'eclectic', with
the implication that eclectic approaches are ineffective (e.g. Dillenburger, 2011). However,
an eclectic approach that is informed rather than random is likely to have the greatest effect on
outcomes. Informed eclecticism can offer an individualized, modular approach whose sum is far
greater than the parts and an approach that has shown superior individual outcomes compared to
application of only one model (Weisz et al., 2012). Indeed, we have much to learn from the well-developed intervention research programs for other childhood disorders. For example, examining
child outcomes for children with anxiety, depression, or conduct problems, Weisz et al. (2012)
found that using a modular treatment approach that integrated components from three standard
treatment manuals was superior to standard manualized treatment and community practice as
usual.
Modular treatment approaches have potential for easier implementation and greater sustainability for several reasons. One is the recognition of the challenges involved in learning and implementing a comprehensive treatment especially when the treatment may not specifically fit the
needs of the child. Many children on the autism spectrum present with complicated needs, and
may require several different types of interventions or a systematic sequence of interventions.
Modular approaches fulfill a need for flexible, personalized, and individually tailored treatment
strategies. There is also evidence that modular approaches produce superior results on child outcomes compared to standard manualized treatments or treatment as usual (Kasari et al., 2006;
Weisz et al., 2012).

Conclusion
Intervention studies conducted in schools for children with ASD are all too infrequent. While
SSED studies make up a large part of this literature and have contributed valuable information,
they have limitations in moving the field forward as the only type of intervention research. Group
intervention studies, however, are often conducted off school campuses and can have the same
limitations as SSED studies. One thing is certain: the time lag is too long between the testing of
efficacious interventions and their placement into schools. As recommended by Weisz (2000),
interventions should be tested from the beginning in the context in which they are to be used, without the many controlled iterations researchers prefer to conduct away from this context. This approach
confers many advantages, including narrowing the research-to-practice gap
and reducing the differences between traditional research participants and the majority of children
with autism. Research that increases diversity in research samples, addresses the daily challenges
children face in school, and assists school staff in implementing effective and personalized interventions should lead to better outcomes for children with autism.
Acknowledgements
We appreciate the comments and assistance of Stephanie Patterson and Caitlin McCracken.

Funding
This work was supported by the US Department of Health and Human Services: Autism Intervention
Research Network for Behavioral Health (AIR-B) (UA3MC11055).

References
Agency for Healthcare Research and Quality (AHRQ) (2011) Therapies for children with autism spectrum
disorders: executive summary. Available at: http://effectivehealthcare.ahrq.gov/ehc/products/106/651/
Autism_Disorder_exec-summ.pdf
Bailey JB and Burch MR (2002) Research Methods in Applied Behavior Analysis. Thousand Oaks, CA:
SAGE.
Borntrager CF, Chorpita BF, Higa-McMillan C, et al. (2009) Provider attitudes toward evidence-based practices: are the concerns with the evidence or with the manuals? Psychiatric Services 60(5): 677–681.
Boyd B, Hume K, Alessandri M, et al. (2012) Efficacy of the LEAP and TEACCH comprehensive treatment
models for preschoolers with ASD. In: Presentation at the international conference for autism research,
Toronto, ON, Canada, 17 May 2012.
Brookman-Frazee L, Baker-Ericzén M, Stahmer A, et al. (2009) Involvement of youths with autism spectrum
disorders or intellectual disabilities in multiple public service systems. Journal of Mental Health Research
in Intellectual Disabilities 2(3): 201–219.
Browder D, Wakeman S, Spooner F, et al. (2006) Research on reading instruction for individuals with significant cognitive disabilities. Exceptional Children 72: 392–408.
Corbett BA, Gunther J, Comins D, et al. (2011) Theatre as therapy for children with autism. Journal of Autism
and Developmental Disorders 41: 505–511.
Cost of Living Calculator (n.d.). American Institute for Economic Research. Retrieved from http://www.aier.
org/research/worksheets-and-tools/cost-of-living-calculator.
Damschroder L J, Aron DC, Keith RE, et al. (2009) Fostering implementation of health services research
findings into practice: a consolidated framework for advancing implementation science. Implementation
Science 4: 50.
Dillenburger K (2011) The emperor's new clothes: eclecticism in autism treatment. Research in Autism
Spectrum Disorders 5: 1111128.
Dingfelder HE and Mandell DS (2011) Bridging the research-to-practice gap in autism intervention: an application of diffusion of innovation theory. Journal of Autism and Developmental Disorders 41(5): 597–609.
Frankel FD, Gorospe CM, Chang Y, et al. (2011) Mothers' reports of play dates and observation of school
playground behavior of children having high-functioning autism spectrum disorders. Journal of Child
Psychology and Psychiatry 52(5): 571–579.
Frankel F, Myatt R, Sugar C, et al. (2010) A randomized controlled study of parent-assisted children's friendship training with children having autism spectrum disorders. Journal of Autism and Developmental
Disorders 40: 827–842.
Frith U and Happé F (1994) Autism: Beyond Theory of Mind. Cambridge, MA: The MIT Press, pp. 13–30.


Gordon K, Pasco G, McElduff F, et al. (2011) A communication-based intervention for nonverbal children
with autism: what changes? Who benefits? Journal of Consulting and Clinical Psychology 79: 447–457.
Gould E, Tarbox J, O'Hora D, et al. (2011) Teaching children with autism a basic component of perspective-taking. Behavioral Interventions 26: 50–66.
Hibbs ED, Clarke G, Hechtman L, et al. (1997) Manual development for the treatment of child and adolescent
disorders. Psychopharmacology Bulletin 33(4): 619–629.
Howlin P, Goode S, Hutton J, et al. (2004) Adult outcome in children with autism. Journal of Child Psychology
and Psychiatry 45: 212–229.
Howlin P, Gordon RK, Pasco G, et al. (2007) The effectiveness of Picture Exchange Communication System
(PECS) training for teachers of children with autism: a pragmatic, group randomised controlled trial.
Journal of Child Psychology and Psychiatry 48: 473–481.
Interactive Autism Network (2009) Research report #9: family stress, part 2: work life and finances. Available
at: http://www.autismspeaks.org/news/news-item/ian-research-report-family-stress-%E2%80%94-part-2
Interagency Autism Coordinating Committee (2011) 2011 IACC strategic plan for autism spectrum disorder
research. Department of Health and Human Services Interagency Autism Coordinating Committee website. Available at: http://iacc.hhs.gov/strategic-plan/2011/index.shtml (accessed January 2011).
Iovannone R, Dunlap G, Huber H, et al. (2003) Effective educational practices for students with autism spectrum disorders. Focus on Autism and Other Developmental Disabilities 18(3): 150–165.
Kasari C (2002) Assessing change in early intervention programs for children with autism. Journal of Autism
and Developmental Disorders 32: 447–461.
Kasari C, Freeman S and Paparella T (2006) Joint attention and symbolic play in young children with autism:
a randomized controlled intervention study. Journal of Child Psychology and Psychiatry 47: 611–620.
Kasari C, Locke J, Ishijima E, et al. (2013) Peer acceptance, social engagement and friendship: critical goals
for children with autism spectrum disorders. In: Gerhardt P (ed.) Social Skills and Adaptive Behavior in
Learners with Autism Spectrum Disorders: Current Status and Future Directions. Brooks Publishing.
Kasari C and Patterson S (2013) Interventions addressing social impairment in autism. Current Psychiatry
Reports 14: 713–725.
Kasari C, Rotheram-Fuller E, Locke J, et al. (2012) Making the connection: randomized controlled trial of
social skills at school for children with autism spectrum disorders. Journal of Child Psychology and
Psychiatry 53: 431–439.
Kendall PC and Chu BC (2000) Retrospective self-reports of therapist flexibility in a manual-based treatment
for youths with anxiety disorders. Journal of Clinical Child Psychology 29(2): 209–220.
Kendall PC, Chu B, Gifford A, et al. (1998) Breathing life into a manual: flexibility and creativity with
manual-based treatments. Cognitive and Behavioral Practice 5(2): 177–198.
Laugeson E, Frankel F, Gantman A, et al. (2012) Evidence-based social skills training for adolescents with
autism spectrum disorders: the UCLA PEERS program. Journal of Autism and Developmental Disorders
42: 1025–1036.
Locke J, Rotheram-Fuller E and Kasari C (2012) Exploring the impact of being a typical peer model for
included children with an autism spectrum disorder. Journal of Autism and Developmental Disorders 42:
1895–1905.
Lopata C, Thomeer ML, Volker MA, et al. (2008) Effectiveness of a manualized summer social treatment program for high-functioning children with autism spectrum disorders. Journal of Autism and Developmental
Disorders 38: 890–904.
Machalicek W, O'Reilly MF, Beretvas N, et al. (2008) A review of school-based instructional interventions
for students with autism spectrum disorders. Research in Autism Spectrum Disorders 2(3): 395–416.
MacMahon RJ (2004) The Fast Track Project. Invited address given at the meeting of the NIH Working
Group on Methodological Challenges in Autism Treatment Research, Sacramento, CA, 6 May.
McHugh RK and Barlow DH (2012) Dissemination and Implementation of Psychological Interventions. New
York: Oxford University Press.
Mundy P and Gomes A (1998) Individual differences in joint attention skill development in the second year.
Infant Behavior and Development 21: 469–482.


National Autism Center (NAC) (2009) National Standards Project. National Autism Center.
Odom S, Brown W, Frey T, et al. (2003) Evidence-based practices for children with autism: contributions
for single-subject design research. Focus on Autism and Other Developmental Disabilities 18: 166–175.
Odom S, Hume K, Boyd B, et al. (2012) Moving beyond the intensive behavior treatment versus eclectic
dichotomy: evidence-based and individualized programs for learners with ASD. Behavior Modification
36(3): 270–297.
Smith T, Scahill L, Dawson G, et al. (2007) Designing research studies on psychosocial interventions in
autism. Journal of Autism and Developmental Disorders 37(2): 354–366.
Stahmer AC and Aarons G (2009) Attitudes toward adoption of evidence-based practices: a comparison of
autism early intervention providers and children's mental health providers. Psychological Services 6:
223–234.
Strain P and Bovey EH (2011) Randomized, controlled trial of the LEAP model of early intervention for
young children with autism spectrum disorders. Topics in Early Childhood Special Education. Epub
ahead of print 25 May 2011. DOI: 10.1177/0271121411408740.
United States Government Accountability Office (2005). Special education: Children with autism (GAO
Publication No. 05-220). Washington, DC: Author.
Weisz J (2000) Agenda for child and adolescent psychotherapy research: on the need to put science into practice.
Archives of General Psychiatry 57(9): 837–838.
Weisz J, Chorpita B, Palinkas L, et al. (2012) Testing standard and modular designs for psychotherapy treating
depression, anxiety, and conduct problems in youth. Archives of General Psychiatry 69: 274–282.
Wells K and Jones L (2009) Research in community-partnered, participatory research. Journal of the
American Medical Association 302: 320–321.
