Human Services
The Journal of the National Staff Development and Training Association
SPECIAL ISSUE
Organizational Development
In general, submissions should follow the guidelines of the Publication Manual of the American Psychological Association (latest edition). Submissions should pertain to one of the following areas:
• State of the Field
• Learning Activities
• Instrumentation
• Conceptual or Empirical
Learning exercises should include the following 11 areas: 1) introduction to topic (human service area learning points) and brief review of the literature (conceptual underpinning to the learning activity), 2) learning activity title, 3) activity goals/objectives/competencies addressed, 4) group size, 5) time required, 6) required materials, 7) physical setting, 8) procedures/process, 9) cautions (how might the activity be “uncomfortable/threatening” to participants), 10) adaptations, and 11) supportive materials.
Articles must be typewritten (MS Word) and submitted electronically (e-mailed as an attachment) to:
Dale Curry, PhD, LSW, CYC-P
Associate Professor, Human Development and Family Studies
Director, International Institute for Human Service Workforce Research and Development
School of Lifespan Development and Educational Sciences
Nixson Hall, Kent State University
P.O. Box 5190
Kent, Ohio 44242
Email: dcurry@kent.edu
CONTENTS
The National Child and Youth Care Practitioner Certification Exam .............................119
Child and Youth Care Certification Board
Training and Development in Human Services is the journal of the National Staff
Development and Training Association (NSDTA), an affiliate of the American Public Human
Services Association (APHSA). APHSA, founded in 1930, is a nonprofit, bipartisan
organization of individuals and agencies concerned with human services. The association’s
mission is to pursue excellence in health and human services by supporting state and local
agencies, informing policymakers, and working with partners to drive innovative, integrated,
and efficient solutions in policy and practice. NSDTA, founded as an APHSA affiliate in
1985, is an interdisciplinary professional organization composed of training and development
professionals serving diverse populations in a variety of settings across the lifespan. Some of
these disciplines include social work, public administration, counseling, psychology,
recreation, education, child and youth care work, family life education, juvenile justice,
economic assistance, mental health, and other human service fields. Further, within the
training and development function are a variety of roles: administrative support,
communications specialist, evaluator/researcher, human resource planner, instructional media
specialist, instructor/trainer, manager, organizational development specialist, and training
program and curriculum designer.
The mission of the NSDTA is to build professional and organizational capacity in human
services through a national network of membership sharing ideas and resources on
organizational development, staff development, and training. It has a vision of competent and
caring people in effective organizations creatively working together to improve the well-being
of society’s children, adults, and families. NSDTA accomplishes its mission by:
• Promoting a network of contacts to discuss and disseminate best practice methods and
strategies
• Providing a national forum for discussion of staff development and training issues
• Providing leadership in development of local, state, and federal programs and
procedures that enhance the skills of staff and develop standards and evaluation
criteria for training programs nationwide
• Developing public policy recommendations and advocating for staff development and
training issues
• Creating opportunities for continual learning and professional development for itself
as an organization and for its members
The publication is structured in a way that encourages traditional conceptual and empirical journal articles that contribute to the advancement of the field. The journal also provides practical articles that training and development practitioners can apply immediately. It
is typically sectioned into four areas: (1) Conceptual and Empirical, (2) State of the Field, (3)
Learning Activities, and (4) Instrumentation and Methodology. This special issue focuses on
organization development (OD) in human services and contains articles on all but the
Learning Activities area.
The first section emphasizes empirical and conceptual articles that have the potential to
promote scholarly thinking and move the profession forward. The first article in this section
provides an overview of organization development principles and application to human
service settings. This is followed by an abbreviated reprint of the NSDTA competencies
model for the OD practitioner. The last article in this section provides a conceptual
understanding of issues pertaining to evaluation of OD activities.
The second section contains articles that provide benchmarks for the state of the field of
training and development in human services. This issue provides in-depth organizational
development case studies from four statewide human service systems and the Positioning
Public Child Welfare Guidance (PPCWG) Institute.
Overall, this issue contains articles that build upon the NSDTA roles and competencies
model. In particular, this issue emphasizes the broadened role of the human services
professional development and training practitioner, with a focus on organization development.
Several articles illustrate the use of OD principles to improve organizational effectiveness in
human service systems. In particular, examples from the field of the APHSA organizational
effectiveness model are provided. We hope that you will take advantage of the authors’
contributions to the journal and the field.
We welcome comments concerning the structure and content of the journal. Please consider
contacting journal staff with comments or contributions for future issues.
Articles in this section typically encompass a wide range of content areas in human services
training and development. Some of these areas may include issues related to curriculum
development, training delivery strategies, fiscal analysis and budgeting, research and
evaluation, training of trainers and training managers, and ethical issues in training and
development. Articles in this section are expected to either articulate a conceptual approach or
provide empirical support for theory related to human services training and development.
Both qualitative and quantitative evaluation and research are encouraged.
Since the field of training and development in human services is relatively young and still
emerging, further conceptual development and empirical validation of practice is much
needed. Practitioners are strongly encouraged to reflect upon their work and add to the
knowledge base by submitting articles to this journal.
Although millions of dollars are spent annually on human services training and development
activities, programs often are developed and implemented without the use of established
conceptual frameworks or empirical evaluation of program effectiveness. Continued
conceptual development and empirical inquiry in human services training and development is
essential.
The conceptual and empirical articles included in this issue provide a broadened view of
human services professional development and training and evaluation. The first article
provides a brief overview of the conceptual background that underlies many organization
development initiatives. The second article is excerpted from the NSDTA roles and
competencies model and delineates competencies for the organizational development
specialist. The last article engages the reader in a discussion of conceptual issues pertaining to
evaluation of OD activities.
Over the last several decades, the field of human services professional development and
training has moved from a narrow perspective of training to broader concepts of human
performance improvement, human resource development, and, increasingly, the environment
in which training and the application of training interventions occur. Furthermore, leaders
have advocated for a more strategic use of professional training and development activities,
necessitating a broader systems perspective including the identification of key players
affecting individual, team, organizational, and service delivery system outcomes
(Bernotavicz, Curry, Lawler, Leeson, & Wentz, 2010; Curry, Lawler, & Bernotavicz, 2006;
Lawson, Caringi, McCarthy, Briar-Lawson, & Strolin, 2006).
The National Staff Development and Training Association (NSDTA), an affiliate organization
of the American Public Human Services Association (APHSA), and APHSA’s Organizational
Effectiveness department embraced this broadened perspective in several ways. APHSA
developed an organizational effectiveness (OE) consulting practice that has delivered models,
tools, and facilitation services to more than 25 state public human services agencies over the
past seven years. (See OE field project list at http://www.aphsa.org/OE/OE_Services.asp.)
In addition, NSDTA developed and disseminated its roles and competencies model that
delineates competencies emphasizing nine key roles (administrative support, communication
specialist, evaluator/researcher, human resource planner, instructional media specialist,
instructor/trainer, manager, organizational development specialist, and training program and
curriculum designer). (See http://nsdta.aphsa.org/resources.htm#CompetencyGuides.)
One of these important roles pertains to organizational development (OD), the application of
behavioral science knowledge to enhance an organization’s effectiveness and efficiency. OD
focuses on organizational culture, processes, and structures within a total systems perspective
with an emphasis on planned organizational change (Huse, 1978; Jackson, 2006).
Many principles of OD practice are similar to principles of human service work, except that they are applied to work teams, organizations, and other systems (e.g., problem identification, assessment, goal setting, intervention, and evaluation).
NSDTA has supported the work of the APHSA OE department to promote the development
of tools and processes that in turn advance organizational effectiveness in human services
through strategic organizational planning and change/development processes. APHSA OE
team members have regularly presented workshops pertaining to OE at recent NSDTA
professional development institutes. Within this current journal issue, several organizational
case studies provide descriptive examples of OD principles applied to human service
organizations/systems.
This article provides an overview of several key conceptual approaches to OD, while
attempting to connect these organizational frameworks to approaches that are commonly used
in human services with individuals and families.
Conceptual Approaches
The related disciplines of organizational science, OD, and industrial psychology emphasize a
myriad of conceptual frameworks. While lacking a unified theoretical approach, conceptual
models guide the activities of OD professionals, though practitioners typically apply or adapt
a mixture of frameworks (Brown, Pryzwansky, & Schulte, 1987; Edmondson, 1996;
Piotrowski, Vodanovich, & Armstrong, 2001). A description of a few of the most common OD frameworks follows.
Typically, a basic OD process parallels the general problem solving process and begins with
(1) sensing a need for change, based on a problem or opportunity, followed by (2) developing
an organizational consultant-client relationship, (3) assessing the situation, (4) identifying a
desired organizational state/goal, (5) developing and implementing action change plans to
improve organizational effectiveness and (6) monitoring and evaluating the effectiveness of
the action plans. This process is repeated until the organizational goal is attained (Jackson,
2006).
A general problem solving process is commonly used in most human service settings with the
unit of analysis and target of change being the individual client or family. A typical case flow
process follows the general problem solving model. For example, Rycus and Hughes (1998)
describe the casework process in child protective services as including (1) establishing
relationship, (2) conducting assessment, (3) determining goals/objectives, (4) identifying
intervention activities, (5) implementing case plan, and (6) reassessing.
In an OD context the unit of analysis and target of change becomes the team, organization, or
human service system. The importance of providing sufficient attention to each step (e.g.,
assessment phase) and avoiding the tendency to move to strategies/solutions too soon is an
important tenet of this approach. Although the problem solving approach is fairly straightforward, teams and organizations often lack an explicit model for solving problems and/or pursuing potential opportunities. Problem solving in OD is
considered a strategic process (Jackson, 2006).
Systems Theory
Characteristics and principles of systems can be used as metaphors to help understand how
organizations function. Some of these include: isomorphism; purpose/goals; roles;
boundaries; wholes and parts; interrelatedness; input, output; equilibrium, self-regulation/rules; equifinality and multifinality; and entropy, negative entropy, and
differentiation (von Bertalanffy, 1968; Garbarino, 1982; Mattaini, Lowery, & Meyer, 1988;
Melson, 1980; Senge, 1990; Zastrow & Kirst-Ashman, 1987). Below are descriptions of these
characteristics and principles as summarized by Holden and Holden (2000).
Isomorphism. This is the tendency for system principles to remain the same across systems.
For example, human service workers use system principles for understanding families. OD
practitioners apply many of the same principles to teams, organizations, and other performing
systems.
Purpose/goals. The purpose and/or goals among systems may be different. But all systems
have a purpose/function. Organizations often make the purpose formal through mission and
vision statements. Human service workers are also concerned with the functions of a system—
the family. How families fulfill their functions (e.g., economic provision, responsible
reproduction and parenting, supporting family emotional needs) is often the focus of human
service work.
Roles. In order to achieve system goals (above), somewhat predictable or expected patterns of
functioning often occur among the parts of a system. In organizations, patterns of functioning
of employees sometimes translate into formal job descriptions. However, informal roles may
also exist. In the absence of role clarity and commitment, role confusion and role
diminishment may occur. Again, human service workers may see the parallel in families. For
example, roles such as the disciplinarian, breadwinner, house manager, and comforter may
form to achieve family goals. Unfortunately, less desirable roles may also form such as the
family member designated as the person with the problem. Both positive and negative roles
can emerge in teams and organizations as well.
Wholes and parts. Zastrow and Kirst-Ashman (1987) define a system as a set of elements
which form an orderly, interrelated, and functional whole.
Interrelatedness of the parts. As stated above, the system elements influence each other.
Input. Energy and communication flow into the system from other systems/subsystems and out of it to them (output).
Equilibrium. Systems have a tendency to maintain a relatively steady state among the system
elements. Thus, initial system resistance to change can often be expected.
Equifinality and multifinality. These principles refer to the fact that there are many ways to
achieve the same purpose. A change in one part of the system, even if distant from the
perceived problem, may effect positive change in another part of the system. Thus OD
practitioners may attempt to positively influence organizational change in ways that may
seem less directly related to the perceived problem. For example, personality clashes among
employees may be dealt with by clarifying organization or team goals, roles, and rules rather
than intervening directly with the individuals perceived to have the problem. Similarly,
human service workers may focus on strengthening the marital relationship as a strategy to
improve a child’s misbehavior.
Another implication of this principle is that impacting one part of the system may have
several intended as well as unintended effects on the whole. In many ways, one cannot really
understand a system until one tries to change it (Lewin, 1951). In both OD and in family
engagement efforts this leads to a cycle of engagement and change that is developmental and
often relies on experiential learning.
Entropy, negative entropy, and differentiation. All systems have a natural tendency to
move toward disorganization, deterioration, and death. At the same time, there are countering
forces toward growth and a more complex existence.
Drawing a parallel again to human service work, we see the same individualistic versus systems thinking influencing practice and interaction with the community. It is common for
many in society to view success and failure as an individual’s responsibility, failing to see the
influence of family, community, broader society, and the developmental process itself.
Statements such as “If she really wanted to leave that violent husband, she would have a long
time ago” are common among lay persons and even among some human service
professionals.
Field Theory
According to Lewin (1951, p. 169), “There is nothing so practical as a good theory.” Field
theory provides a simple, easy-to-understand approach to change, involving the interplay
between two opposing sets of forces. A field is a representation of the complete environment
of the individual and significant others—one’s life space. As individuals move through life
spaces (e.g., juvenile court, client homes, office), driving and restraining forces are influenced
by one’s perception of the current psychological field (e.g., support for organizational change)
and influenced by individual needs (e.g., relevance of training to work needs) (Lewin, 1951;
Smith, 2001).
Individual, team, and organizational change occurs when equilibrium is disrupted. An existing
field of forces is changed by increasing driving and/or decreasing restraining forces for
change. The number and strength of driving and restraining forces will determine if change
occurs. If the total strength of the driving forces is greater than that of the restraining forces, change is likely to occur. If the total strength of the restraining forces is greater than or equal to that of the driving forces, new individual, team, and/or organizational behavior/change will not occur or will not be sustained (Broad & Newstrom, 1992; Curry et al., 1991, 1994; Curry, 1997; Curry & Caplan, 1996; Lewin, 1951). An example of an
application of field theory pertaining to transfer of training in human services is discussed by
Curry, Lawler, Donnenwirth, & Bergeron (in press) in this journal issue.
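For readers who find it helpful to see the force-balance logic worked through, a minimal sketch follows. The force labels and the 1-to-5 strength ratings are hypothetical illustrations, not data from the studies cited above.

```python
# Minimal force-field analysis sketch (after Lewin, 1951).
# Force labels and strength ratings (1 = weak, 5 = strong) are hypothetical.
driving_forces = {
    "leadership support for the change": 4,
    "relevance of training to work needs": 3,
}
restraining_forces = {
    "staff workload pressures": 5,
    "comfort with existing routines": 3,
}

driving_total = sum(driving_forces.values())          # 7
restraining_total = sum(restraining_forces.values())  # 8

# Change is likely only when driving forces outweigh restraining forces;
# otherwise new behavior will not occur or will not be sustained.
if driving_total > restraining_total:
    print("Change is likely to occur.")
else:
    print("Change is unlikely to occur or to be sustained.")
```

In this hypothetical field, reducing a single restraining force (e.g., easing workload pressures from 5 to 2) tips the balance toward change, consistent with the point above that change can be pursued by decreasing restraining forces as well as by increasing driving forces.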
The behavioral change process model as described by Kurt Lewin (1958) and Edgar Schein
(1987) in its simplest form involves three basic stages (unfreezing, change, refreezing). This
change model is similar to the levels of competence model that is familiar to human services
training and development professionals (Caplan & Curry, 2001; Curry & Rybicki, 1995). In
order to unfreeze old habits of behavior, one must become aware/conscious of the need for
change (a transition from unconscious competence to conscious incompetence). The creation of
psychological safety and reassurance is necessary to create the climate to unfreeze. Once
unfrozen, the new awareness may create sufficient motivation for change (learning readiness
according to the levels of competence model).
In the second stage (change), individuals develop new behaviors (the stage of conscious
competence in the levels of competence model). This is considered a critical stage where a
significant amount of both individual and organizational support for change is necessary. The
new behaviors must be congruent with individual and organizational values and norms
(Jackson, 2006; Schein, 1980).
In the third stage (refreezing), change is integrated at the unconscious level
(unconscious competence) and feels as natural as the previous behavior. In this stage, new
individual behaviors and organizational norms become more routinized, a part of everyday
work life. Although the concept of refreezing may sound rigid and unchangeable, if this
change cycle is ongoing, an internalization of the developmental process itself becomes the
norm, rather than a specific organizational change.
These same change principles can be seen in the individuals and families human services
workers serve. For example, a parent embracing, learning, and proficiently demonstrating new
parenting behaviors can be observed going through the unfreeze, change, and refreeze stages.
Helping a parent to recognize the need to discipline differently, for instance, often requires that the parent feel psychologically safe enough to explore alternative strategies.
Similarly, when implementing new behaviors, a significant amount of support may be
necessary to maintain or refine and not relapse to previous behaviors.
Organizational Learning
Learning can occur at the individual, team, and organization/system level. Organizational learning is the basis for organizational change and is considered essential for organizational survival. Organizational learning occurs when the organization develops systematic processes to acquire, use, and communicate knowledge (Jackson, 2006). Senge (1990) describes five essential components (disciplines) of organizational learning (personal mastery, mental models, shared vision, team learning, systems thinking). Creating a culture and climate for individual, team, and organizational learning is critical for effective implementation of learning (transfer climate).
Action Research
Planned change, described as a cyclical process by Lewin (1948) and known as action
research, is fundamental to OD. Action research is a systematic method of data collection and
feedback, action, and evaluation based on further data collection—the process of problem
solving in action (Jackson, 2006; Lewin, 1948). According to McTaggart (1996), action research is not a method or a procedure for research but a series of commitments to observe, solve problems, and take advantage of organizational opportunities to develop through practicing a series of principles for conducting social inquiry. Action research involves both learning and
doing. According to Lewin, both the learning and doing are part of the same process.
Organizational Social Context
A basic premise of Glisson's organizational social context model is that the success of an organization is dependent upon the
degree of congruence between the organization’s core technology and the organization’s
social context (Glisson, 2007). In human service settings, examples of core technology
include practice/treatment models, assessment tools, case/care planning approaches, and case
review processes. The social context includes factors such as norms, expectations, and
informal procedures (the way things are typically done in organizations).
Recent research in human service settings indicates that organizational culture and climate
influence key service elements including employee morale, innovativeness, turnover, and the
quality of service delivery, ultimately affecting service outcomes for individuals and families
(Glisson, 2002, 2007; Glisson & Durick, 1988; Glisson & Hemmelgarn, 1998; Glisson &
James, 2002; Jaskyte, 2005).
Implementing evidence-based practice to promote positive service outcomes involves planned intervention in a human service agency's core technology within the organizational
social context.
In 2004 APHSA created an OE department to shift the focus of its leadership development
efforts from classroom-based curricular training to organizational development consulting and
facilitation. The APHSA OE approach has incorporated many of the concepts discussed
above. APHSA OE initiatives in human service organizations and systems attempt to integrate
principles of OD and implementation science to promote organizational effectiveness and
better service outcomes for individuals and families. These initiatives have resulted in a
comprehensive practice model and set of supporting models, tools, templates, and narrative
guidance that is, itself, being subjected to a learning-by-doing cycle of improvement and
further development. Case examples from several state human service systems (Pennsylvania,
Minnesota, Arizona, and Texas) are described in this journal issue.
Summary
The role of the human services professional development and training practitioner has
expanded from development of individuals to include assessment, intervention, and
evaluation at the team and organizational/system level. A well established knowledge base
that transcends fields and disciplines such as organizational and industrial psychology and
management science is being utilized in human services settings.
As an example, the establishment of the child welfare implementation centers by the U.S.
Department of Health and Human Services, Children’s Bureau provides support for planned
system-wide interventions in various regions. For instance, the Midwest Child Welfare
Implementation Center is helping to build Ohio’s capacity to implement evidence-informed
and promising child welfare interventions. Some of the activities supported by the center are
formal assessment of organizational culture and climate; development and installation of the technical assistance model; a rule review; implementation of
organizational structural and functional changes to facilitate the new technical assistance
model; and ongoing fidelity monitoring.
Another example is the learning leaders/learning community initiative by the Atlantic Coast
Child Welfare Implementation Center, which provides guidance on topical areas of interest
and explores new and creative approaches for peer learning to advance child welfare and
organizational implementation of best practices. Examples of leadership at the national level
for organizational development in human services settings are the APHSA OE initiatives and
the activities of the National Resource Center for Organizational Improvement at the
University of Southern Maine.
Human services professional development and training practitioners must recognize the
important role that has emerged to promote organizational effectiveness at the team,
organization, and broader system level. New challenges exist for understanding how to use the existing OD knowledge base, as well as how to add to it with examples from human services.
References
Bernotavicz, F., Curry, D., Lawler, M., Leeson, K., & Wentz, R. (2010). A new key to
success: Guidelines for effective staff development and training programs in human
services agencies. Washington, D.C.: National Staff Development and Training
Association and American Public Human Services Association.
Broad, M.L. & Newstrom, J.W. (1992). Transfer of training: Action-packed strategies to
ensure high payoff from training investments. Reading, MA: Addison-Wesley.
Brown, D., Pryzwansky, W.B., & Schulte, A.C. (1987). Psychological consultation: Introduction to theory and practice. Boston: Allyn and Bacon.
Caplan, P. & Curry, D. (2001). The wonder years: Professional development in child welfare.
Proceedings of the Trieschman/Child Welfare League of America Workforce Crisis
Conference. Retrieved from http://www.cwla.org/programs/trieschman/2001fbwrecap.
htm.
Curry, D.H. (1997). Factors affecting the perceived transfer of learning of child protection
social workers. Unpublished doctoral dissertation, Kent State University, Kent, Ohio.
Curry, D.H., Caplan, P., & Knuppel, J. (1991). Transfer of training and adult learning. Paper presented at the Excellence in Training International Conference, Cornell University.
Curry, D.H., Caplan, P., & Knuppel, J. (1994). Transfer of training and adult learning
(TOTAL). Journal of Continuing Social Work Education, 6, 8-14.
Curry, D. & Dick, G. (2000). The battered woman’s field. Training and Development in
Human Services, 1, 40-51.
Curry, D., Lawler, M., & Bernotavicz, F. (2006). The field of training and development in
human services: An emerging profession? Training and Development in Human
Services, 3, 5-11.
Curry, D., McCarragher, T., & Dellmann-Jenkins, M. (2005). Training, transfer and turnover:
The relationship among transfer of learning factors and staff retention in child welfare.
Children and Youth Services Review, 27, 931-948.
Curry, D. & Rybicki Sr. M. (1995). Assessment of child and youth care worker trainer
competencies. Journal of Child and Youth Care Work, 10, 61-73.
Edmondson, A.C. (1996). Three faces of Eden: The persistence of competing theories and
multiple diagnoses in organizational intervention research. Human Relations, 49, 571-
596.
Garbarino, J. (1982). Children and families in the social environment. New York: Aldine.
Glisson, C. (2002). The organizational context of children’s mental health services. Clinical
Child and Family Psychology Review, 5, 233-253.
Glisson, C. (2007). Assessing and changing organizational culture and climate for effective
services. Research on Social Work Practice, 17(6), 736-747.
Glisson, C., & Durick, M. (1988). Predictors of job satisfaction and organizational
commitment in human service organizations. Administrative Science Quarterly, 33, 61-
81.
Glisson, C., & Hemmelgarn, A.L. (1998). The effects of organizational climate and
interorganizational coordination on the quality and outcomes of children’s service
systems. Child Abuse & Neglect, 22, 401-421.
Glisson, C., & James, L.R. (2002). The cross-level effects of culture and climate in human
service teams. Journal of Organizational Behavior, 23, 767-794.
Holden, J.C., & Holden, M.J. (2000). The working system. Training and Development in
Human Services, 1(1), 34-38.
Holton, E.F. III, Bates, R.A., & Ruona, W.E.A. (2000). Development of a generalized
learning transfer system inventory. Human Resource Development Quarterly, 11, 333-
360.
Huse, E.F. (1978). Organization development. Personnel and Guidance Journal, March, 403-
406.
Kolb, D.A., Rubin, I.M., & McIntyre, J.M. (1974). Organizational psychology: A book of readings (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Lawson, H., Caringi, J., McCarthy, M., Briar-Lawson, K., & Strolin, J. (2006). A complex
partnership to optimize and stabilize the public child welfare workforce. Professional
Development: The International Journal of Continuing Social Work Education, 9(2-3),
122-139.
Lewin, K. (1948). Resolving social conflicts: Selected papers on group dynamics (G.W. Lewin, Ed.). New York: Harper & Row.
Lewin, K. (1951). Field theory in social science. New York: Harper and Row.
Lewin, K. (1958). Group decision and social change. In E.E. Maccoby, T.M. Newcomb, & E.L. Hartley (Eds.), Readings in social psychology. New York: Holt.
Mattaini, M.A., Lowery, C.T., & Meyer, C.H. (1988). Foundation of social work practice.
Washington D.C.: NASW Press.
Rycus, J.S., & Hughes, R.C., (1998). Field guide to child welfare. Washington D.C.: CWLA
Press.
Schein, E.H. (1980). Organizational psychology, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall.
Schein, E.H. (1987). Process consultation: Its role in organization development. Reading,
MA: Addison-Wesley.
Smith, M. K. (2001). Kurt Lewin, groups, experiential learning and action research. The
Encyclopedia of Informal Education. Retrieved from http://www.infed.org/thinkers/et-
lewin.htm.
Zastrow, C., & Kirst-Ashman, K.K. (1987). Understanding human behavior and the social
environment. Chicago: Nelson-Hall.
As part of its responsibility to provide resources to the human services staff development and
training field, the NSDTA Standards Committee has developed a series of guidebooks that
define the roles and competencies needed to staff a training program. In 1997, based on a
review of the research and discussion with leaders in the field, the committee identified nine
major roles: Administrative Support, Communications Specialist, Evaluator/Researcher,
Human Resource Planner, Instructional Media Specialist, Instructor/Trainer, Manager,
Organizational Development Specialist, and lastly, Training Program and Curriculum
Designer. These roles are not synonymous with jobs or people and, depending on the size of
the organization, individuals may have only one or multiple roles.
As well as the competencies specific to each role, each guidebook includes a description of the
nine roles, the outputs, definitions of competencies, and an assessment tool. Training programs
can use these guidebooks in a number of ways including human resource planning, developing
job descriptions, recruitment and screening, performance management, and developing
curriculum to train on the roles. Copies of the guidebooks are available on the NSDTA
website.
The text that follows is excerpted from the 2002 publication Human Services Staff
Development and Training Roles and Competencies: Organizational Development Specialist.
While some of the competencies may be outdated (e.g., the minimal expectations in 26.00
Basic Computer Skills), the majority have stood the test of time and continue to reflect the
competencies needed for this role in promoting organizational effectiveness.
_____________________________
The role of the organizational development (OD) specialist in human services training and development is a relatively new one and is evolving to meet the emerging needs of organizations. The field began with a focus on classroom learning, and in recent years increased emphasis has been placed on the transfer of learning to on-the-job performance. Gradually, increased attention has been paid to identifying organizational strengths and facilitating processes whereby employees together create organizational structures and work processes that maximize personal productivity and organizational effectiveness.
This model was developed by a group of experienced OD specialists who brought both their
theoretical and practical knowledge and experience to bear in identifying the competencies
needed in this role. In doing so, they built upon the existing competency models for the
Instructor role and the manager role and included data from other sources as shown in the
references. The list of competencies illustrates the many areas of expertise and knowledge
required of OD specialists. It is anticipated that the value placed on the various competencies
will differ from organization to organization and reflect an organization’s needs and purpose
at a particular time.
Administration
1. Organizational Ability
2. Human Service Policy and Framework
3. Training Administration
Communication
4. Facilitation Skills
5. Oral Communication
6. Interpersonal Communication
7. Nonverbal Communication
8. Cultural Sensitivity
Conceptual Knowledge/Skills
9. Problem Analysis
10. Judgment
11. Conceptual Thinking
12. Systems Thinking
Information Management
26. Basic Computer Skills
27. Management Information Systems
28. Accessing Information
Instructional Management
29. Training Systems
30. Assessment and Transfer
Learning Theory
31. Learning and Human Development
32. Learning Climate
Person/Organization Interface
33. Impact and Influence
34. Approaches
35. Initiative
36. Organizational Development
37. Effective and Efficient Work Process
Self-Management Skills
38. Self-Responsibility
39. Self-Concept
40. Self-Control
41. Flexibility
42. Job Commitment
43. Professional Standards/Ethics
Self-Management Skills
38.00 Self-Responsibility
Ability to engage in ongoing learning to improve professional capabilities.
38.01 Reflective Practice: Uses reflective practice in a regular and systematic way to
assess the effects of own choices and actions on others; recognizes espoused theories
versus theories-in-use; uses own performance to reframe issues based on feedback.
38.02 Self-development: Engages in continued efforts to clarify personal values and
to carry out plans for professional development to meet personal and agency needs.
38.03 Knowledge of Field: Stays up-to-date on policy and best practice.
39.00 Self-Concept
Ability to believe in own capabilities and judgment.
39.01 Self-Awareness: Identifies own personal values, needs, interests, style, and
competencies and their effects on others; recognizes when personal feelings have been
aroused; works within limits of capabilities; manages personal defensiveness.
39.02 Pride: Takes pride in own expertise and in ability to handle situations.
39.03 Feedback: Actively seeks and accepts feedback for improvement without loss
of self-esteem and without responding defensively.
39.04 Assertiveness: Advocates for professional standards, and for others and their
needs.
40.00 Self-Control
Ability to maintain emotional equilibrium and optimism.
40.01 Self-discipline: Manages biases; performs effectively in the midst of chaos and
in an atmosphere of ambiguity. Maintains self-control in high stress situations.
40.02 Checks Behavior: Inhibits impulses to do or say inappropriate things.
40.03 Self-Monitors: Monitors own personal values and biases so that they do not
undermine objectivity and professionalism.
40.04 Patience: Shows patience and tenacity in working for desired results.
41.00 Flexibility
Ability to respond to challenge and change.
41.01 Stress Reduction: Manages own well-being; finds ways, such as humor,
to reduce or manage stress.
41.02 Coping Skills: Perseveres in the face of disappointment, hostility, or adverse
conditions; resists dwelling on disappointments; motivates self to make the best of
things.
41.03 Openness: Is open to new information and to changing own opinions; suspends
own judgment and helps others to learn to suspend their judgment; makes own mental
models explicit and doesn’t impose own mental models on others.
41.04 Flexibility: Is able to shift gears and redirect activities.
42.00 Job Commitment
Ability to demonstrate commitment to the role and responsibilities of an organizational
development specialist.
References
Boyatzis, R.E. (1982). The competent manager. New York, NY: John Wiley & Sons.
Hughes, R.C., & Rycus, J.S. (1989). Target competent staff: Competency-based inservice training for child welfare. Washington, D.C.: Child Welfare League of America.
Klemp, G.L. (1981). Job competence assessment: Defining the attributes of the top performer.
Alexandria, VA: American Society for Training and Development.
Kinney, T., Coke, K., & Fox, R. (1982). Public child welfare staff development: A role and
competency framework for curriculum development and instruction. Albany, NY:
Nelson A. Rockefeller College of Public Affairs and Policy, State University of New
York at Albany.
Lawson, T. E. (1989). The competency initiative: Standards of excellence for human resource
executives. Alexandria, VA: Society for Human Resource Management.
McLagan, P. (1989). Models for HRD practice. Alexandria, VA: American Society for
Training and Development.
Powers, B. (1992). Instructor excellence: Mastering the delivery of training. San Francisco:
Jossey-Bass.
Rothwell, W.J., Sanders, E.S., & Soper, J.G. (1999). ASTD models for workplace learning
and performance: Roles, competencies, and outputs. Alexandria, VA: American Society
for Training and Development.
Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New
York: Doubleday.
Spencer, L., & Spencer, S. (1993). Competence at work: Models for superior performance.
New York: John Wiley & Sons, Inc.
Varney, G., Worley, C., Darrow, A., Neubert, M., & Cady, S. (1999). Guidelines for entry
level competencies to organization development and change (OD&C). Available online
at http://www.aom.pace.edu/odc/report.html.
_____________________________
Members of the NSDTA Standards Committee who were involved in development of the OD specialist
competency model and their affiliations at the time:
Chairperson
Freda Bernotavicz, Institute for Public Sector Innovation, Muskie School of Public Service, University of
Southern Maine
Committee Members
Kathy Jones Kelley, Pennsylvania Child Welfare Competency-Based Training and Certification Program,
University of Pittsburgh
Michael J. Lawler, Center for Human Services, University of California, Davis
Gerry Mayhew, Bureau of Partner Services, Wisconsin Department of Workforce Development
James McGroarty, Division of Benefit Programs, Virginia Department of Social Services
David Wegenast, Buffalo State College
Rose Marie Wentz, Training for Change
Introduction
Human services agencies are becoming interested in organizational effectiveness (OE) for a
variety of reasons. As agencies move to implement new evidence-based practice models,
there is increasing interest in organizational interventions that align the day-to-day business of
the agency and support new ways of serving children and families. Agencies also are striving
to meet federal Child and Family Services Review (CFSR) standards related to child safety,
permanence, and well-being, and simply to do business more efficiently and effectively in
tough economic times.
Organizational systems thinking has also begun to influence key functions within agencies. In
recent years there has been increasing interest in moving from traditional training systems to
broad-based staff development systems aligned with—and supporting—agency priorities.
Such efforts have expanded the traditional scope of staff development to include (a) more
consultative and consumer-driven individual and group training and coaching, (b) facilitated
technical assistance provided around specific agency or individual needs, and (c) more
alignment with other functions such as human resources and quality assurance (Kanak, Baker, Herz, & Maciolek, 2008; National Staff Development and Training Association, 2010).
Evaluation can play a critical role in supporting organizational effectiveness through such
activities as identifying areas of need, monitoring the fidelity of implementation, providing
feedback for continuous quality improvement, and offering evidence of the effects of
organizational change on service provision and outcomes for children and families. However,
evaluation in the complex and ever evolving environment of a human services agency poses a
host of challenges, particularly when the goal is to attribute an outcome to a specific
intervention.
Much has been written on how to conduct program evaluations and on the role of evaluation
in implementation (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). The goal of the
current article is not to recapitulate that work, but to offer some reflections on how a set of well-established guiding principles for evaluation might look in the context of evaluating OE interventions.
The fact that OE interventions operate within complex agency systems and attempt to make
sustained, systemic changes poses particular challenges for evaluation. These relate to the
broadly defined, fluid, and often diffuse nature of OE interventions, attribution of impacts on
outcomes to the intervention, and accounting for the complexities of the organizational
environment in design and analytic strategies.
OE interventions take place within complex systems and are themselves dynamic. Over the
course of implementing an OE intervention a typical agency environment may be
characterized by changes in staff and/or leadership, fiscal cutbacks, implementation of
multiple initiatives, and shifting policies and priorities. Agencies and the structural units
within them also are characterized by a distinct organizational climate and culture, and
differences in such factors as readiness and openness to change, patterns of communication,
leadership, and resources. These differences have been shown to affect the success of program
implementation under certain circumstances (Glisson & Hemmelgarn, 1998; Fixsen et al.,
2005) and must be considered in evaluation designs. The dynamic and fluid context within
which interventions are implemented and evaluated is particularly challenging in the case of
OE interventions, which are not time-limited initiatives but seek to make lasting changes in
how an agency does business.
Organizations are systems and OE interventions are also systemic, systematic, and dynamic
(APHSA, 2010). Although they differ somewhat in specifics, models advanced for
understanding organizational development and change emphasize the multi-level nature of the
systems within which interventions are implemented and the dynamic nature of both the
larger system and the intervention itself. For example, Roth, Panzano, Crane-Ross, Massatti, Carstens, Seffrin, and Chaney-Jones (2005) outline a model for research and evaluation of
organizational change that considers factors at five levels: environmental (e.g., professional
norms); inter-organizational (e.g., the communication between an agency and a change
agent); organizational (e.g., organizational culture, hiring practices, workload); project (e.g.,
dedicated resources, commitment, and access to technical assistance); and innovation (e.g.
scientific and/or experiential evidence). Fixsen et al. (2005) see what they call “core
implementation components” (e.g., training) as central to changing behavior, and place them
within a context of organizational components that enable and support them (e.g.,
administration, selection, and program evaluation) and “influence factors” (e.g., the social, economic, and political context).
One implication of a systems perspective for OE evaluation is that it must take into account
that an intervention typically exists within a network of part-whole relationships. For
example, implementation of a practice model for child welfare may be nested within a human
services agency responsible for multiple programs and needs to be put in place across
different units with differing responsibilities, such as emergency response, in-home services,
or adoptions. These units may have differences in workloads, supervisory buy-in and
expectations, or culture and climate that affect the implementation and outcomes of the
intervention differently. Similarly, work may be done at the level of a core implementation
component (Fixsen et al., 2005) or strategic support function (APHSA, 2010) such as training,
but the success of the intervention may be influenced by factors at multiple levels, such as
communication and trust among units and agencies. Regardless of conceptual model,
evaluators must not only work to identify and account for organizational barriers and
facilitators within the immediate context of an intervention, but also consider the potential
influences of variables at multiple levels of the system. Evaluators must be aware of when it is
necessary to measure, and attempt to control for, factors at other levels or within other
functions of the organization, and to bear in mind that the appropriate unit of analysis may be
the individual, the unit, and/or the agency depending on the specific questions being asked.
A second major implication of a systems perspective is that OE interventions are not static, but
progress through stages of development from initiation to dissemination (Cornell Office for
Research on Evaluation, 2009; Fixsen et al., 2005; Roth et al., 2005). During initial
implementation, interventions tend to be changing rapidly. Procedures, activities, and
protocols are not firm and can be expected to undergo revisions, refinements, and adaptations.
The goal of OE interventions is not to put in place another initiative, but to change the way an
agency does business (APHSA, 2010). Similarly, evaluation should be thought of as an
ongoing part of the business of intervention and not as an add-on done after the program has
been implemented. Done throughout, evaluation can offer valuable insights in shaping the
intervention. In turn, the lessons learned from implementation can inform future evaluation
cycles and refinements in measurement tools and methods. Implementing and evaluating an
intervention share many of the same processes; for example, a need for broad stakeholder
input, and a shared understanding of a theory of change. As activities proceed in these areas in parallel and over time, insights from one can inform and improve the other.
Since OE interventions take place within a broad organizational context of multiple, nested,
and somewhat fluid components and relationships, it is critical to an effective evaluation to
define the boundaries of the intervention being evaluated. The situation is complicated further
by the fact that the development of organizational capacity is often not viewed as a goal in
and of itself but as the means to accomplishing other organizational goals (Mackay et al., 2002). For example, consider an intervention to restructure training to include hiring and
training a cadre of mentors. These mentors would be located in county agencies and would
work with supervisors on developing assessment and coaching skills to address specific
performance issues or enhance transfer of learning from classroom training. The boundaries
of the intervention could be drawn narrowly to include just the establishment of the mentor
program and the activities of the mentors, or they could be drawn more broadly to also
include the activities of the supervisors or even the staff with whom the supervisors work.
There is no right or wrong way to draw the boundaries of an intervention, but where those
boundaries are drawn has clear implications for the scope of the evaluation (Urban &
Trochim, 2009).
It is also critical that the understanding of the boundaries of an intervention is shared among
agency leadership, the evaluator, and other stakeholders. According to Urban and Trochim
(2009) defining program boundaries helps participants clarify both stated and unstated
assumptions about the intervention and arrive at a precise description of what the intervention
is and what it is not. It also helps all concerned to arrive at a common language for
representing the intervention, and a common understanding of its scope and activities.
Boundary analysis makes it more likely that all critical components of the intervention are
considered when determining the scope of the evaluation. If an element is not included in the
design for the current evaluation cycle, this process helps to ensure that it is excluded based
on shared decision making, rather than simply overlooked. A shared understanding of
program boundaries may allow the evaluator and the agency to avoid the situation in which an
important evaluation question cannot be answered because it was not considered when
developing the evaluation design.
A theory of change, often operationalized by a logic model, has become a standard part of
program evaluations in a wide range of settings (W.K. Kellogg Foundation, 2004; Mackay et al., 2002). These models clarify how a program is expected to work and provide a basis for designing the evaluation.
Given the broad range of interventions that might be undertaken as part of organizational
development, and the varied effects that could be expected based on those interventions, it is
especially important to design evaluations based on a clear map of how OE activities are
expected to produce organizational change. A pathway model, as articulated by Urban and Trochim (2009), captures the complexity inherent in a systems change effort. It also allows
for a process of what these authors refer to as “hanging the literature” (p. 548) on the
relationships hypothesized between OE activities and outcomes. To the extent that literature
can be found that supports a hypothesized linkage between an intervention and an outcome,
the case is strengthened for the effectiveness of the local intervention, which is especially
important when following a local intervention to outcomes is not feasible. The specific
linkages between program activities and expected outputs and outcomes also help to avoid
potential confusion over the correct unit of analysis. Mackay et al. (2002) point out that the
issue of whether evaluation data should be analyzed at the level of the individual, the unit, or
the agency can be problematic in many OE evaluations and that changes measured solely at
the individual level do not necessarily mean that sustainable organizational change has
occurred. For example, questions of interest in an OE evaluation in human services might focus on measuring increases in communication between units within the agency, or increases in
interactions with community partners. A well developed visual-causal diagram can help to
make clear how the outputs of an OE intervention are expected to impact both individuals and
the organization, and how and where individual change is expected to impact organizational
functioning.
Evaluators frequently make use of multiple measures and data sources in answering a given
evaluation question (Chinman et al., 2004). The use of multiple measures and methods allows
for triangulating and cross checking findings, and helps in identifying important areas of
convergence and divergence based on different methods as well as different perspectives of
respondents. Multiple measures and data collection methods also allow the strengths of one to
offset the weaknesses of the other (Creswell & Clark, 2007).
Triangulation of data from multiple sources has been suggested as a particularly important
element of evaluations of organizational development (Horton, Mackay, Andersen, & Dupleich, 2000). Because of the nesting of the intervention within a larger systems
environment, OE evaluations are apt to touch a variety of individuals in different ways based
on their relationship to the project. Those most closely involved, for example as part of a
sponsor team or as champions for the intervention, may have more detailed insights to offer
than those more peripherally involved. Mixed methods designs (Creswell & Clark, 2007) that
involve the integration of qualitative and quantitative measures seem well suited to an
evaluation that simultaneously needs to collect rich and detailed narratives from a small
number of participants and also more limited and standardized information from larger
groups. Since OE interventions aim to change how an agency does business in some way, a
variety of archival, or existing data sources (such as outcome information from SACWIS
systems, work products, meeting minutes, policy manual updates) also are likely to be
important in measuring and documenting change.
There is a growing awareness in human services that a key factor in successfully instituting
and sustaining organizational and practice change is attention to implementation (Fixsen et al., 2005; Barth, 2009). The discussion is often framed around the fidelity of implementation
of an evidence-based practice (Fixsen et al., 2005). However, the ability to understand and
describe what has been implemented is also central to assessing the quality of program
delivery, to providing feedback to shape program improvements (Chinman et al., 2004), and
to interpreting findings related to outcomes.
Although difficult to measure in the complex environment of a human services agency, the
ultimate goal of most interventions in the field is to show a positive impact on outcomes,
whether defined in terms of improved agency function or improvements in the lives of clients.
Use of traditional experimental designs that allow clear attribution of a change in outcomes to
the effects of the intervention is seldom possible in human services settings. In fact, some
(Mayne, 2008; Mackay et al., 2002; Horton, Mackay, Andersen, & Dupleich, 2000) have
argued that the traditional attribution paradigm is not appropriate for evaluations of
organizational development efforts. Instead they argue for a model of assessing the
contribution of the intervention to outcomes, recognizing that multiple factors are likely to
impact the same outcomes and that it is not possible in a typical agency environment to isolate
one particular program.
A final consideration is that the contribution story will only be as good as the evidence that is
amassed supporting the links in the chain. Although a contribution approach reframes the
attribution problem somewhat, the need remains for a rigorous approach to gathering evidence
to support the intervention’s role in outcome change. There are many quasi-experimental
designs that could be used in such evaluations, and discussion of them all is beyond the scope
of this article. However, I would like to highlight three possible approaches for consideration
in future OE evaluations.
One approach that may offer promise when OE interventions are expected to produce
relatively immediate impact is a group of designs collectively known as single system designs
(Nugent, 2010). In these approaches the target of change in an organization is measured
repeatedly under differing conditions (for example, a baseline and an after-action phase). If a
change in the outcome of interest is observed after the intervention takes place, evidence is
provided for the effectiveness of the intervention. If the intervention is introduced to different
groups at different points in time, and an effect on outcomes is observed after each of several
implementations, the case for the intervention’s impact on outcomes is strengthened. These
designs are often used with a relatively small and manageable number of data points and may
lend themselves to examining changes in CFSR outcomes where data is collected routinely
and is available for an extended time period.
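To make the comparison logic concrete, here is a minimal sketch in Python, using entirely hypothetical numbers, of a simple two-phase (baseline/intervention) comparison of a repeatedly measured outcome. The permutation test is included only to convey the logic; because repeated measures within a phase are typically autocorrelated, a real single system analysis would need the fuller methods Nugent (2010) describes.

```python
import numpy as np

# Hypothetical monthly values of an agency outcome indicator observed
# before (phase A) and after (phase B) an OE intervention.
baseline = np.array([61.2, 60.8, 62.0, 61.5, 60.9, 61.7])
post = np.array([63.9, 64.5, 65.1, 64.2, 65.0, 64.8])

observed_diff = post.mean() - baseline.mean()

# Permutation test: if the phase labels were arbitrary, how often would a
# random relabeling of the twelve observations produce a difference at
# least as large as the one observed?
rng = np.random.default_rng(0)
combined = np.concatenate([baseline, post])
n_base = len(baseline)
count = 0
n_perms = 10_000
for _ in range(n_perms):
    shuffled = rng.permutation(combined)
    diff = shuffled[n_base:].mean() - shuffled[:n_base].mean()
    if diff >= observed_diff:
        count += 1

print(f"Observed phase difference: {observed_diff:.2f}")
print(f"Approximate one-sided p-value: {count / n_perms:.4f}")
```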
Another design option that has been recommended (Barth, 2009) for improving the rigor of
evaluation in human services is propensity score analysis. This technique is used to reduce the
effects of selection bias and to create a comparison group that is as similar as possible to
program participants. Regression equations are used to predict which individuals from a group
that does not receive the intervention would be likely to participate if offered a similar
opportunity. Propensity score analysis has the potential to help control for important
differences in contextual variables (such as individual attitudes and motivation, workload, or
perceptions of supervisory support) in cases where a design is used that compares outcomes in
sites where OE interventions are occurring to outcomes in sites where they are not.
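As a rough illustration of the mechanics, and assuming hypothetical data and variable names throughout, the Python sketch below estimates propensity scores with a logistic regression and pairs each intervention-site case with the comparison case whose score is closest. A real application would also check covariate balance and overlap before comparing outcomes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)

# Hypothetical contextual covariates for 200 workers (e.g., standardized
# measures of attitudes, workload, and perceived supervisory support),
# plus a flag for whether each worker's site received the OE intervention.
X = rng.normal(size=(200, 3))
treated = (X @ np.array([0.8, -0.5, 0.4]) + rng.normal(size=200)) > 0

# Step 1: model the probability of being in an intervention site.
ps_model = LogisticRegression().fit(X, treated)
propensity = ps_model.predict_proba(X)[:, 1]

# Step 2: match each treated case to the untreated case with the nearest
# propensity score to form the comparison group.
treated_idx = np.where(treated)[0]
control_idx = np.where(~treated)[0]
nn = NearestNeighbors(n_neighbors=1)
nn.fit(propensity[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(propensity[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]

print(f"{len(treated_idx)} treated cases matched to "
      f"{len(np.unique(matched_controls))} distinct comparison cases")
```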
Finally, because OE evaluation exists within a nested set of relationships (individuals within
units, units within programs, and programs within agencies), it lends itself to approaches to
data analysis such as hierarchical linear modeling (HLM). This type of analysis considers effects at
multiple levels, and thus is ideal for the type of questions OE evaluation deals with. However,
it requires large amounts of data and may not be practical except in large scale cross site
evaluations.
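The sketch below, again with hypothetical data and column names, shows how such a two-level structure might be fit in Python with the statsmodels library: a random intercept for each unit absorbs unit-level variation while the fixed effect estimates the intervention's association with a worker-level outcome.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Hypothetical data: 30 units with 10 workers each; half the units
# received the OE intervention. A unit-level random effect creates the
# nesting that an ordinary regression would ignore.
units = np.repeat(np.arange(30), 10)
intervention = (units < 15).astype(int)
unit_effect = rng.normal(0.0, 1.0, size=30)[units]
outcome = 50 + 2.0 * intervention + unit_effect + rng.normal(0.0, 2.0, size=300)

df = pd.DataFrame({"unit": units,
                   "intervention": intervention,
                   "outcome": outcome})

# Random-intercept model: workers (level 1) nested within units (level 2).
model = smf.mixedlm("outcome ~ intervention", data=df, groups=df["unit"])
result = model.fit()
print(result.summary())
```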
As human services agencies become more aware of the importance of systemic and
organizational approaches to implementing innovation, evaluators also must develop
strategies to meet the challenges of conducting evaluation in a complex and ever-changing
system. Traditionally evaluation has regarded complexity and programmatic evolution as
sources of error to be controlled, and has conceptualized a program as having an ultimate goal
of disseminating a well-defined service to a well-defined population of users. Newer
approaches to evaluation build on implementation science and recognize the need to
understand the mechanisms by which the context of an intervention facilitates or hampers
change efforts. However, much of the discussion regarding implementation science occurs in
the context of evaluating evidence-based practices. With these interventions, the ultimate goal
of evaluation often is conceptualized as developing an evidence base for a clearly defined and
standardized intervention. While most programs are not expected to be completely static, OE
interventions have a qualitatively different expectation regarding program development and
change grounded in the concepts of a learning organization and continuous quality
improvement. This has prompted some to suggest that new evaluation strategies are needed
(Horton et al., 2000) for evaluations of organizational improvement. As yet there is no magic
approach to the evaluation of OE interventions that solves all of the practical problems
associated with evaluation in this arena. There is much more work to be done to understand
how OE interventions can be evaluated in ways that are both rigorous and suited to their
complex and fluid nature. However, the lessons being learned from implementation science
and systems theory offer promising directions to explore.
References
Barth, R. P. (May 29, 2009). What lies in the future for evaluation in the field of child
welfare? Closing plenary presented at Child Welfare Evaluation Summit. Washington,
D.C.
Brown Urban, J., & Trochim, W. (2009). The role of evaluation in research-practice
integration: Working toward the "Golden Spike." American Journal of Evaluation, 30,
538-553.
Chinman, M., Imm, P., & Wandersman, A. (2004). Getting to outcomes™ 2004: Promoting
accountability through methods and tools for planning, implementation, and evaluation.
Santa Monica, CA: RAND Corporation. Retrieved from
http://www.rand.org/pubs/technical_reports/TR101.
Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods
research. Thousand Oaks, CA: Sage.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005).
Implementation research: A synthesis of the literature. Tampa, FL: University of South
Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation
Research Network.
Glisson, C., & Hemmelgarn, A. (1998). The effects of organizational climate and
interorganizational coordination on the quality and outcomes of children’s service
systems. Child Abuse and Neglect, 22, 401-421.
Horton, D., Mackay, R., Andersen, A., & Dupleich, L. (2000). Evaluating capacity
development in planning, monitoring, and evaluation: A case from agricultural research.
ISNAR Research Report No. 17. The Hague: International Service for National
Agricultural Research. Retrieved from
http://www.ifpri.org/isnararchive/Publicat/PDF/rr-17.pdf.
Kanak, S., Baker, M., Herz, L., & Maciolek, S. (2008). Building effective training systems for
child welfare agencies. Portland, ME: University of Southern Maine.
Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. ILAC
Brief Number 16. Retrieved from
http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief16_Contribution_Analysis.pdf.
National Staff Development and Training Association. (2010). A key to success: Guidelines
for effective staff development and training programs in human service agencies.
Washington, D.C.: American Public Human Services Association.
Nugent, W. (2010). Analyzing single system design data. New York, NY: Oxford
University Press.
Parry, C., & Berdie, J. (1999). Training evaluation in the human services. Washington, D.C.:
American Public Human Services Association.
Roth, D., Panzano, P. C., Crane-Ross, D., Massatti, R., Carstens, C., Seffrin, B., &
Chaney-Jones, S. (2005). The innovation diffusion and adoption research project
(IDARP): Moving from the diffusion of research results to promoting the adoption of
evidence-based innovations in the Ohio mental health system. In D. Roth & W. J. Lutz
(Eds.), New Research in Mental Health, 16, 78-89. Columbus, OH: Ohio Department of
Mental Health.
W.K. Kellogg Foundation (2004). W.K. Kellogg Foundation logic model development guide.
Battle Creek, MI: W.K. Kellogg Foundation.
The field of training and development in human services incorporates many disciplines,
different roles and responsibilities, and varied human services settings. But amid these
differences are binding similarities as evidenced by the interest in NSDTA’s annual
professional development institute.
The state of the field section is intended to share information concerning current issues,
trends, and potential future developments in the field of training and development in human
services. Its goal is to provide markers of the current developmental state of the field, specific
to human services training and development. While there are similarities between the training
and development functions in business and in human services, there are also many distinctly
different issues that human services training and development professionals encounter.
In the current issue, in-depth organizational system case studies from the Positioning Public
Child Welfare Guidance Institute and four statewide human service systems (Arizona,
Pennsylvania, Texas, and Minnesota) illustrate the use of organization development principles
in action.
In partnership with Casey Family Programs, the American Public Human Services
Association (APHSA) and its affiliate the National Association of Public Child Welfare
Administrators (NAPCWA) launched the Positioning Public Child Welfare Guidance
(PPCWG) Institute in the spring of 2010. The purpose of the PPCWG is to establish markers
of effective performance for the field of public child welfare. The institute provided a unique
opportunity for six selected agencies to learn to apply and embed the PPCWG through a
structure for change management and a continuous improvement process. Using content from
the change management section of the PPCWG, participating agencies engaged in processes
designed to make improvements in agency effectiveness and, by extension, in outcomes for
children, youth, and families.
The PPCWG was completed by NAPCWA in 2010 to establish parameters and provide
guidance for effective actions to enable public child welfare agencies to achieve the goal of
providing effective services that improve outcomes for the children, youth, and families
served. It is the culmination of work by leaders and stakeholders in the public child welfare
field to define “what we do, how we do it, and how we must hold ourselves accountable.” The
PPCWG takes into account the unique and complex context in which public child welfare
work is done.
The PPCWG was developed through an initiative which began in January 2007, led by
NAPCWA in partnership with Casey Family Programs. It drew upon significant input and
support from state and local child welfare agency directors and administrators, front-line
social workers, support staff, advocates, academics, researchers, consumers, and
representatives from other human service systems that interface with child welfare, including
juvenile justice, education, and physical and mental health.
Each of the 14 guidance domains was determined by the executive committee and advisory
committee to be a critical factor in agency performance. Taken together, these domains form a
systemic framework for agencies that are interested in making improvements that are
sufficiently comprehensive to have the desired impact. The intersections of the 14 domain
areas are depicted in the analytical framework shown in Figure 1.
[Figure 1. The PPCWG analytical framework, depicting the intersections of domains
including Outcomes, Strategy, Public Policy, Research, Technology, Budget and Finance,
and Disparity and Disproportionality.]
The PPCWG website has sections about each of the 14 critical areas. Each section:
• Defines the critical area, explains why it is important, and enumerates the questions
the guidance will address
• Defines markers for effectiveness for each domain at different levels of how that
domain plays out within an agency: strategy, key processes, operations, and
implementation
• Contains links to other PPCWG pages and resources
The 2010 PPCWG Institute, developed and delivered by staff at APHSA, focused on enabling
participants to generate practical solutions and approaches to their agency and child welfare
practices, and on providing a structure for participants to make real changes in their agencies.
The institute engaged participants in a learning by doing experience that tapped into their own
experiences and provided opportunities to apply PPCWG content to plan, implement, and
monitor a comprehensive change effort. Major objectives of the institute were to have
participants
• Use the PPCWG to define areas of importance and assess the related strengths and
weaknesses of their state or local child welfare system.
• Build structures and processes for continuous improvement that would best support and
sustain the implementation of identified remedies
• Learn from one another and build networks for ongoing support
Design Principles
Both the PPCWG and the institute design were informed by four guiding principles:
• Be holistic. To be holistic means to look at the big picture, whether it is the family or
the work of the agency; consider the context in which the work is done; meet the client
or the agency where they are; and put energy into closing the gaps in performance and
capacity.
The institute’s structure and process was based on a learn by doing approach, consistent with
adult learning theory. Six state and local agency teams came together to participate in three
two-day sessions that began in May and ended in December. The sessions generally
consisted of facilitated discussion rather than content presentation. Before and between each
session teams applied the structure for change management and process for continuous
improvement explored during institute sessions. Institute staff were available for on- and off-
site consultation throughout this timeframe.
The institute curriculum provided content related to understanding the PPCWG, particularly
the change management guidance. The institute provided a structure for organizing
sponsorship and committees to support a change effort. It also covered a systematic
continuous improvement process through which participants learned how to define an area for
improvement that would reduce the number of children in out-of-home care. Once an
improvement priority was defined, participants learned how to assess their agency’s strengths
and gaps as well as the root causes and general remedies for those gaps. This provided a
tangible critical thinking basis for participants to plan for, implement, and monitor change
efforts.
The structure and process of the institute paralleled the way agencies work with the families
they serve. Just as the experiences of the families must be considered when planning for
change, the experiences of agencies must be taken into account when planning for change.
Participants’ experiences were shared in all face-to-face institute sessions to learn about each
other’s challenges and to assess what worked, what did not, and why. This enabled the group
to identify root causes of ineffective performance, capacity-related problems, and what could
be done to solve them.
Institute applicants were required to address their agency’s perspective on six dimensions:
1. Clear sense of challenges and benefits. Is there insight into the agency challenges
and areas needing improvement, coupled with an understanding of how the institute
could help meet the challenges to reach their proposed outcome goals?
2. General parameters. Is there evidence of compatibility between the agency and
PPCWG values and principles?
3. Readiness for change. Is there a general understanding of the systemic and systematic
nature of effective change efforts as reflected in past experiences?
4. Sponsorship and other resources. Are agency personnel and stakeholders who
would support the effort identified, including those with the authority to empower
change?
5. Team members. Are a project manager and other team members identified who have
the needed experience to identify problems, assess gaps and root causes, and plan,
implement and monitor change?
6. Program areas. Does the identified institute work pertain to Fostering Connections
and the safe reduction of children in care of the agency?
Applicant agencies identified areas of improvement that they wanted to impact through
participation in the institute, and letters of support were written by the leadership of the
agency affirming that each designated team member would participate in all four face-to-face
institute sessions, have time to prepare for each session, and receive the time, support,
resources, and authority they would need after each institute session to plan, execute, and
monitor a change process.
Participating teams ranged from three to five members and included program and
administrative staff from the executive director level to front-line supervisors. One team
included the leaders from partner agencies within their community.
During Institute sessions teams got to know, support, and learn from each other's experience
as well as understand the nature and content of the PPCWG and how it could be applied to
day-to-day activities. In this way participants developed a clear picture of the strengths and
weaknesses of their organizations and designed strategies for change specific to their
situations.
Pre-Institute
Before the first session of the institute, participants were asked to read the strategy, change
management, leadership, and public policy sections of the PPCWG as illustrative of the
guidance as a whole.
Teams were also introduced to the idea that effective learning and improvement relies on
teams that:
• Are sincere about their intentions for participating in learning and improvement
programs.
• Make things happen through a bias for action and learning by doing.
• Make connections between what they want to improve today and what they need to
improve overall to achieve goals and sustain progress.
• Know when they need on-the-ground support and ask for it.
• Accept advice and direction on ways to do things differently.
Participants were asked to think about and be prepared to discuss a critical area from the
PPCWG analytical framework that their team would work on during the institute. This area
for improvement would be further defined in their charter with their sponsors, and the
process of sponsor group formation and chartering would be an integral part of the first
institute session.
In addition teams were asked to start thinking about (1) a sponsor group of people with the
authority and willingness to provide the support needed to implement the work the team decided to
do, and (2) a continuous improvement team (CIT) that would be engaged to manage and
ensure work was done.
Session 1
The first session was structured for the participants to get to know each other better so they
would be more likely to work as a team as they developed their agency improvement plans. A
case study for each agency would be written to document challenges and approved by the
respective jurisdiction prior to its release by APHSA; participants signed release forms for
video and pictures. Each team reported to the group on their perceived readiness and what
readiness factors might need to be addressed to meet the challenges that a change effort would
present.
The agencies showed considerable variation in level of readiness to change. Each identified
non-monetary resources they needed to bring change about, including human capital and
inter-organizational partnerships. There was a general acknowledgement that information
technology and human resources had to be part of any change process and be represented on
change teams. The teams also reflected that often change efforts are conducted without
consideration for what is going on culturally within the agency or what staff capacity is
available.
The second day of the first session centered on APHSA's model for continuous
improvement, DAPIM (Define, Assess, Plan, Implement, Monitor). This method focuses
improvement teams on the following themes:
• Defining a problem leads to its clarification and the ultimate aim of an improvement
effort, positioning a team to increase engagement and commitment.
• Assessing a problem requires critical thinking to uncover root causes. Doing so saves
time and minimizes frustration by ensuring that activities undertaken for change are
relevant. Root cause analysis is critical for identifying improvements that have the
greatest chance of paying off.
• Planning is a preparation process in which concrete remedies are developed and tasks
and activities are mapped out to achieve goals and objectives.
• Implementing includes assigning tasks with leads and timeframes, and assuming
responsibility for managing the process toward heightened accountability.
• Monitoring promotes learning and adaptability and paves the way for updating plans
toward achieving the desired impact.
[Figure: The DAPIM continuous improvement cycle, with Define, Assess, Plan, Implement,
and Monitor arranged around performance and capacity.]
In the session's wrap up and evaluation, participants reported on what they liked about the session.
They requested that subject matter content be presented for shorter periods and be
interspersed with more hands-on work so it would be more concrete and less theoretical. This
feedback affirmed the institute design approach of minimizing lectures.
Intersession
During the first intersession, facilitators assessed the strengths and struggles of teams and
developed curriculum for the next session accordingly. Support to teams was based on
varying needs and ranged from telephone calls, to site visits, to meeting with sponsor groups
and helping form and facilitate continuous improvement teams. Each team had their own
challenges and experienced varying degrees of success based on their grasp of the process and
challenges within the agency and community served.
Lesson Learned
This intersession experience was consistent with the learning-by-doing principle. As teams
began to struggle with new ways of doing things, the facilitator/consultants used that
experience for coaching, encouragement, and support. Intersession experiences provided real-
world material to process in the second institute session.
Session 2
The teams reported on their intersession activities and their assessment of what was going
well and what needed improvement. Reports were divided into two sections: (1) sponsor
group formation, chartering, and continuous improvement team formation and (2) defining
and assessing areas for improvement. It was anticipated that the teams would have
accomplished the first two steps of the DAPIM process.
The charter template was found to be helpful, but the teams’ charters were also in various
stages of completion. Teams approached the charter work in various ways—some as an
individual activity and others as a collaborative effort engaging colleagues, staff, and external
stakeholders. Some of the challenges in completing the charters reflected the challenges of
engaging and finalizing the sponsor group and continuous improvement team rosters. Teams that were
able to present charters to their sponsor groups faced responses ranging from suggestions to
narrow the scope of the work to suggestions to broaden the scope of the work.
All participating teams initially focused on one defined area for improvement as outlined in
their application for the Institute. But as they assessed their organizations’ readiness, and
began to assess their strengths and gaps for their initially defined improvement area, many
teams elected to either broaden or refocus the scope of their project. This is consistent with
the DAPIM method, which operates as a flywheel with interrelated steps, and not as a linear
method. For example, defining a topic for improvement drives what you would initially focus
on for assessment; conversely, as you begin to assess the initial identified area of
improvement, your findings may help you refine what it is you want to improve.
By the end of the first day in Session 2, it was clear that the teams had achieved varying levels
of success. Facilitators adapted the institute curriculum and engaged participants in a learning-
by-doing experience specific to the institute itself. In real time on day 2, the facilitators used
the process of DAPIM to make adjustments to the overall timing of the institute. Participants
learned more about how to take an issue, define it clearly, assess the strengths and gaps
related to the issue, plan for change, implement the change, and monitor its progress. Near the
end of the second day, teams worked in small groups with their APHSA staff liaisons to
develop customized action plans for meeting their commitments by the next session. In the
session's wrap up and evaluation, participants reported that the "mini-DAPIMing" of the
institute itself was very important in the learning process; taking a theoretical concept and
applying it to a real-time experience embedded the concept.
Lessons Learned
While establishing a sponsor group and continuous improvement team roster may seem like a
simple task, it is one that is overlooked in many change initiatives. When these teams are not
thoughtfully formed and engaged, change efforts can stall.
• Change management: used the principles of inclusiveness and being holistic in the
development of sponsor group and continuous improvement team rosters. For
example, support functions (HR and finance) that could conceivably impede project
progress for legal and resource reasons were deemed important to include.
• Strategy: (1) Established both quantitative and qualitative outcome goals and
recognized the need for both (e.g., it is one thing to serve greater numbers and another
to ensure that the services are appropriate and adequate). (2) Embraced the concept of
slowing down to speed up. It is necessary to go slow enough to ensure sound planning,
to think methodically about outcomes, to promote buy-in with staff and stakeholders,
to ensure that all good ideas are considered, and to provide enough time for effective
implementation. This enables speeding up to properly reinforce a sense of urgency that
the improvement work is critical to outcomes for children.
Intersession
During this intersession APHSA liaisons maintained regular, ongoing engagement with their
assigned teams by way of telephone contacts and site visits. The liaison intervention depended on
team needs and receptivity. On-site activities included DAPIM define and assess work, such
as uncovering root causes by conducting focus groups with front-line staff. Planning and
implementation consultation was provided to the agencies that were further along in their
continuous improvement effort. The role of the facilitator was to share observations and
challenge participants’ decisions to enable their success.
Session 3
This session included extensive team reports covering their define and assess efforts as well
as the three plans: continuous improvement, communications, and capacity. Structured
presentation questions to be addressed were developed and provided to the participants to
guide and focus their reports.
In Session 3 it became clear that all teams were adapting the institute's materials to their
own particular situations. Institute staff endorsed this and reinforced support for the teams
moving at different paces. Doing so seemed to increase the teams' energy and momentum for
completing their projects in a timeframe that made sense to them. The team reports clearly
reflected this adaptation.
It was also evident that the four PPCWG and institute principles that resonated with the
participants in Session 1 were embedded in their work. The participants were comfortable
critiquing each other as supportive peers. And the teams were, overtly or intuitively,
applying mini DAPIMs to tasks within their plans. When this was pointed out, the response
was "Well, it just became part of the way we do things."
Teams also noted that they had become more mindful and intentional in their work,
recognizing, for example, that slowing down the change process may actually lead to better
outcomes. Teams acknowledged that they were each in a different place in the change
process, but felt that they were all moving in a positive direction. Each team was also able to
look at where they were on the DAPIM flywheel and clearly understood how they got there.
The schedule for day 2 was adjusted to accommodate participants’ having more time for
cross-team feedback. Facilitator presentations covered implementation, monitoring,
sustainability, and data.
Lessons Learned
As the teams began to internalize the PPCWG and change management techniques, they were
able to apply the content presented in prior sessions. Understanding content from the guidance
does not happen easily or immediately, but it is application that produces understanding and
continued use. Some of the content that was specifically utilized by the institute teams:
• Capacity assessment helps determine appropriate options when initiating and planning
a change effort.
• PPCWG critical areas of budget and finance, public policy, research, and technology
are the foundation for strengthening and supporting the work of others; play a vital
role in improving practices; make work easier, better, and faster; and help in allocating
resources.
• Quick wins need to come fairly early in a change process, and providing stories of
how quick wins are having an impact helps sustain the momentum for change.
• Key messages within well-developed communication plans, backed up with solid data
and commentary, are a vital lever in sustaining the momentum and energy for a
change effort.
• One can hire staff for their core competencies, but making a difference in terms of
safety and permanency requires analyzing data in ways that drill down to the basic
units of accountability and improved case planning.
• Resistance to change can be constructive or non-constructive, and it is critical for
those leading change efforts to engage resisters, understand the reasons for their
resistance, and respond accordingly.
The major work during the intersession was to review the project progress, identify what still
needed to be done and, most importantly, link the activities to outcomes. APHSA staff engaged
with the teams as in previous intersessions, helping them prepare for the formal closure of the
institute and to plan for sustaining their momentum for change. All the teams were at different
stages of progress at this point, but each team's efforts incorporated a common set of elements.
Session 4
Each team reported on their overall progress since the last session, including the barriers to
progress that they anticipated going forward. Each team also presented data dashboards for
connecting improvement efforts to outcomes, next steps for implementing these on an
ongoing basis, and a plan to monitor their data and use it to make ongoing adjustments.
Implementation and monitoring concepts and tools were presented in an interactive lecture
format, and the issue of post-institute sustainability was addressed in a facilitative manner.
Barriers for sustainability were identified and potential ways to meet them were considered
with input from the teams, including:
• Support from above (sponsors, new leaders and others who may be needed). The work
already done on the project and the PPCWG could be used to educate, inform,
and connect with new leaders. Teams should stay persistent and tenacious and
maintain connections through all parts of the agency for support. Teams should use
outcome measures to support their projects and provide a compelling vision to those
with the authority to empower them.
• Workforce and continuous improvement team capacity and financial resources. This
must be continuously evaluated as to its impact on the change initiative because of the
current economic climate of budget cuts and staff layoffs. At times, change efforts
should be limited or slowed down in line with available capacity to support them.
• Fidelity to the PPCWG/APHSA organizational effectiveness model for change
management and continuous improvement. This may be difficult without someone
checking in, asking questions, and providing feedback; the peer pressure of needing to
report progress to others can help maintain fidelity.
The session and institute came to a formal close with a celebration of the progress each team
had made, and statements from team members regarding how the institute experience had
helped them and their agencies.
Agency readiness assessment assumed much greater importance as the institute unfolded. The
related tools resonated with the participant teams and appeared to the APHSA
facilitators/liaisons to be a useful gauge for determining the speed, scope, and required
support for a change effort. Participants commented that readiness assessments provided
insight into the scoping of improvement priorities, and they suggested that such tools be used
before and during a change effort.
It is critical not to supply too much information to participants before they have a chance to
apply it. Participants were prepared to move quickly through the curriculum and were eager
to get to break-out activities. The first day of Session 1 was about 50 percent lecture
and group presentation and 50 percent discussion. Lecture periods did not exceed 20 minutes,
but participants indicated they thought the lecture component was too long and theoretical. It
became clear that the greatest learning takes place among peers, so even brief lectures should
be punctuated with discussion periods.
There must be detailed exploration of how well the participants understand what they are
supposed to do in an institute well before the first session.
Conducting a mini-DAPIM on the institute itself assumed great importance as a way to model
and reinforce the need to be systematic throughout any improvement effort.
Though each agency focused on different areas for improvement, the changes that each
envisioned through applying the PPCWG analytical framework and domain content,
supported by the institute's change management structure and continuous improvement
process, resulted in enhancements in agency capacity or practice. These enhancements are
perceived by the participants as positively impacting the capacity of families to provide safely
for their children in their own home.
Each participating agency team perceives a cause and effect link to the reduction of children
in out-of-home placement through improved practice:
• Kansas: By the close of the institute this team was poised to integrate the institute's
change management structure and continuous improvement process into daily
operations and at all agency levels (state, regional, and local). In June 2011 this team
reported that each region was employing the structure and process, and that the
approach was taking hold on an informal, grassroots level in many local settings.
Throughout the time that they participated in the institute, the project team set targets
for regions to work toward. The goal was to move a stalled but successful process
forward for safely reducing the number of children in out-of-home care. In state fiscal
year 2011 two of the six regional offices achieved their benchmarks. Between July 1
and November 30, 2010, Wichita reduced the number of children in care by 62, from
998 to 936. During the same period, the southeast region reduced the number of
children in care by 38, from 520 to 482.
On an aggregate level, there were 8,362 Kansas children in care at mid-year 2009,
8,275 at mid-year 2010, and 7,685 at the end of April 2011.
• Larimer: The Larimer team has been working with their community’s Family
Consumer Council (FCC) to become more effective and a stronger community asset.
In June 2011 the team reported it was very much engaged in the effort to strengthen
the FCC and was applying structure and process lessons from the institute in various
and novel ways.
Larimer reported that in July 2009 there were 251 children in care. In July 2010 there
were 196 children in care and as of May 2011 there were 199 children in care. During
that two-year period the recurrence of abuse rates remained virtually the same. It is the
hope that once the FCC’s strategy is fully realized, the community will be more
responsible for caring for the children and families within it, safely reducing the
number of children in placement with the public agency.
• Charlottesville: Results through early 2011 were promising. There has been a 20 percent annualized
decrease in foster care placements and a 22 percent annualized increase in discharges
to permanency through March. In 2010, the agency facilitated one family partnership
meeting compared to 11 in early 2011. At the beginning of 2010, Charlottesville had a
total of 215 children in foster care and in December of 2010, the number of children in
foster care had decreased to 106.
• Calaveras: This team believes that lowering the number of referrals coming into
the protective services hotline, by analyzing why the rate of calls is so high and
identifying where service needs may be met by other community agencies, will
increase staff capacity to provide essential services to families where the risk of
out-of-home placement is greatest. With improved and appropriate services to these
families, more children will most likely be able to remain safely at home, which
should result in a reduced number of children in out-of-home placement.
Calaveras is a small locality, and from December 31, 2009 to December 31, 2010 the
number of children in out-of-home care remained the same at 74. At the end of May
2011 that number had increased to 79.
• Baltimore: As of January 31, 2010, the County of Baltimore had 595 children in care. On June 30,
2010, there was a decrease of children in care to 560. On May 31, 2011, the number of
children in care had increased to 572.
• District of Columbia: By successfully embedding the agency's practice model in a way that engages the
entire agency, DC believes that they will establish better collaboration between its
casework functions and administrative functions (such as legal, fiscal, and human
resources). They believe that this, in turn, will increase the capacity of the direct
service staff to improve their services, strengthen parents' capacity to provide for
children safely in their own homes, and reduce placements for children.
As of December 31, 2009, the number of children in out-of-home care was 2,103 and
as of September 30, 2010, the number of children in out-of-home care decreased to
2,092. Since DC initiated improvement plans in the spring of 2011, trends are
expected to become evident later in the year.
Looking Forward
Casey Family Programs is providing funding through 2011 to do follow up with the agencies
engaged in this first institute and to provide assistance as needed to sustain improvements.
Depending on the needs of the agency, various types of assistance will be offered.
Any future PPCWG institute will be re-designed based on any new information gleaned from this
year of follow-up and on lessons learned from the 2010 cycle. Changes based on this institute
would include:
• Increasing pre-institute preparation to move the teams further into the process prior to
the first in-person session
• Providing an even clearer and more realistic preview of the work effort required by the
institute
• Clarifying intersession responsibilities to ensure that participants understand fully and
are able to connect their ongoing efforts to the PPCWG
• Conducting the institute over a full year rather than nine months to ensure that all
agencies, regardless of their readiness, have the opportunity to get to the
implementation phase of the work
• Reducing up-front reading of the guidance, focusing more on how to navigate the
PPCWG website and find the parts within each domain area that are applicable to a
selected project as participants proceed through improvement efforts
Background
Since the mid-1990s Arizona’s Nutrition Assistance program has been one of the fastest
growing in the country. It has experienced many ups and downs. The federal Food and
Nutrition Service (FNS) supervises the Supplemental Nutrition Assistance Program (formerly
known as food stamps). In 1996, the FNS levied a hefty penalty on Arizona for substandard
performance during the prior three years. In 1999 and 2003, Arizona earned bonus funding for
being among the top performing states. In 2007, Arizona was penalized $1,500,000 for being
one of the poorest performing states during the preceding two year period.
FNS and Arizona settled on an agreement for the agency to invest 50% of the sanction, or
$750,000, on new strategies for performance improvement, with the remaining 50% held in
abeyance pending the results of this investment. The state also faced class-action litigation
regarding its food stamp application processing.
Through an internal analysis, the Family Assistance Administration (FAA) of Arizona's
Department of Economic Security (DES) determined that its staff recruitment, development, and
retention were deficient and in need of improvement. FAA also determined that the key
positions of first-level supervisor and manager had the greatest impact on performance and
retention.
APHSA’s Approach
In early 2007 DES learned of a relatively new APHSA continuous improvement program that
had met with early success for a few state agency teams that had implemented it. The initial
request for APHSA support was for a two-day intensive continuous improvement
seminar/workshop for all FAA supervisors in Arizona’s most populated county.
APHSA’s initial consulting approach was to back away from the suggested intervention and
take things from the top—to probe for the agency’s concrete goals, objectives that the agency
believed would best address these goals, current strengths and gaps related to these objectives,
root causes for the identified gaps, and potential remedies for addressing the root causes.
Through a structured process of reflection, it became clear that FAA was certain about its
goals, objectives, and related gaps, but less certain about the root causes and remedies for
their primary gap—supervisory effectiveness. Through discussion of the initial request, it
became clear to FAA that while effective supervision was an important gap that needed to be
addressed, there were many reasons besides lack of training for the gap. Further, those reasons
were likely different and nuanced for each local office management team, subject to the
motivations and collaborative spirit of each of these teams. It also became clear that the
remedy employed would need to be focused on working directly with these actual teams to
continuously improve themselves, in essence “learning by doing.”
FAA formulated a plan to employ this program and contracted with APHSA to deliver
consultative services through direct facilitation work with the initial 11 office management
teams. In July of 2007 FAA kicked off the program with a full day workshop wherein the
concept of learning by doing as a method of continuous improvement was introduced to the
management teams for the first five participating offices. Also in attendance were a number of
management personnel representing policy development, training, systems, quality control,
and quality improvement. This initial workshop provided a forum for identifying and
understanding both agency and office priorities, which established the groundwork for each
team’s activities.
The DAPIM model provided a systematic approach within which team members learned
continuous improvement by actually solving real problems they had identified. Many times
throughout the process, as teams employed the DAPIM method, a tool was introduced from
the consultant’s tool kit or emerged from the team’s own work as a solution to a particular
problem. This is where more traditional training content fits into the DAPIM approach,
ensuring immediate application in the real world.
Another key aspect of the DAPIM method is to initially identify and work toward quick wins
that can be achieved within a month. Experiencing early success serves to reinforce the
approach and motivate team members to continue working towards mid- to long-range
solutions that may be more complex and time-consuming. For continuous improvement, the
analogy of turning a heavy flywheel was given to describe the difficulty and effort necessary
to make the first turn. Once momentum is established, the second and subsequent turns of the
flywheel become increasingly easy and intuitive.
One team, for example, created a time management tool to help them understand why
supervisors were not able to meet all of their job responsibilities during the work day. By
charting the time they spent on each activity, it became evident that supervisors had been
devoting their time to work that belonged to their staff. This phenomenon, described by
APHSA as “unconscious demotion,” had eroded the available time supervisors had to actually
supervise—limiting staff development, increasing the urge for supervisors to “take over” the
work of their staff, and intensifying this negative cycle.
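The article does not describe the team's tool itself, but the idea behind it, totaling logged time by activity to reveal where a supervisor's week actually goes, can be sketched in a few lines of Python with hypothetical categories and hours:

```python
from collections import defaultdict

# Hypothetical one-week time log for a supervisor: (activity, hours).
# "Case processing" here stands in for front-line work that arguably
# belongs to the supervisor's staff.
time_log = [
    ("case processing", 14.0),
    ("coaching staff", 3.0),
    ("meetings", 8.0),
    ("case processing", 6.5),
    ("performance reviews", 2.0),
    ("coaching staff", 1.5),
    ("administrative reporting", 5.0),
]

totals = defaultdict(float)
for activity, hours in time_log:
    totals[activity] += hours

week_total = sum(totals.values())
for activity, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{activity:25s} {hours:5.1f} h  ({hours / week_total:5.1%})")
```

A tally like this makes "unconscious demotion" visible at a glance: if staff-level work dominates the chart, supervising is being crowded out.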
Reversing this cycle became a central focus of this office’s supervisory team, initially in ways
that were quick wins, including improving meeting management, staging some fun and stress-
reducing activities, and eliminating some low-priority reporting activities. Over time, though,
the team began to consider strategies for mentoring and developing staff that would take more
time—rotating leadership responsibilities among staff members, developing trust through
thoughtful recognition, communication and teamwork activities, having more effective
“crucial conversations” with staff, establishing clear and consistent supervisory standards
throughout the office, delegating decision-making authority where appropriate, and
strengthening their performance management practices with support from HR.
Phase I
The APHSA consultant’s Phase I work with the first five teams was completed in October
2007, with all five teams reporting successful results from their quick win solutions and
putting longer-term continuous improvement plans in place as well. Most importantly,
participants on these teams had learned the DAPIM continuous improvement methodology
and continued to add strategies to their tool box on their own. Finally, each team enhanced
their level of trust with other teams and with upper management by having open and
constructive discussions, often about difficult and revealing subjects.
These observations were made by the participating teams themselves, as well as by their
managers, who participated in the facilitated sessions as observers, and most of FAA’s senior
management, who participated in monthly monitoring sessions. These sessions and the
participating management, including DES human resources and training staff, came to be
known as the “sponsorship,” better ensuring sustainability in terms of priority, time, and other
resources.
Phase II
FAA sponsorship and APHSA began Phase II of the effort by further considering the question
of sustainability and how to replicate the learning-by-doing approach as broadly and quickly
as was practical. Upon identifying the six additional local office teams to work with the
APHSA consultant, four agency employees who had served as observers during Phase I were
designated to be co-facilitators. The intent was for co-facilitators to implement the DAPIM
model at subsequent sites and, in effect, learn by doing themselves. Session observers were
expanded to include members of each office’s regional management team, ensuring a stronger
linkage between improvement and alignment efforts at different tiers of the organization.
Phase II of the effort met with similar success at the local office level, this time with
significant interest from a number of participants to become DAPIM facilitators in the future.
The APHSA consultant was therefore tasked to first demonstrate the DAPIM and facilitation
methods involved, and eventually to train internal FAA staff to facilitate the DAPIM process.
People who were chosen to learn the facilitation process were familiar with FAA, the offices,
and the people. Their role as a DAPIM facilitator was in addition to other job duties. Most had
little or no experience facilitating groups, but their commitment and passion for the DAPIM
process as a way to improve organizational effectiveness made them eager learners.
The process to train a DAPIM facilitator took three to six months. After observing the
process, the new facilitator was teamed up with an experienced facilitator to work with an
office as co-facilitators. Each prospective facilitator, then, had the opportunity to observe an
experienced facilitator work with an intact workgroup from start to finish. They also
received coaching and feedback from their experienced partner along the way.
At the Phase II sponsorship meetings, 10 additional internal facilitators were identified for
subsequent phases of the effort, including three staff from the DES training function. In
jointly planning for Phase III, to begin in March 2008, FAA sponsors and APHSA also
decided to develop an organizational effectiveness (OE) handbook to support FAA
facilitators in continuing their practice well after APHSA had stopped providing
direct support. This handbook would contain all of the models, tools, templates, activities,
examples, and narrative guidance that would be needed for an internal continuous
improvement facilitator.
At this point, FAA was viewing the DAPIM and learning by doing methods as an emerging
way of doing business for all of its management levels to learn and employ. In addition, after
receiving DES executive leadership support, plans were launched by the DES training
function to incorporate the approach in other areas of staff development throughout the
agency.
Phase III
In Phase III, four FAA facilitators from the first cohort worked alongside the new facilitators
in the local offices, with the APHSA consultant observing on an ad hoc basis in more
challenging local office settings or where internal facilitators were not yet available. The local
offices selected were a combination of five from the first 11 offices that still needed further
support, and three new local office participants.
At this point there were enough internal facilitators involved with the effort to also stage bi-
monthly sessions in which the FAA facilitators' practice was itself formally subjected to
DAPIM. Each facilitator would bring successes and challenges to the table, and the collective
team would then consult with one another on strengths, gaps, root causes and remedies, and
plans to continuously improve their individual and collective facilitation effectiveness.
Reports from the participants themselves and at the sponsor meetings cited this activity as
vital peer-to-peer support and development. These sessions were also instrumental in drafting
and revising key sections of the OE handbook, as many of the remedies identified constituted
team activities whereby the group created a new facilitation tool or guidance for addressing
facilitator challenges.
Phase IV
To kick off Phase IV of the effort in September 2008, the first edition of the Organizational
Effectiveness Handbook, which the 16 internal facilitators had helped develop during this
effort, was released.
At the beginning of this phase the sponsor group and director of FAA established eight 2008-
09 goals and objectives for sustainability.
During Phase IV feedback on the first edition of the OE handbook was converted into a list of
improvements that were delivered by APHSA at the beginning of 2009. The APHSA
consultant was also asked to work with the regional program manager team regarding the
continuous improvement of how they worked together and with their supervisor, the director of
FAA. Finally, the APHSA consultant began working with a project manager hired in
November 2008 as part of the goals and objectives for strengthening the sustainability of this
effort.
Unfortunately, heading into 2009 DES and FAA entered a period of sudden and severe
budgetary crisis with the onset of the recession. The ensuing changes resulted in a 65%
increase in applications at the same time as a 35% decrease in staffing. For primarily these
reasons, Phase IV wrapped up in January 2009 with a tenuous hold on sustainability.
Sponsorship for the effort at the FAA leadership level, DES executive level, from the training
function, and from a number of the FAA regional managers was not sustained, and the project
manager position created and staffed in late 2008 was eliminated. Today there are solid
pockets of DAPIM and learning by doing being practiced within FAA and some interest in
reviving the effort, but no formal plans for doing so are in place.
DAPIM Facilitation
Local FAA offices were chosen to learn the DAPIM process based on the need to improve
their timeliness and accuracy numbers as well as on the size of their office.
A key to the successful transfer of the DAPIM process to each management team and office is
the DAPIM facilitator. The role of the facilitator is to create a safe environment in which
participants can learn the process through first-hand experience and continue using the
process on their own. A critical piece of this facilitation is to get the participants to reflect
during each phase. This produced two positive results. First, participants became aware of
how their behaviors had either a positive or negative impact on the office climate and culture.
When participants chose to change behaviors, the climate and culture improved along with the
processes. Second, participants learned that when an implemented plan did not work, they
could learn from it and move forward to try again. Over time, participants came to realize the
process is continuous. The facilitators' approach reflected the following principles:
• Work directly with intact teams that work together every day.
• Build a safe, high-trust, team-oriented learning environment.
• Encourage teams to tackle real life work challenges through creativity and
experimentation.
• Facilitate continuous improvement for the aspects of performance of greatest
significance to the teams themselves.
• Build the capacity of participating teams to handle new and emerging challenges as an
ongoing way of doing business.
• Use team members’ expertise and insight about their own challenges to determine
which developmental models and tools should be introduced and when.
• Use an organizational needs assessment to determine developmental priorities in
alignment with organizational goals and objectives.
• Measure success by identifying concrete improvements to learners’ performance on
the job and to the lives of the organization’s clients.
Prior to the first session, each management team completed a survey and a one-on-one
interview with a DAPIM facilitator. This assisted the facilitator in determining each team
member's overall perception of the office operations and the level of mutual trust and
cohesiveness of the office staff. A key principle of DAPIM is to work on both tasks
and relationships as a holistic approach to organizational effectiveness.
The DAPIM process included a standard set of work products that the teams developed in
their sessions. Work products included identifying initial feelings about the work, setting
ground rules for relating with one another during the sessions, and identifying areas for
improvement.
In the first few sessions, the DAPIM facilitator introduced the DAPIM process to the
management team, working with them to identify improvement areas for the office in terms
that were operationally specific (define phase). Each targeted improvement area was then
assessed by its current state against the desired state, with strengths, gaps, root causes for
priority gaps, and related general remedies all identified (assess phase). A work plan for long-
and short-term remedies was then established (plan phase). By the third or fourth session,
implementation of each targeted area had begun (implementation phase). Many management
teams by this time were including their front-line staff in contributing ideas for the process
improvements. By the fifth and sixth sessions, the implemented short-term goals were
monitored and adjusted as needed (monitoring phase), while the team continued to develop
and implement their longer-term goals.
The implemented strategies that came out of each office varied even though the desired
business outcome was the same: to improve supervision and improve timeliness and accuracy
numbers. The variance in the strategies demonstrated that when the work teams were taught to
use the DAPIM process, the freedom to be creative, innovative, and open to experimentation
surfaced. This was a very important aspect of the process to the individual offices. By
allowing each office to use DAPIM to solve their own problems rather than a one-size-fits-all
solution, each office was able to address specific issues and problems for their location and
demographics.
Results
Many of the local offices involved in this effort report that they’ve experienced significant
positive results in supervisory effectiveness and office results. Over time, several local office
management teams continued to use the process on their own without the need of a
professional facilitator. Many supervisors developed self-confidence through their successes
with the DAPIM process. Frontline staff often were brought into the process to improve office
operations, resulting in increased employee engagement in several local offices.
This approach to learning was a win-win for the participants and the organization. It
facilitated learning, improved individual and organizational performance, and reached the
desired business results of improved timeliness and accuracy numbers.
Throughout the plan phases and sponsorship meetings, the APHSA consultant charted local
office progress in terms of the facilitated work products that each team was expected to
generate through their session efforts (a hypothetical tracking sketch follows the list):
Initial Feelings
Ground Rules
Improvement Topics Defined
Strengths
Needs/Gaps
Root Causes
General Remedies
Quick Wins
Longer-Term Plans
Monitoring and Adjustments
Team Activities
Staff Communication
Staff Input and Management Follow-Up
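To make this charting concrete, the following is a minimal, purely hypothetical sketch of
how progress against this standard set of work products might be recorded; it is not
APHSA’s actual tracking instrument, and all names and data are illustrative.

    # Hypothetical sketch: charting each team's progress against the standard
    # DAPIM work products. Illustrative only, not APHSA's actual instrument.
    EXPECTED_WORK_PRODUCTS = [
        "Initial Feelings", "Ground Rules", "Improvement Topics Defined",
        "Strengths", "Needs/Gaps", "Root Causes", "General Remedies",
        "Quick Wins", "Longer-Term Plans", "Monitoring and Adjustments",
        "Team Activities", "Staff Communication",
        "Staff Input and Management Follow-Up",
    ]

    def chart_progress(completed_by_team):
        """Print each team's completed and outstanding work products."""
        for team, done in completed_by_team.items():
            outstanding = [p for p in EXPECTED_WORK_PRODUCTS if p not in done]
            status = ("all deliverables met" if not outstanding
                      else "outstanding: " + ", ".join(outstanding))
            print(f"{team}: {len(done)}/{len(EXPECTED_WORK_PRODUCTS)} ({status})")

    # Example: a team that has completed everything through General Remedies,
    # as several Phase II teams described below did.
    chart_progress({"Office A": set(EXPECTED_WORK_PRODUCTS[:7])})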
In Phase I, all five participating teams accomplished all of their deliverables (although one
team struggled somewhat to establish effective two-way communication with their office
staff). Many of these teams and the observers of these teams also reported personal and
collective breakthroughs in their teamwork, trust, and mutual respect. Some offices reported
major improvements to both timeliness and accuracy within six months.
In Phase II, all six participating teams accomplished their deliverables through general
remedies, and two of the six teams accomplished all of their deliverables, with each reporting
significant personal and collective breakthroughs in teamwork, trust, and mutual respect. One
office that had particular success implementing a range of quick and mid-term initiatives
reported, “I have been the Local Office Manager in [my office] for over 12 years and was
unable to accomplish what DAPIM did in a matter of weeks and months.”
Of the four teams that did not accomplish all of their work products, two fell short on
longer-term planning but were otherwise moving forward effectively, while the other two
seemed to be stuck. The primary root causes were determined to be (a) the level of support
provided by the local office manager or the regional manager, and (b) the relative
effectiveness of the (in-development) facilitator involved.
In Phase III, internal FAA facilitators became responsible for monitoring the progress of the
local offices with which they were working, using an internal evaluation format that focused
on timeliness, accuracy, customer service scores, and local office manager self-reporting.
The APHSA consultant’s focus shifted to guiding the progress of the internal facilitators,
who also reported their progress during the facilitator DAPIM sessions and sponsor sessions.
Most of the internal facilitators reported acquiring a working knowledge of and hands-on
proficiency with DAPIM through learning by doing; this was affirmed and reinforced during
the collective continuous improvement sessions with the APHSA consultant and
demonstrated by the work products yielded by the local office teams with whom they
worked. One of the most proficient internal facilitators left the agency, however, while two
others struggled to acquire a working proficiency. Paradoxically, these two individuals
worked in trainer roles, while all but one of the other internal facilitators were either local
office managers or members of FAA’s regional management.
During Phase IV, the initial plan was to transfer ownership of the effort to the regional
offices. Monitoring and evaluation efforts were very limited, and the effort was no longer a
formal one by February 2009. The comprehensive set of goals and objectives for sustaining
this effort through 2009, though adopted by the full sponsor group in September 2008, was
never formally monitored.
By the end of 2007, FAA had improved Nutrition Assistance program accuracy to 95.13%,
quickly fulfilling the sanction settlement agreement requirements. New and more ambitious
accuracy goals were established for 2009. FAA received a substantial monetary award in
2008 for results well beyond FNA standards during 2007, maintaining high levels of both
accuracy and timeliness.
The FAA program results achieved in 2007 have not been sustained since. According to
FAA, error rates from 2008 to 2010 increased over the 2007 rates due to the state budgetary
crisis. Approximately 20 FAA local offices were closed, and workers were moved to other
offices to handle the increased workloads stemming from the recession and other factors.
Front-line workers were unable to keep up with the number of incoming applications, both
online and in person. FAA management’s energy went into the reconfiguration of offices and
dealing with the upward spiral of applications.
APHSA’s evaluation framework can be summarized through the following four statements:
In the case of Arizona’s FAA, the facilitation was typically good to excellent, with some
planned variation because the organization was developing internal facilitators as it
progressed through the project phases. The OE tools were also a work in progress for the
internal facilitators who were developing their practice. Top sponsor willingness to make the
changes needed for the organization to accomplish its goals was typically good to excellent,
though that same level of willingness became inconsistent and questionable when the
ownership of the effort began to shift from a central point of sponsorship to ownership at the
regional level.
Many organizations experience change as an event initiated at the top, regardless of whether
the people on the front lines are ready, willing, and able. In a spirit of innovation, FAA tried
a different direction to get the organization ready and willing to adapt to the changes it might
face. The DAPIM process was given to the front-line offices first rather than to the mid-level
to executive management teams.
Starting at a grassroots level, the expectation was that “seeding” the DAPIM process as a way
of doing business at the local office level would produce the desired individual and
organizational performance improvements across the division. The intent was to give front-
line offices the DAPIM tool first to help them improve their internal office operations so they
could respond proactively to the needs of their specific local communities while meeting
federal and state mandates.
Facilitators faced resistance at first and worked hard to build relationships and trust with the
office management teams. Only when the management teams began to see some positive
results from the work did they come to believe that DAPIM was a tool to help them rather
than a punishment, and that the mid-level to executive leaders were genuinely trying to
support them in a positive way. Trust began to increase, the “us and them” mentality began
to decrease, and the willingness and readiness to change increased.
Over the first year, the kick-off sessions and the debriefing sessions on office results
eventually began to build organizational enthusiasm and excitement. Management teams
hearing of the positive results started to request that the DAPIM process be brought to their
offices. The mid-level to executive management sponsors were pleased with the relational
and operational improvements occurring horizontally across the program and with the
growing desire to learn the process. But a disconnect still existed between the local offices
and the sponsors.
For the most part through the first three phases of this effort, the results were positive, as
expected. Some early phase participants reported the experience as a “breakthrough,”
“transformative,” or “career-changing.” One of the more successful local office managers was
promoted, another with significant personal identity concerns was recognized for much
improved performance, and one of the early facilitators became far more effective in all work
efforts as reported by their supervisor.
Beginning in the fourth phase, the results were far more negative and were not sustained, as
would be expected when sponsorship wanes (see the lessons learned below for more on
this). Other contributing factors to Phase IV difficulties included the oncoming budgetary
crisis and, ironically, a sense that the effort had paid off and no longer required as much
ongoing support and vigilance.
B. The work products, communications, and related activities coming from the
learning-by-doing process will result in improvements to the agency’s capacity to
perform.
In the local offices where the learning-by-doing and continuous improvement methods were
embedded and the related work products were accomplished, many improvements to the
local office climate—teamwork, trust, communication, and morale—were noted.
In the offices where many improvements to climate, teamwork, trust, and communication
were noted, improvements were also evident in staff efficiency, customer service scores, staff
retention, absenteeism, and staff complaints about their management. APHSA notes that a
task-relationship balance is very useful to teams that are selecting topics for improvement and
identifying root causes and associated remedies. DAPIM facilitators are trained to scan for
imbalances in a team’s or agency’s culture in this regard, and “lean the other way” to ensure a
balance begins to emerge.
In retrospect, there are a number of reasons why the DAPIM process did not develop deep
roots within the organizational culture and practices of its senior leadership. First, the
organization was spread out from one end of the state to the other. Second, the need to go
back and follow up with some offices slowed the schedule for reaching the remaining FAA
local offices around the state. Third, the new DAPIM project manager position became a
casualty of the budget cuts, as did the availability of DAPIM facilitators with time that could
be dedicated to working with offices.
When the state budget deficit came into full effect in late 2008 and early 2009, no DES
human service program was left unmarked. Most state agencies reduced their workforce,
either by leaving vacancies unfilled or through terminations. Some programs even lost
funding or ran out of funds before the end of the fiscal year. Not all FAA local offices had
been exposed to the DAPIM process yet; the practice was not rooted deeply enough to
become part of the culture throughout the state, and it lost momentum. The good news is that
the DAPIM process is still used by a few offices as a way to solve local office problems, and
those managers trained as DAPIM facilitators continue to use it as a way of doing business
when they see an opportunity to do so.
As improvement efforts took hold at the local office level in many settings, and despite
limits to the effort’s overall sustainability, impacts on client experience, timeliness, and
accuracy were realized in these settings. This may argue for beginning broad agency
improvement efforts where they will most likely have an impact on those being served on
the front lines. However, sustainability was limited by this very fact: because the effort was
focused primarily in local settings, ongoing sponsorship at the regional and central office
levels was not as strong as it would have been had these been the first or primary
improvement teams facilitated.
Lessons Learned
First, and as described above, the role and importance of sponsorship cannot be overstated in
terms of the sustainability of this type of intervention. Where those in authority were
committed to the value of this effort, results were far better than in settings where this was
not the case. As APHSA has evolved its practice within other agencies, it has become far
more focused on this very condition of sponsorship and its impact on sustainability, and will
continue to be so.
The FAA executive leadership and mid-management teams might have gone through their
own DAPIM process earlier, perhaps even at the same time as the first set of offices. Maybe
knowing that their leaders were also going through DAPIM would have lowered the
resistance of the first offices; indeed, whenever the FAA director communicated about her
own “personal DAPIM work,” the APHSA consultant and internal facilitator team noted a
significant increase in energy for the effort overall.
Also, the strategies and results coming from the local offices could have been more
concretely aligned with the improvement strategies and results of the mid-level and
executive teams. Each level of the organization had its own DAPIM work to do, and it all
needed to intersect and work together for the overall good of the program and to strengthen
the relational link between all levels of the program.
If the DAPIM process had been disseminated from the top down and across the program at
the same time, it might have truly become part of the organization’s culture, creating a
proactive and dynamic organization that could have withstood the reduction in the state’s
budget and the resulting workforce reductions, because the program would have been ready
and willing to act proactively at the first sign of trouble.
Second, and not discussed above to any significant extent, is the role and importance of
internal project management for an effort such as this. In Phases I-III of the effort, a project
manager with solid related skills and relationships was assigned, and the effort benefitted as
a result.
Third, the experience in Arizona has informed APHSA’s ongoing efforts to develop internal
DAPIM facilitators in other agencies.
When FAA facilitators started to learn to facilitate the process, there were no reference
tools. This was purposeful, so that facilitation did not become a structured pattern that had to
be followed. DAPIM is a fluid and flexible process. The best analogy to describe this is the
difference between classical music and jazz. Classical music is played by musicians as
written; no notes are added or omitted when a composition is played. In contrast,
accomplished jazz musicians improvise along with other musicians and still produce
harmony with an appealing sound. With that analogy in mind, APHSA wanted the
experience of the DAPIM process to be the central focal point rather than a rigid or rote
process on paper that had to be followed step-by-step.
Most of the FAA facilitators felt somewhat lost without something written that they could
refer to during their work sessions. A facilitator’s handbook was therefore developed that
included tips and techniques for facilitating the DAPIM process. There was disagreement
among the FAA internal facilitators as to when to give a new facilitator the handbook: as
they begin to learn, or after they have learned, so that they first have time for their own
DAPIM experience.
APHSA continues to strive for a balance between providing structure and clarity as new
facilitators develop their practice and allowing for each facilitator to make the practice their
own so it is intuitive, accessible, and delivered in a dynamic and responsive manner.
An intriguing result was that two of the three internal facilitators selected from the training
function were not successful. Further study is recommended regarding the differences in skill
and temperament between what a classroom trainer does when performing well and what a
consultant or facilitator does. As described above, the best analogy may be one of playing
either classical trumpet or jazz horn: while built upon similar foundational skills and practice,
each type of performer’s temperament and motivation for playing music may impede their
progress in the other form of playing. In this case, the APHSA consultant concluded that one
of the trainers was too rigid about an instruction-centered approach to acquire a more
facilitative technique. The other trainer was too content-centered—accustomed to presenting
conceptual training content for its own sake—versus facilitating in a manner that yields
customized and concrete work products from a team. In contrast, the trainer who was
successful was highly skilled in adaptive and participant-centered training techniques. A
model that elaborates upon trainer characteristics in these ways may be useful to the field.
It is also important to understand that FAA internal facilitators were conducting these sessions
in addition to performing their normal job duties. The facilitation of the DAPIM process was
voluntary, with the approval of their immediate supervisor. As workload increased for many
of the facilitators, in large part due to the budgetary crisis and increased applications that
ensued beginning in late 2008, their availability to facilitate decreased. In some cases they
were pulled from providing facilitation assistance altogether.
A final lesson learned is that despite the many positive impacts of embedding continuous
improvement methodologies at the supervisory level of an agency, a major trauma to the
broader organization and environment can limit full implementation of DAPIM.
The Pennsylvania Child Welfare Training Program (CWTP) is a collaborative effort of the
University of Pittsburgh School of Social Work, the Pennsylvania Department of Public
Welfare, and the Pennsylvania Children and Youth Administrators. It was established to train
direct service workers, supervisors, administrators, and foster parents in providing social
services to abused and neglected children and their families. The CWTP is centrally managed
and regionally administered by the University of Pittsburgh School of Social Work.
During the first round of the Child and Family Services Review (CFSR) in 2003, the CWTP
was recognized as being in the top tier of child welfare training systems by the
Administration for Children and Families. Despite the national recognition, the CWTP
engaged in a transformation initiative following the first round of the CFSR. The goal was to
expand a strong classroom-based system for developing staff knowledge and skill into a
system that also would have the capacity to support agencies in improving overall
organizational effectiveness (OE).
To achieve the goal, the CWTP collaborated with the American Public Human Services
Association (APHSA). The CWTP documented its success and challenges throughout its
transformation through an informal tracking system. In 2009, the CWTP re-engaged with
APHSA with the goal of familiarizing OE staff with updated APHSA OE models and tools.
At the same time, Pennsylvania was engaged in the second round of CFSR.
What follows is a description of the process the CWTP engaged in to build OE capacity as
well as the impact of OE facilitation in one local county agency. Also discussed are lessons
learned and planned next steps to support positive outcomes for the children, youth, and
families served by the Pennsylvania child welfare system.
Background
Pennsylvania’s child welfare system is state supervised and county administered via 67
county agencies. The CWTP provides both training and technical assistance to county
agencies.
OE facilitation was first introduced to county agencies by the CWTP in 2004. At that time,
OCYF was interested in the CWTP supporting practice improvement efforts identified in the
Program Improvement Plan (PIP) such as family engagement, youth engagement, building
systems of care, staff retention, visitation, and risk/safety assessment.
The CWTP partnered with APHSA to develop practice improvement specialists through
coaching and mentoring by APHSA. The OE efforts were designed to transform the CWTP
from a system that delivered traditional classroom training based on individual needs of staff
toward a system that offered a continuum of products and services that ranged from
traditional classroom training to OE related technical assistance based on organizational
needs.
At the core of this transformation was the development of training and technical assistance
teams to support local county agencies. The practice improvement specialists became key
members of the teams, along with transfer of learning specialists and independent living
specialists. The regional teams were supported internally by departments accountable for
curriculum and trainer development, coordination of training delivery, and technical and
administrative support. Since 2004, teams have collaborated with county agencies to
complete organizational needs assessments that identify training and technical assistance
needs. Each assessment leads to an individualized training and technical assistance plan that
the team uses to coordinate a continuum of services to support county agencies toward
improving outcomes for children, youth, and families.
OCYF and the Pennsylvania Children and Youth Administrators view this continuum of
services as critical in supporting county agencies in using OE models and tools to support
quality improvement efforts. When developing the PIP in 2009, Pennsylvania linked the
APHSA OE approach to the newly developed CQI process that was embedded in the PIP as
a primary strategy to support change at the local level. As part of this linkage, the CWTP is
viewed by both state and county leaders as a key resource in supporting county agencies in
the development of continuous improvement plans to improve services to children, youth,
and families.
Since 2004, APHSA has continued to develop OE models and tools and to support
organizations in building OE capacity. In 2009, the APHSA OE department compiled the
models and tools into an OE handbook; the overarching purpose of the handbook is to serve
as a reference for those engaged in OE work.
DAPIM is the primary APHSA OE model. DAPIM enables work teams to drive continuous
improvement. The approach involves defining what the organization wants to improve;
assessing strengths and gaps in performance capacity, performance actions, outputs, and
outcomes; planning for remedies; implementing plans for maximum impact and
sustainability; and monitoring progress, impact, and lessons learned to sustain follow-
through and continuous improvement.
Teams engaged in a facilitated learning-by-doing process using the DAPIM model become
familiar with OE models, tools, templates, and methods to continuously improve in priority
areas. Work products that result from the continuous improvement effort include
development, implementation, and monitoring of quick wins; mid- and long-term continuous
improvement plans; and related communication plans. Monitoring techniques are used to
assess progress and adjust continuous improvement work as needed.
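As a purely illustrative aid, the five-phase cycle and its work products described above
might be sketched in code as follows; this is a minimal sketch with hypothetical names and
data, not part of APHSA’s published models or tools.

    # Illustrative sketch of one targeted improvement area moving through the
    # DAPIM phases; all names and data are hypothetical, not APHSA's materials.
    from dataclasses import dataclass, field

    @dataclass
    class ImprovementEffort:
        topic: str
        phase_log: list = field(default_factory=list)

        def define(self, desired_state):
            # State, in operationally specific terms, what to improve.
            self.phase_log.append(f"Define: {self.topic} -> {desired_state}")

        def assess(self, gaps, root_causes):
            # Compare current state to desired state; surface gaps and causes.
            self.phase_log.append(f"Assess: gaps={gaps}, causes={root_causes}")

        def plan(self, quick_wins, long_term):
            # Remedies split into quick wins and mid-/long-term plans.
            self.phase_log.append(f"Plan: {quick_wins} now, {long_term} later")

        def implement(self):
            self.phase_log.append("Implement: carry out remedies for impact")

        def monitor(self, lessons):
            # Lessons learned feed the next round of continuous improvement.
            self.phase_log.append(f"Monitor: lessons={lessons}")

    effort = ImprovementEffort("case documentation timeliness")
    effort.define("all cases documented within 48 hours")
    effort.assess(gaps=["backlog"], root_causes=["unclear roles"])
    effort.plan(quick_wins=["daily huddle"], long_term=["role clarity work"])
    effort.implement()
    effort.monitor(lessons=["huddles helped; roles still unclear"])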
In 2009, the CWTP sought support from APHSA to continue the professional development
of its staff by introducing them to updated APHSA OE models and tools. At the same time,
the CWTP had just completed a year-long restructuring effort. The on-site effort took place
from September 2009 through March 2010.
The CWTP identified sponsors that included members of the CWTP’s leadership. The
sponsors worked with APHSA to develop a work plan to achieve desired results.
The pilot group included all staff from the CWTP’s Western Regional Team, select staff
from each department within the CWTP, WCCYS staff, OCYF regional staff, and statewide
technical assistance partners.
From September 2009 through March 2010, the following was completed:
• Pilot group members were familiarized with APHSA’s updated OE models and tools.
• The pilot group engaged in DAPIM and a learning-by-doing session to become
familiar with the OE handbook, develop a desired future state for the Western
Regional Team in supporting county agencies, and plan for application of OE
handbook narrative guidance, models, templates, tip sheets, and team activities in
WCCYS as part of their CQI efforts.
• A continuous improvement plan was developed to support the Western Regional Team
in achieving its desired future state.
• Using lessons learned from the pilot, a plan was developed for engaging all CWTP
staff in learning-by-doing continuous improvement efforts. The plan was designed to
introduce all staff to OE models and tools and the content of the OE handbook in order
to make OE a way of doing business within the CWTP. The success measure
identified in the plan was for each of the four regional teams to use updated OE
models and tools in their work with county agencies resulting in county continuous
improvement plans. In addition, all staff members would use the OE models and tools
in planning for their day-to-day work tasks—modeling best practice for state and local
leaders. The pilot group planned to use the learning-by-doing approach, versus a
traditional training-on-content approach, to convey the content of the OE handbook. The
pilot group served as the implementers for building internal OE capacity. As part of the
plan, pilot group members facilitated each regional team and all departments in
learning-by-doing sessions.
• The pilot team engaged the WCCYS sponsor team and continuous improvement team
in DAPIM learning by doing, resulting in a continuous improvement plan for the local
county.
In June 2010, APHSA conducted focus groups and interviews with the pilot group and
WCCYS leadership and staff to gather lessons learned and assess the impact of the
continuous improvement efforts in the CWTP and WCCYS. The data collected focused on
several key areas.
At the time the data was collected, the CWTP was still implementing its internal plan to
build OE capacity, and the Western Regional Team was continuing to support WCCYS in
OE efforts. The following outlines the findings from the focus groups and interviews.
Understanding of Purpose
There was consensus among pilot group members on the purpose and anticipated outcomes
of the pilot. Generally, pilot group members understood that the purpose of the pilot was to
introduce participants to the updated OE models and tools, enhance the OE facilitation skills
of all regional team members, and engage all CWTP staff in OE as a way to model and
promote continuous quality improvement, and that “OE was the way we were to do business
in Pennsylvania counties.”
As the pilot progressed, an awareness developed among members of the pilot group of the
connections OE would have to Pennsylvania’s CQI process; specifically, the pilot brought to
light the critical role that CWTP practice improvement specialists would have in CQI as OE
facilitators.
Value of Learning by Doing and Impact of OE Models and Tools on the Practices of
Regional Teams
Members of the pilot group agreed that the learning-by-doing approach was valuable to
them. One member commented, “I did not appreciate the learning by doing approach until
about midway through the process. It is powerful and infiltrates the process in subtle ways.”
Another commented, “The learning by doing approach allowed for the opportunity to learn
about the models while getting the feedback from the county to frame how we should do our
work.”
Pilot group members observed that the parallel process of learning about the OE models and
tools while applying them in WCCYS had a positive impact. Processing the use of the tools in
the county agency allowed them to understand how to effectively facilitate OE based on
county input. The learning-by-doing approach “allowed the opportunity to learn while getting
the feedback from the county; it framed how to do the work.” Processing the work in WCCYS
modeled a team approach to working with counties.
The DAPIM model was viewed by all pilot group members as a valuable and critical part of
their work moving forward. Pilot group members’ comments about DAPIM included:
“The DAPIM model has had a big impact. I have a clearer understanding of what I am
trying to do in a county. My role is to help them find their own solutions versus
jumping to solutions for them. This approach to problem solving and project
management works.”
“The biggest thing for me was the define work. We were not stopping to define. We
just jumped to solutions.”
“If we really want to change we need to know where we are going. We need to do
really good define work.”
“[The] process has helped us to ask others: what is the target, what is your desired
future state?”
One system partner who participated in the pilot commented, “As a partner in the process it
became clearer what OE is and how we are there to support the OE efforts after the
D[efining] and A[ssessing] work is done. We can focus on remedies to gaps.”
Members of the pilot group also noted the value of root cause and remedy work, specifically,
how remedies were organized into recommendations, commitments, and work team activities.
The pilot group saw this as a way to organize the work for county agencies into manageable
“things they can do and what needs to be bumped up as recommendations.” Root cause and
remedy work was also viewed as a strength-based way to advance the needs of the county
agencies to OCYF. The pilot group viewed the recommendations that would surface during
DAPIM as a positive way to inform statewide planning efforts “versus planning then telling
the counties to implement.”
Pilot group members commented on developing an appreciation for “monitoring” and using
lessons learned to adjust plans versus “just moving forward.” Developing concrete work plans
and then monitoring the work plan helped pilot group members see the impact on outcomes. It
was reported that monitoring is currently being used in the day-to-day work of pilot group
members. Pilot group members also reported encouraging all CWTP staff to use monitoring
techniques.
The capacity planning team activity allowed pilot group members to define the tasks they
should be doing and then reflect on the tasks they actually do. This team activity resulted in
the development of a work plan tool, which was viewed as an important data collection tool
moving forward. Pilot group members felt the data could be used to make decisions on
agency structure and workload assignments. One pilot group member commented, “The tool
would allow decisions based on data, not assumption.” In addition, it was reported that the
capacity planning team activity has been completed in county agencies with success.
Communication was identified as a gap for the CWTP. It was reported that the
communication planning template was being used by pilot group members in their day-to-day
work. The tool was reportedly assisting CWTP staff in thinking through all of the elements to
consider when planning for communication.
The trust model and safety and accountability model provided a way for pilot group members
to think about root causes for some of the organizational gaps they face. The models gave
pilot group members a safe way to talk about sensitive topics. One pilot group member
commented, “The power of using the tools was an important part of the process.”
The markers for effective OE facilitators helped pilot group members to understand the
difference between training and facilitation. The markers are currently being used to assist
with professional development of the Western Regional Team. The CWTP is creating markers
of effectiveness for other roles in the CWTP to assist in role clarity.
The readiness model was viewed as a tool that could assist the CWTP in selecting county
agencies for CQI reviews.
Pilot group members reported the structure of the continuous improvement effort has had a
positive impact on the Western Regional Team. Examples of the impact on the team included
the following:
• Using the DAPIM model in team meetings to discuss and plan for work with counties
• Monitoring the continuous improvement plan developed during the pilot at team
meetings
• Using the markers for effective OE facilitators to develop professional development
plans
The supervisor of the Western Regional Team expressed that the pilot provided the
opportunity to develop a supervision style that supports the Western Regional Team in using
OE models and tools.
Some pilot group members felt the structure of the pilot allowed those not on regional teams
to develop a better understanding of OE and how it can support county agencies in
developing and delivering quality services to children, youth, and families. One pilot group
member commented, “Just spending time together with OE staff helped to share a unity of
purpose.”
The pilot structure brought to light the importance of OE to the CQI efforts in the state. OE
was viewed as the way to facilitate the changes needed to ensure quality services to families.
Pilot group members expressed having a clearer understanding of how important practice
improvement specialists would be to CQI.
The system partners that were members of the pilot group felt the structure allowed them to
have a “stronger relationship in the west.” It was reported that pilot group members are
continuing to collaborate on work with county agencies and that there is a greater awareness
of the need to connect with each other as technical assistance and training providers.
Concerns about the structure of the pilot focused on communication, lack of involvement by
all regional team supervisors, and lack of role clarity on using OE models and tools in the
day-to-day work of the CWTP.
Pilot group members did express concern that the structure of the continuous improvement
effort resulted in a lack of buy-in from all CWTP staff. It was reported that “staff have been
slow to respond.” Some felt the reason for the lack of buy-in was poor communication
during the pilot process itself. Despite communication planning at the end of each pilot
session, internal communications about the activities taking place during the pilot appear to
have been lacking. With the pilot completed, pilot group members felt that the
communication plans developed and implemented during the pilot did not have the intended
impact. It was reported that CWTP staff generally did not make the connection between the
OE kick-off session in September, which all CWTP staff attended, and the pilot. It was also
reported by pilot members that staff thought OE would only occur in the OE department.
One comment on the breakdown in communication:
“We missed a great opportunity to talk about OE in our staff meetings and day-to-day
interactions.”
Pilot group members reported that not having regional team supervisors participate in the
pilot placed supervisors in a challenging position; as a result, supervisors are not all
operating from the same knowledge base of the OE models and tools. Some pilot group
members expressed a belief that a lack of understanding of OE by supervisors was creating
an “uncomfortable environment for supervisors.”
Some pilot group members recommended that role clarity work for supervising regional
teams using OE models and tools be completed. Some felt this role clarity work around
supervision should happen immediately in order to support the professional development of
supervisors.
Pilot group members expressed that if the supervisors and managers continued to receive
support in developing an understanding and working knowledge of how to use OE models
and tools in planning for the organization and coaching staff, the CWTP would remain
viable to the county agencies and the state.
As part of the pilot, a plan was developed to engage all staff in the OE effort. The plan
included a full-day meeting to re-introduce the OE models and tools to all staff. It also
included plans to facilitate learning by doing sessions to complete a DAPIM with each
department and regional team from May through December 2010.
The full-day session with staff was well received. It was reported that some staff speak of
DAPIM often, while others continue to see the effort as an OE department initiative. Follow-
up learning-by-doing sessions have been slow to be planned, with the exception of the
Statewide Quality Improvement Department and two regional teams. Root causes for the
reactions from staff appear to be “capacity issues” and “a feeling of not being included in the
pilot.” Leadership is viewed as committed to the OE work.
As the plan was being implemented, some pilot group members expressed concern about
maintaining the fidelity of the model. Some pilot group members noted a lack of skills and
knowledge among CWTP staff to do OE facilitation effectively. One team
member commented, “We have work to do to develop these [OE facilitation] skills in our
facilitators but we are embracing OE and are committed. Monitoring our plan will be
important.”
Understanding of Purpose
WCCYS leaders saw participation in the pilot as a way to support them in managing the
changes they were experiencing. They saw OE as a way to help them identify structural areas
to be enhanced in departments and units of their agency by engaging staff as part of the
process. Leaders specifically wanted to improve practice by improving attitudes of staff and
engaging families.
WCCYS staff who participated in the pilot viewed the OE efforts as an initiative of the new
leader: “new leader, new direction.” Staff also saw the purpose as a way to create a better
internal working environment. Staff viewed the effort as important, stating “the OE effort is
important; we think this can work.”
WCCYS leaders felt the learning-by-doing process was helpful in “understanding what the
agency is hoping to accomplish.” The leadership indicated that they “started large” by
looking at global issues in the agency. The leadership recently realized that if they could
start over, they would start DAPIM work in “smaller areas” with a more specific focus.
WCCYS leadership stated they would start by working with the management team and
expressed a belief that a stronger management team would support the agency in OE efforts.
WCCYS leadership also reported the agency has a focus on OE: “The agency is capable of
positive change.” The OE technical assistance process will support them in change.
WCCYS staff shared the views of leadership on the importance of the OE work. One staff
member commented, “It is important to know that we do think this [OE] can work. We care
about what happens in our agency.”
WCCYS staff said the OE process and activities have had an impact on their day-to-day
practice. They talked about being “more open-minded, aware of layers in the agency, and
more respectful of other staff.” One supervisor reported using DAPIM in supervision.
Staff participating in the pilot suggested the WCCYS sponsors of the OE work meet with
them to monitor work completed to date and discuss the recommendations they had for
moving forward with the OE work. WCCYS staff suggested working on methods for
engagement and teaming internally “just like we do with families.”
The OE models and tools viewed as having the most impact on the organization included
the team activities on role and authority; developing a mission, vision, and set of values;
creating a visual case flow; the frequently asked questions tool; and the safety and
accountability model.
WCCYS leadership was asked specifically about linking OE to the CQI. WCCYS leadership
suggested starting with a readiness assessment to determine if the agency has a strong
foundation to start CQI, and if not, start with OE to build readiness. Leadership felt OE could
help them prepare the agency for improving practice by creating a “strong foundation to
support change and then building strong practices on top of the foundation.” They were
hopeful that linking OE and CQI would lead to a single plan that is “do-able” for the county
agency. Leadership also expressed an interest in and commitment to incorporating DAPIM
into their internal quality assurance reviews on case records to build improvement plans.
Both the WCCYS leadership and staff felt the facilitator had a positive impact on the OE
efforts in the county agency. The facilitator was viewed as “straightforward, engaging, and
knowledgeable of the state system, understanding of county needs and barriers and a good
translator from implementation team to sponsor team.” They felt the facilitator has been able
to balance the process and is attempting to meet the needs of both leadership and staff. They
described the facilitator as being available and creating a trusting environment with no
hidden agenda. Some staff shared this view: “Difficult messages are easier to talk about with
Jen here.” Staff specifically reported that the facilitator “does well with clarification and
causes them to think outside the box.”
Based on data collected in interviews, focus groups, and surveys of the pilot team and
WCCYS staff, as well as APHSA recommendations, CWTP leadership continued to support
internal efforts to build OE capacity. APHSA support included a day-long session with all
staff accountable for facilitating OE to review models, tools, and skills of effective
facilitation. In addition, quarterly meetings were planned throughout 2011 with APHSA and
the OE facilitators to capture lessons learned and provide technical assistance on skill
development.
In March 2011, APHSA conducted an after action review with all CWTP staff and leadership
from WCCYS to collect lessons learned on continuous improvement efforts, progress toward
stated goals, and updates on implementation of recommendations from the final APHSA
report.
During the after action review, staff reported that it had been difficult to separate the impact
of their OE efforts from a major restructuring of the CWTP that took place at the same time.
Some staff felt the DAPIM model should have been used for the restructuring. Staff also
reported the following:
• The DAPIM model helps provide a common language, and the structure of DAPIM
makes it “do-able” to learn.
• The tools were viewed as helpful to have when engaged in discussion and planning
efforts; specifically, some staff found the communication plan, project management,
and data analysis tools useful.
• Developing a desired future state keeps staff focused on the direction of the CWTP.
• Developing clear work plans has made the CWTP more effective in monitoring and
communication. Monitoring activities have shown staff “we have gotten things done.”
Many felt accountability was improved, and there was a renewing of commitments by
staff.
• A common language now existed within the CWTP for approaching and
communicating with counties about OE.
• Engaging in DAPIM sessions helped with role clarification and “bringing us together
as a team.” The sessions helped to “connect the organization, brought us out of silos,
and brought administrative support staff into the organization.”
• Engaging in DAPIM helped to balance a focus on both the product and process.
• Overall, there was a sense among many staff that the process works. They reported the
model “gives you a way to think critically about what a county is saying.” Staff seemed
excited about using DAPIM in counties and when completing organizational needs
assessments with them.
Staff felt that having the OE department engage second would have allowed all regional
teams to benefit from the experience of the Western Regional Team and would have
developed additional internal capacity to facilitate OE more quickly. Staff also felt this
would have prevented regional teams from moving at different speeds in learning about OE
models and tools, which had a negative effect on the momentum and progress of some
teams.
Some staff reported the effort was focused on tasks and lacked attention to relationships
(trust in the internal facilitators and in the overall process). Internal facilitators reported
having a new understanding of the “struggles faced by other teams and departments.”
Some staff were concerned the DAPIM focus might take away some of the critical thinking
skills of staff, stating “we are dependent on the OE handbook.” And some staff were
concerned the CWTP had taken on the DAPIM model and excluded other models. One
observation: “It is causing us to recreate things that already exist.”
Despite a written plan to engage the entire CWTP in OE efforts, there was confusion by some
staff about the goals and benchmarks for the effort. Some staff did not remember reviewing a
written plan.
Moving forward, the staff would like to pay close attention to the organizational readiness
model, reporting “it would have been helpful prior to developing the work plan.”
In the after action review, WCCYS leaders observed that the OE efforts have brought out
the strengths of staff and their dedication to delivering quality services to children, youth,
and families. Leaders reported that the process helped them make decisions about the roles
managers and staff could play in the organization, supported decision making generally, and
helped with getting buy-in from staff on change efforts. They also found monitoring of
quick wins to be successful, but are struggling with how to monitor the impact of mid- and
long-term action plans. Leaders did report that meeting more frequently with staff has
helped with monitoring. Leaders have observed decreased energy in staff not engaged in the
OE effort and plan to broaden involvement.
Recommendations WCCYS leaders have for the CWTP in implementing OE in other
counties include (a) providing additional clarity upfront on goals and objectives and
(b) giving more direction on selection of the sponsor team and the internal continuous
improvement team. They suggested a visual road map to show agency leaders the potential
impact of “starting big” versus focusing on small items. And lastly, they suggested sharing
examples of successes and challenges from other counties to assist them in decision making.
Introduction
It is typical of APHSA to use this inside-out approach to OE. Once an organization begins
to improve one area of performance, other areas of performance are affected and improved
as well, leading toward positive momentum for continuous improvement. Once continuous
improvement efforts take hold, the work is no longer an initiative or an externally facilitated
effort. It becomes simply the way business gets done, the way decisions get made, and the
way problems get solved in the organization, leading to improved performance throughout.
To reach this goal in Texas, plans were made not just to lead staff members through OE
processes and procedures but also to build capacity for sustainable OE leadership.
This article gives a chronology of OE efforts in Texas, describes attempts to build the
capacity and sustainability of OE work in the state, and reviews some specific examples of
change plans and lessons learned along the way.
History
In 2007, Texas’ Child Protective Services (CPS) partnered with Casey Family Programs and
APHSA to launch a continuous improvement initiative for its regional directors and program
leadership.
APHSA came to the partnership with strong experience helping state and local human
services agencies continuously improve organizational effectiveness and outcomes. Casey
Family Programs brought significant financial, logistical, and project management support
and experience helping CPS innovate its front-line child welfare practice. CPS came forward
with a committed core group of managers dedicated to staying the course in building
significant changes in their system. The partnership set the stage for a multi-phased work
effort to improve retention of front-line staff, a specific area of focus linked to both Casey’s
2020 outcomes and Texas’ proposed program improvement plan.
Key individuals from the partnership formed a sponsor group to provide oversight for the
work effort. The sponsor group secured resources; ensured strategic alignment with the
vision, mission, and outcomes of CPS; and obtained buy-in from regional leadership teams.
Essential to the success of the partnership was the direct and honest communication between
the members; respect for the strengths each member brought to the table; effective use of all
strengths throughout the planning, implementation, and monitoring phases; and the constant
flexibility all members demonstrated in order to meet the needs of the regions.
Unique to this effort was the planning approach facilitated by APHSA with the sponsor group.
Core to the approach was the reflective thinking engaged in by the sponsor group during each
phase of work as it was occurring, and the ongoing adaptability of everyone involved. This
allowed for decision making in real time to plan for future phases.
Traditional planning processes look at overarching vision and strategy and then plan
initiatives and activities to engage in over a multi-year time frame. The inside-out approach
instead allowed the sponsor group to begin by clearly defining a vision for continuous
improvement work. This vision supported staff in being solution focused and in using their
own expertise to define problems, seek root causes for those problems, and develop and
implement plans that would lead to sustainable system improvements. The initial phases of
work were then planned. The vision remained at the forefront for the sponsor group as data
was generated from statewide data sets and regional teams. The sponsor group used this data
to monitor the impact of continuous improvement work on retention of front-line staff and
on use of the continuous improvement process by regional leadership teams. This ongoing
monitoring led to incremental, dynamic, and phased planning to support the success of the
regions.
From May to June 2008, a second phase of work was completed with regional directors and
their management teams who were new to their roles. At that time CPS was interested in
providing them with a leadership development session that would be foundational for their
work moving forward. The session resulted in the completion of strategic readiness
assessments and short-term plans in each region. Implementation of the plans prepared the
new regional directors and their teams for the third phase of work involving all regional
directors and their management teams within CPS.
From August to November 2008, a statewide leadership institute was conducted. This third
phase of work built on the gains from the first two phases. During the institute, regional
management teams, along with statewide disproportionality specialists and state quality
assurance specialists, gathered to learn the OE methods and choose areas of improvement
that were most relevant to them. (Disproportionality specialists are staff assigned in all 11
Texas regions to provide consultation on reducing the disproportionate placement of
children of color into foster care and on disparate outcomes related to service provision.)
The sponsor team decided to use existing quality assurance staff who were subject matter
experts in the Child and Family Services Review and investigations rather than staffing this
initiative with university-based experts. This decision was made to assure that the change
would be more sustainable and that CPS would build this capacity truly from the inside out.
The primary focus of the institute was to provide regional teams with the tools necessary to
engage in continuous improvement work that would lead to reduced turnover statewide.
At the institute, each region was given the opportunity to focus on one continuous
improvement topic area related to reasons for staff turnover. Using the Retention Fact Tip
Sheet from APHSA, a tool used to help groups determine the underlying causes of retention-
related issues, regional teams identified strengths and gaps in the system and root causes for
gaps. Specific topics identified for regional work included staff development,
communication, recruitment and hiring practices, and work environment.
In planning for the third phase of work, the sponsor group prioritized sustainability of the
continuous improvement effort once APHSA facilitators completed their role at this particular
Institute. Quality assurance specialists emerged as the best candidates to form a sustainability
team within CPS. Quality assurance specialists attended sessions with APHSA with the goal
of developing the skills necessary to perform as the statewide OE Sustainability Team. The
sessions provided clarity around this new role and a list of quick win commitments to support
their development as an OE Sustainability Team.
From February to June 2009 the final phases of the statewide leadership institute were
completed. The OE Sustainability Team completed individual skill assessments and
established their individual development plans with team leads. The team then attended a
four-day session designed to support their skill development. The first two days, presented by
CFP, addressed basic facilitation and platform skills. The last two days, facilitated by
APHSA, provided coaching on the use of OE models, tools, and templates; conducted skill
practice; and introduced the OE Sustainability Team to the content of the APHSA
Organizational Effectiveness Handbook. The session resulted in furthering the commitment of
the OE Sustainability Team to their role in continuous improvement work. Overall the four
days created a sense of both safety and accountability for performing as OE facilitators.
From March to June 2009 regional sessions were facilitated by the OE Sustainability Team
with coaching and support provided by APHSA and the team leads. Specific outcomes and
measurements were determined by each region. The sessions resulted in an increased number
of formal continuous quality improvement initiatives, including cross-departmental work
groups, chartered improvement projects, communication plans, and continuous improvement
plans.
As 2009 came to a close, OE efforts began to spread across the state, resulting in connections
from CPS to the state’s Center for Learning and Organizational Excellence (the staff
development arm of DFPS), as well as presentations to front-line practice supervisors who
were encouraged to use APHSA’s DAPIM model as a supervision technique and a frontline
casework practice model that encouraged family engagement, critical thinking, and problem
solving. (The DAPIM model involves defining what an organization wants to improve;
assessing strengths and gaps in performance and identifying root causes and remedies;
planning remedies that include quick wins and medium- and long-term improvements
addressing root causes; implementing plans for impact and sustainability; and monitoring
progress, impact, and lessons for continuous improvement.) At the beginning of 2010, a
team of 11 facilitators was in place and working throughout the state with regional
management teams.
The OE model was used by Texas CPS as the primary process for developing an
implementation plan for an enhanced family safety decision-making initiative in
coordination with the National Resource Center for Child Protective Services. It was
envisioned that the OE team would play a key role in implementing improved practice
processes and techniques.
As the work spread throughout CPS, project sponsors wanted to further develop the capacity
for OE work within their system and ensure the sustainability of the OE effort within the state.
There was also the desire to expand the OE capacity beyond CPS and into the larger DFPS
organization including adult protective services, child care licensing, and operations. To that
end, APHSA proposed to support DFPS by helping to build capacity for OE work by
increasing the number of trained OE facilitators; improving the sustainability of OE work by
providing a written guide for training future OE facilitators and developing leadership staff to
oversee the OE work across the state; and working to improve the skill level and capacity of
the original team of OE facilitators.
Over the course of 2010, APHSA, with the support of Casey Family Programs, was able to
fulfill these commitments and train 11 additional staff members from across the spectrum of
DFPS programs to become in-house OE facilitators. Further, APHSA staff worked with the
tenured OE facilitators to develop a curriculum for on-boarding new OE facilitators as the
state determines necessary. There is a plan to develop an additional class of OE facilitators
in 2011 that will bring the total number of OE facilitators in Texas to over 30. Finally,
APHSA
worked with CFP and DFPS to build and support a team of OE liaisons who will oversee the
assignment and monitoring of the OE work in the state, assure communication with the
project sponsors, collect outcome data, facilitate OE sessions as needed, and serve as
champions of the OE effort, maintaining positive momentum for using OE as a way of doing
business in Texas.
Through 2010, Texas OE facilitators led over 100 on-site continuous improvement sessions
and developed continuous improvement plans on such topics as staff retention,
communication, work environment, hiring practices, disproportionality, supervisor
development and mentoring, welcoming new staff, meeting management, supervision
workloads, leadership, decision making, and ensuring child safety.
Lessons Learned
“When you give someone a fish, you feed them for a day; when you teach someone to fish,
you feed them for a lifetime.”—ancient proverb
One of the primary values of APHSA’s OE department is to help clients become self-
sufficient by building their internal capacity and continuously improving their own
performance. As this effort unfolded, APHSA was challenged to develop the skills of
teaching others to do what APHSA OE staff had learned to do. Performing a skill and
teaching others to perform the same skill are two very different actions. Along the way,
APHSA staff came across many lessons about becoming better teachers, better performers,
and better partners.
Below are some of the lessons learned while working with Texas DFPS.
Concurrent with the work in Texas, APHSA became much more conscious of the need to assist
organizations with understanding their readiness for change prior to beginning major
improvement efforts. While an organization may feel the need to improve certain areas of
performance, not every organization is prepared to move forward immediately with change
plans. APHSA does not use readiness assessments to rule out working with an organization,
but instead as a tool to see if there are foundational areas that an organization might need to
address as an initial stage of continuous improvement work.
The readiness assessment tool helps focus on whether an organization is ready to move
forward based on a current evaluation of areas such as decision making, role clarity, use of
data, teamwork, use of strategic support functions, and organizational climate. While building
OE capacity in Texas (teaching others to fish), APHSA learned to assess individual readiness
to facilitate and lead others through continuous improvement processes. By developing an
understanding of the skills needed to teach others to facilitate, APHSA was able to develop a
deeper understanding of individual readiness to perform these leadership roles.
Some of the skills that were developed in staff, and that are needed to perform OE
facilitations, include:
1. Facilitate versus lead the sessions; avoid being prescriptive and overly directive.
Instead, guide clients based on a balance between their energies and need to complete
work products. Do this while developing trust and respect of participants and
maintaining focus in the group.
2. Adjust the session agenda in real time, balancing the speed the team can reasonably
achieve goals with the ultimate objectives of the project.
3. Actively engage and listen to others.
4. When teams go off on tangents, provide them with a line of sight for their discussion
to come back to the problem they are working on and the discussion at hand.
OE work is hard. To truly achieve results, to be accountable for work products, and to
improve outcomes takes significant effort and commitment to the task. If an organization or
individual is unwilling to perform to this level, it is very important for an OE facilitator to
determine why (root cause) the resistance is present. Ultimately, if resistance is constructive
(when an organization or an individual intends to continuously improve but has a different
perspective as to how or what the current priorities are), that resistance should be listened to
and included in the discussions and decisions that follow. If resistance is not constructive
(meant to maintain a status quo or comfort zone), lack of willingness will only impede OE
efforts in the future.
When building OE capacity for an organization, the success of the work lies not simply in the
teaching done in the classroom, but also in the commitment of leaders and sponsors to support
the OE work moving forward and allow OE work to permeate the organization.
To embed a new practice across an organization as large as Texas DFPS (which has 254
counties across 11 regions covering over 25 million people), commitment of funds and
resources is certainly a necessary step for success. Beyond that step, having leaders cascade
down a commitment to this new process and to respecting continuous improvement plans and
teams is the most precious resource the sponsors and leaders can provide. Sponsors and
leaders need not initially have the skills and natural proclivity to facilitate continuous
improvement sessions, but they must demonstrate openness to the process, a trust in the
models, and the desire for continuous improvement efforts to occur under their watch to
ensure that the process will take hold. Ultimately, these sponsors and leaders will need to
develop the skills to lead through facilitated problem solving. For leaders who are used to
controlling meetings and directing decisions, this will be a learn-by-doing experience.
OE is not a quick fix. It can be used (and frequently is) to develop immediate quick wins,
steps an organization can take to immediately improve an area of performance or demonstrate
those efforts to staff to build buy-in and create positive momentum for change. But generally
speaking, OE is a process to create long-term plans that will result in improved outcomes
based on addressing the root causes of problems, creating long term sustainable change.
OE sessions themselves require a commitment of time to develop not just simple plans, but
clear definitions of problems, specific and observable findings of strengths and gap areas of
performance, and deep discussions as to the root causes for these gap areas.
These discussions require building trust and a shared sense of safety in the team that only
comes from a shared commitment to improving the organization. There is no requirement
that everyone agree immediately to a plan, but everyone must agree to fully participate in the
process. This takes time and a desire to do what is best for the
organization moving forward, not a particular individual goal or department initiative.
When performance is assessed, when outcome data are collected, and when public
commitments are made for performance, follow through is more likely to occur. With OE
facilitators, maintaining a commitment to monitor change plans as well as personal
performance is the only way to continuously improve work products and the skill and
performance of individuals. As we engage in this work, we should honor our commitment to
support plan monitoring for our clients and partners, but should further honor our
commitment to monitor our own performance and to seek continuous improvement in our
own performance and outcomes.
Summary
The unfolding successes of Texas CPS will allow APHSA to test its evaluation
framework and see what outcomes can be achieved by “teaching others to fish.” APHSA has
already seen successes in Texas CPS with the first round of OE facilitators in 2009. By
monitoring Texas’ future progress, we can test evaluation hypotheses and understand how
APHSA facilitation and OE models and tools, combined with agency readiness and
commitment, contribute to improved outcomes.
The growth of OE across Texas from a targeted effort in one region to a statewide initiative
that cuts across programs and levels of the organization is an example of APHSA’s inside out
approach, and demonstrates how improving one area of effectiveness for an organization can
grow and serve to improve an organization as a whole, resulting in sustainable change and
improved outcomes for the children, youths, adults, families, and communities served.
The Minnesota Department of Human Services (DHS) has a long history of reform efforts
impacting practice and policy to support positive outcomes for children, youth, and families
served by the child welfare system. Throughout reform efforts DHS maintained the value that
a well-developed, clearly defined child welfare practice model is the basis for reform efforts.
DHS viewed the training unit of the Child Safety and Permanence Division (CSP division) as
a key support to county child welfare agencies and tribes in implementing the practice model.
In November 2008 the CSP division engaged in a system readiness assessment to define the
purpose and role of the training system in supporting reform efforts. The assessment was
facilitated by the American Public Human Services Association (APHSA) Organizational
Effectiveness (OE) staff and supported with resources from the National Resource Center on
Organizational Improvement (NRC-OI).
A key finding from the readiness assessment was that the state lacked a clearly defined
written practice model to provide guidance to support functions on how to align their products
and services to the vision, mission, and values of DHS. Based on this key finding, and prior to
moving forward with a re-design of the training system, the state engaged in the development
of a written practice model in February of 2009. The process engaged state and local leaders
and front-line staff in a facilitated process to assure the practice model would have direct
application to practice.
Following the development of the state’s child welfare practice model, training staff engaged
in OE efforts to define how the training system would support reform efforts by aligning the
key program areas of the training system to the practice model. This effort resulted in a
continuous improvement (CI) plan outlining quick wins and mid- to long-term planning
activities. It also raised training leaders’ awareness of the resources and OE supports needed
internally for training staff to achieve the goals in the CI plan and ultimately provide OE-
related technical assistance and support to counties and tribes.
This article describes the process the training staff engaged in to align to the practice model
and their efforts to build internal OE capacity. It covers lessons learned and challenges faced
by those who are embracing a major transformation effort they call the “re-design of the
training system.”
Background
The training system provides classroom training for county and tribal social workers,
supervisors, managers, directors, and resource families. Minnesota statute mandates that all
new child protection social workers attend foundation training within their first six months of
employment. Minnesota statute and rules both require that all child protection social workers
complete at least 15 hours of continuing education or in-service training relevant to
providing child protection services every year. This training provides tools to help social
workers to work more effectively and better serve families.
When attending foundation training, social workers learn the fundamental and essential skills
necessary for caseworker practice. Foundation courses include information on child abuse and
neglect, family dynamics, child development, attachment issues, and legal concerns. Upon
completing core courses, social workers are encouraged to attend specialized skills and/or
related skills training. Supervisors, directors, and managers may also attend leadership
training courses on managing and supervising child welfare employees. Specialized skills and
related skills training is available on a variety of topics. County and tribal foster, adoptive,
and kinship parents are encouraged to attend pre-service and/or specialized skills and related
skills training.
In 2008, leaders within DHS, the CSP division, and the training system wanted to build on the
strengths of the training system. The decision was to expand its scope from a primary focus
on traditional classroom training to include organizational effectiveness-related technical
assistance and support to counties and tribes.
The CSP division and training system leadership were particularly interested in ensuring that
the training system continued to operate at peak effectiveness and according to current best
practices in child welfare training while aligning with and building capacity and strategic
readiness to support the model of practice being implemented through child welfare reform.
Since 2004, APHSA has continued to develop OE models and tools and to support
organizations in building OE capacity. In 2009, the APHSA OE department compiled the
models and tools into the OE Handbook. The overarching purpose of the handbook is to serve
as a resource for organizations that are making continuous improvement a way of doing
business.
DAPIM is the primary APHSA OE model. DAPIM enables work teams to drive continuous
improvement. The approach involves defining what the organization wants to improve;
assessing strengths and gaps in performance capacity, performance actions, outputs, and
outcomes; planning for remedies; implementing plans for maximum impact and
sustainability; and monitoring progress, impact, and lessons learned to sustain follow-
through and continuous improvement.
Teams engaged in a facilitated learning-by-doing process using the DAPIM model become
familiar with OE models, tools, templates, and methods to continuously improve in priority
areas. Work products that result from the continuous improvement effort include
development, implementation, and monitoring of quick wins; mid- and long-term CI plans;
and related communication plans. Monitoring techniques are used to assess progress and
adjust continuous improvement work as needed.
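To make the flow of these work products concrete, the following Python sketch shows one
way a team might keep a record across the DAPIM phases. The structure and field names are
illustrative assumptions for this article, not part of APHSA’s OE Handbook:

from dataclasses import dataclass, field

@dataclass
class DapimRecord:
    """Illustrative container for DAPIM work products (field names are hypothetical)."""
    # Define: what the team wants to improve
    priority_area: str
    desired_future_state: str = ""
    # Assess: strengths and gaps in capacity, actions, outputs, and outcomes
    strengths: list = field(default_factory=list)
    gaps: list = field(default_factory=list)
    root_causes: list = field(default_factory=list)
    # Plan: remedies, including quick wins and mid-/long-term improvements
    quick_wins: list = field(default_factory=list)
    mid_long_term_remedies: list = field(default_factory=list)
    communication_plan: list = field(default_factory=list)
    # Implement and Monitor: track commitments and lessons learned
    commitments: list = field(default_factory=list)
    lessons_learned: list = field(default_factory=list)

# Hypothetical usage:
record = DapimRecord(priority_area="staff retention")
record.gaps.append("exit interviews are not analyzed")
record.quick_wins.append("share monthly retention data with supervisors")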
The continuous improvement planning process facilitated by APHSA engaged training staff
in a learning-by-doing approach using DAPIM. Engagement of staff throughout the continuous
improvement planning process was a priority for leaders, as they viewed it as a primary
vehicle for developing the internal skills and capacity of the training staff to apply the OE
model in ongoing work.
In designing the structure of the effort, CSP Division and training system leaders served as
sponsors. The sponsors, in conjunction with APHSA and NRC-OI, decided that the training
staff would meet as a group for four days—two 2-day sessions held a month apart. These
sessions were scheduled in May and June 2009. APHSA facilitated the sessions. DHS,
APHSA and the sponsors collaborated to develop the agenda and handouts for each session to
assure agreement as to the work to be done, the action plan for doing the work, and desired
outcomes of the sessions.
The agenda utilized a learning-by-doing approach to applying the DAPIM model to drive
continuous improvement efforts in the following key areas of work for the training system:
• Assessment
• Curriculum development
• Training delivery
• Trainer and writer development
Two remaining key areas were identified for future work by the training system: developing
OE facilitators and evaluating delivery of training and technical assistance. Sponsors and
training staff agreed alignment to the practice model in assessment, curriculum development,
training delivery, and trainer and writer development was a critical priority. This alignment
needed to be achieved before expanding services to include OE related technical assistance
and support.
Participants reflected on the practice model developed in 2009 and system readiness
assessment conducted in 2008 to produce work products in each phase of DAPIM. Work
products included the following:
Throughout the planning process, training staff recognized the need for additional funding
and/or re-distribution of existing training funds to fully transform the training system. As part
of this effort, training unit leadership submitted the CI plan to DHS leadership for approval and ongoing
support of planned training unit re-design efforts. DHS leadership agreed with the funding
needed to fully transform the training system and approved training staff phasing work
efforts by first completing no-cost or low-cost tasks while additional funds were
identified by DHS leadership. DHS also requested training system staff complete the quick
win around use of current resources and submit a plan for consideration.
Training staff organized the CI plan into a work products and remedies chart to allow them to
begin implementing and monitoring progress on planned commitments and activities. The
chart outlined work efforts to be completed from July 2009 to March 2010 and included both
approved activities and those still needing funding. Training staff planned to meet routinely to
monitor the impact of plan implementation and collect lessons learned.
An after action review was done immediately following the facilitation process that used the
DAPIM model and a learning-by-doing process to guide training staff through the continuous
improvement effort. Training staff reported that it provided them with clear direction and
focus. Participants commented on the importance of having a clearly defined desired future
state for each area of planning. They found the facilitated reflection on current system
strengths and gaps to be valuable. Specifically, participants commented that they had never
had the opportunity to critically look at their training system as a group, think about what was
going well, what could be improved, and how they might suggest it be improved. Participants
were observed to be actively engaged in identifying root causes and remedies for gaps.
Specific examples of this include: all participants contributed to the conversation, verbally
acknowledged each other’s thoughts and suggestions, and showed openness toward accepting
ideas for improving areas of work completed by the training unit, even when it was an
individual’s current area of responsibility. For remedies outside their control, participants
demonstrated commitment to thinking of solutions that could be advanced to DHS leadership.
Participants also willingly committed to performing tasks that were within their control.
Participants commented that the process created a safe and supportive environment for
sharing ideas. They reported having experienced peer learning and unit-driven solutions. The
group agreed that spending time together on their strengths and gaps was useful and reported
it was the first time such an opportunity had been provided. Reflecting as a group on strengths
and gaps allowed them to see the work of the training system from different perspectives and
reconsider how work should be completed to become more effective in supporting outcomes
for children, youth, and families. Participants also commented that going through the process
itself was a valuable learning experience.
Participants commented that the DAPIM model and learning-by-doing approach became
easier to use as they went through the planning process. Some participants recognized that the
DAPIM process would be transferable to work with counties and tribes. Links were
specifically made to the use of DAPIM assessment work with counties and tribes.
The facilitator observed that participants were interested, enthusiastic, creative, and
demonstrated positive attitudes towards the overall transformation of the training system and
the use of the DAPIM model and tools as a team. Participants verbalized a need for the
training system to expand its services to meet the needs of counties and tribes. They also
verbalized how experiencing the DAPIM in a learning-by-doing session allowed them to see
how a similar process could support counties and tribes in becoming more effective.
The facilitator found it valuable to have the practice model and system needs assessment
documents available as tools to guide reflective thinking and assure the team was clearly
aligning to the broader strategy of DHS. Some participants expressed concern about what the
practice model looked like in front-line practice with children, youth, and families and were
unsure if the practice model served its original purpose of providing guidance to the training
unit re-design.
Some staff also reported concerns that the re-design efforts focused too much on the
traditional training provided by the training system and did not address how the Social
Services Information System (SSIS—Minnesota’s SACWIS system) and resource family
training fit into the broader re-design efforts.
What became apparent to DHS leaders and sponsors during this process was the importance
of phased planning, implementation, and monitoring of work. Leaders expressed it was
helpful to have an overarching plan for the re-design but that flexibility to implement planned
activities and commitments in phases made the plan “do-able.” Leaders noted the importance
of deciding whom to involve in which phases of systems change work. They expressed how
beneficial it was to have a facilitator to plan the phased work and to provide input on
decisions about whom to involve in each phase of work.
From July 2009 to April 2010, the training staff implemented quick wins and mid-range
remedies while DHS leadership explored funding options to fully operationalize the CI plan.
In March of 2010, leadership identified the need to reallocate existing resources within the
training system since no additional funding options were available. Training staff re-engaged
in facilitated planning sessions, participating in four sessions over a five-month period. They
prioritized work to be
completed from the CI plan, placing the majority of the planned activities and commitments
on hold until reallocation of funds was completed. The training system then developed an
eight-month reallocation plan that would be implemented from May to December 2010. The
plan identified tasks the staff would need to engage in as resources were re-aligned to support
the implementation of the training system re-design. The following work products were
completed:
The unit monitored commitments during each session to ensure progress was being made in
their redesign efforts and to capture lessons learned. The unit recognized that communication,
decision making, and trust issues were preventing them from being the high performing team
they desired to be. Some team members reported not having a clear understanding of what
they were doing and why. Some expressed they did not have regularly scheduled meetings to
review planned activities and monitor progress. Some reported when they did have a meeting,
not all staff placed a priority on attendance. Many staff felt OE efforts were only discussed
when the facilitator attended meetings. Many staff felt communication around the purpose of
the meetings prior to the facilitator attending was not clear, making participation a challenge.
To improve communication, the unit agreed to meet weekly to review commitments and share
results. A commitment was made to make decisions regarding the unit during the meeting so
that everyone was aware of decisions and planned next steps. The weekly meeting did show
improvements in communication and follow through on decisions. The unit also participated
in sessions with an internal DHS facilitator to work on trust issues.
In December 2010 and January 2011, sponsors of the effort made the decision to develop four
internal staff as OE facilitators. This would ensure the availability of internal facilitators to
support staff in returning to implementation of the CI plan for aligning to the practice model.
Two of the four staff selected were responsible for SSIS and resource family training. The
decision to address concerns in these areas of work was not fully embraced in the training
unit. APHSA coached the four staff and one supervisor in preparing to become OE
facilitators.
Also, in January 2011 APHSA met with all training unit staff to review the efforts they had
engaged in since November 2008 to re-design the training unit.
For some, the January session helped them remember decisions made to place CI plan
activities and commitments on hold during the re-allocation of resources. Some staff
expressed it was unclear to them when decisions were being made and why, and they were
having difficulty understanding the OE efforts. Some staff requested going back to the notes
from early meetings linking their work products and remedies chart to the original desired
future state, gaps, and root causes. At the time the chart was developed it made
implementation and monitoring of planned activities easier; however, a lesson learned for the
unit was by moving remedies to their own chart they could not remember why they were
doing certain activities. APHSA has since developed an overview document that reconnects
remedies to the priority gap areas and root causes. The document has been provided to the
training unit leadership.
At the time this case study was prepared, internal staff were preparing to facilitate the
implementation and monitoring of mid- and long-term activities and commitments from the
CI plan related to assessment and curriculum development. The internal staff developed to
support OE facilitation in SSIS and resource families continue to have capacity issues in
performing day-to-day tasks. Because of staff vacancies and high workloads associated with
the reallocation of resources, they have not been able to engage in OE facilitation in the SSIS
and resource family areas of the re-design.
In an after action review, staff reported liking the DAPIM flywheel and learning-by-doing
process. Many staff felt it was positive to have all training unit staff engaged in the re-design
efforts and that it created a “shared sense of responsibility.” However, many training unit staff
felt they “took on too much,” reporting that this resulted in difficulty maintaining focus on
the unit’s progress and a loss of momentum for the planned work. Looking back, a lesson
learned was to take on a more manageable amount of work at one time.
Some staff found that the language used in OE was confusing and prevented them from fully
understanding the work they were doing. Some newly hired staff agreed that they found the
language to be confusing.
Many staff reported they felt it was too early to provide feedback on the impact on the
training system of OE models and tools and their planned activities and commitments. Given
the need to re-allocate resources, staff reported they were just now starting to implement the
mid- and long-term activities of the CI plan related to the areas of work completed by the
training unit.
The training unit supervisor acknowledged the impact re-allocation of resources had on the
training unit. He reported “it became the plan, the driving force, but was originally only a
small piece of the plan.” He also reported the one thing he would do differently “is to hire the
second supervisor first.” He believed that a second supervisor could have assisted in
addressing communication and decision-making gaps and that both current and newly hired
staff would have been better supported during training unit re-design.
A second supervisor was added to the training unit in March 2011. The current supervisor was
looking forward to having in place the additional supervisory support that staff need for this
type of major transformation. He reported the new supervisor could help “to gain speed—we
have been idling.”
Training system staff remain committed to the re-design of the training unit. They are
looking forward to the newly developed internal facilitators supporting them in OE efforts and
implementation of their CI plan. Sponsors of the efforts continue to encourage and support the
staff in the unit with re-design efforts. They acknowledge the staff openly for their continued
commitment to aligning to the practice model and building OE capacity to support counties
and tribes.
Sponsors also remain committed to hiring additional staff to facilitate OE technical support
sessions with counties and tribes. They remain committed to using lessons learned to face
challenges they encounter while implementing a major transformation effort—the re-design
of the Minnesota training system.
The intent of this section is to (1) provide examples of useful instruments as well as other
training assessment and evaluation methodological approaches, (2) provide a venue for the
exploration of appropriate applications (as well as potential misapplications) of various
instruments being used in human services training and development programs, and (3)
advocate for responsible application of instruments and other assessment/evaluation
methodologies by training and development professionals.
Instrumentation articles written for this section are expected to provide information on the
following areas:
The articles in this issue provide examples of how instruments can be used to promote
positive organizational and system change. The first article describes a situational judgment
assessment approach used in the development of the National Child and Youth Care
Practitioner Certification Exam.
The second article describes a shortened version of the Transfer Potential Questionnaire. It stresses
the importance of assessing individual, organizational, and training design factors that
research has identified as affecting transfer of learning. The authors suggest that the user-
friendly instrument described in the article (Application Potential of Professional Learning
Inventory—APPLĪ 33) can help human services training and development professionals
assess a worker’s potential to use new learning on the job. Suggestions to improve the
potential for effective transfer of learning also are provided.
Conceptual Background
The certification exam was developed to address broad cross-field (e.g., afterschool,
residential treatment, juvenile justice) competencies that were identified from a meta-analysis
of 87 sets of competencies used in North America (Eckles et al., 2009; Mattingly, Stuart, &
VanderVen, 2002). The competencies are organized into five competency domains
(professionalism, applied human development, relationship and communication, cultural and
human diversity, and developmental practice methods).
A situational judgment exam was developed that requires practice judgments from the
examinee based on case scenarios. The scenarios were developed from case studies of actual
incidents elicited from the field from a variety of practice settings. A situational judgment
approach (SJA) to assessment emphasizes the use of realistic scenarios, typically asking test-
takers to identify the best alternative among the choices offered. The most correct answer for
each item is determined by a panel of subject matter experts. The SJA was used because
research has determined that situational judgment tests tend to have a high level of face and
content validity, show less adverse impact by gender and ethnicity, can measure a variety of
constructs including the interpersonal competencies that are crucial in child and youth work
settings, are relatively easy to administer in bulk (paper and pencil or online), and typically
have acceptable validity correlations with job performance (Chan & Schmitt, 2002; Clevenger,
Pereira, Wiechmann, Schmitt, & Harvey, 2001; McDaniel, Morgeson, Finnegan, Campion, &
Braverman, 2001).
Finally, situational judgment was felt to be a means of addressing arguments of those who
resist exams as being poor indicators of competence. A situational judgment test can present
complex situations and multiple questions to assess the candidates' ability to deal with
complexity and make independent decisions based on theory as well as context, thus
exhibiting professional judgment necessary to be effective in child and youth work settings
(Curry, Schneider-Munoz, Eckles, & Stuart, in press).
Description of Instrument
The exam consists of 75 situational judgment multiple-choice items pertaining to 17 case
studies that were elicited from a variety of actual practice settings and situations. Each item
was developed in response to a specific competency.
The following is an example of a case and item that requires practice judgments pertaining to
the competencies from the Standards of Practice/Competencies for Professional Child and
Youth Work.
Competency IB5a. Access and apply relevant local, state/provincial and federal laws,
licensing regulations, and public policy (e.g., staffing ratios, confidentiality, driving
laws, child abuse/neglect reporting, children, youth and family rights).
Competency IB4c. Apply specific principles and standards from the relevant Code of
Ethics to specific problems.
You are a practitioner working in an inner-city emergency shelter that primarily serves
homeless youth. The shelter serves youth who are 14 to 21 years old. Legally, in this
state, runaways under the age of 16 must be reported to authorities. One evening a
young-looking female youth comes in and makes inquiries as to the services available
in the shelter. She tells you she is 18, but you strongly suspect she is much younger,
possibly 13 or 14. As you interview her, she reveals that she ran away from home
about a year ago and has been working as a prostitute for the past 6 months. She
refuses to tell you her real name or where she is from. When you ask her what she
needs from the shelter, she tells you that she doesn’t know but she sure could use a
place to stay overnight. You end up in the kitchen talking with her as she eats the soup
you heated for her.
As a practitioner, you:
a) Have a legal obligation to talk her into staying at the shelter until a longer-
term program can be worked out or she can be reconnected with her family.
You have no obligation to contact the authorities.
b) Have a legal obligation to make the shelter services available to her and
check to be sure she is aware of the risks involved in her lifestyle.
c) Have a professional obligation to contact the appropriate authorities if she
leaves the shelter.
d) Have no legal or ethical obligation beyond making services available to her
that she has specifically asked for.
There may be no ideal answer provided, but examinees are expected to choose the best
answer from the alternatives provided.
The exam is used as one of the components of a national certification program for child and
youth workers from a variety of settings (community-based, out-of-home care), who work
with various populations (typical and atypical, such as youth experiencing child maltreatment
or mental health concerns), and with youth of various ages (early childhood through
adolescence). Certification applicants must take and pass the exam prior to submitting a
completed application that includes an extensive supervisory assessment and electronic
portfolio (see http://www.cyccertificationboard.org).
Although not the primary function of the exam, it is possible that the exam can also be used to
assess the effectiveness of comprehensive education and/or child and youth work training
programs. Several organizations (e.g., Casa Pacifica) are currently exploring use of the exam
for this purpose. In this way, the certification exam in conjunction with comprehensive
training and development initiatives can become more fully integrated into a systematic
organizational development plan.
The exam may also be used for research purposes (e.g., examining the relationship between
professional child and youth care judgment and various constructs—personality, empathy,
relationship building, intervention style, retention, etc.).
Administration of the exam occurs both in written (paper and pencil) and electronic (web-
based) formats. The total time permitted to complete the exam (including the instruction
overview and obtaining identifying information for examinees) is 150 minutes. This is a
proctored exam, administered at approved proctored sites including various child and youth
work conference venues (e.g., Conference of the American Association for Children’s
Residential Centers, International Child and Youth Care Conference). Agencies may inquire
about becoming an approved proctored site. See contact information at the end of this article.
Sample Norms
Norms are established from a sample of 775 child and youth workers from 29 sites in six
states (Maryland, Pennsylvania, Ohio, Oklahoma, Texas, and Wisconsin) and two Canadian
provinces (Ontario and British Columbia). The sample was very diverse, representing various
segments of the child and youth care worker population. However, the most frequent
characteristics of the sample were that the individuals were female (61%), African American
(45%), spoke English as a first language (97%), practiced in a residential treatment setting
(46%), worked as a direct care worker (49%), considered themselves professional child and
youth care workers (95%), and held a baccalaureate degree (36%). The average examinee’s
age was 37 (ranging from 17 to 76), and the average number of years of experience as a child
and youth care worker was 10.
Exam scores ranged from a low of 16% of the total 75 items correct to a high of 96%, with a
mean total raw score of 47.66 and standard deviation of 11.646. The standard error of
measurement (SEM) was 3.68, or 4.9% of the total questions.
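The reported SEM is consistent with the standard classical test theory formula relating it to
score variability and reliability (a worked check, assuming the conventional computation; the
reliability estimate of .8999 is reported in the Reliability section below):

SEM = SD \sqrt{1 - r_{xx}} = 11.646 \times \sqrt{1 - .8999} \approx 3.68,

and 3.68 / 75 \approx 4.9\% of the total items.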
Sample Characteristics (N = 775)
___________________________________________________________________________
                                               Frequency     Percentage
Sex
  Male                                            301            39.0
  Female                                          470            61.0
Race
  African American                                337            44.9
  American Indian or First Nations                  5              .7
  Asian                                             7              .9
  Caucasian                                       320            42.7
  Hispanic                                         56             7.5
  Multi-ethnic (more than one race)                23             3.1
  Other                                             2              .3
First Language (English)                          749            97.0
Country
  U.S.A.                                          735            95.3
  Canada                                           36             4.7
Practice Setting
  Prevention/Intervention Programs                122            15.8
  Street Outreach                                  35             4.5
  Developmental Disabilities                       22             2.9
  Early Intervention                               45             5.8
  In-home Detention Programs                        6              .8
  Physical Disabilities                            13             1.7
  Recreation                                       38             4.9
  In-home Family Care & Treatment Services         45             5.8
  Organizations (YMCA, Scouts, etc.)               38             4.9
  Clinic-based Day Treatment Services              26             3.4
  Practice Settings (Other)                        57             7.4
Type of Position
  Direct Care Worker                              370            48.7
  Educator                                         38             5.0
  Supervisor                                      102            13.4
  Administrator                                    62             8.2
  Counselor                                        84            11.1
  Therapist                                         6              .8
  Foster Parent                                     1              .1
  Other                                            93            12.2
Professional CYC
  Yes                                             729            95.0
  No                                               33             4.3
Education
  None                                             99            13.6
  Associate                                        97            13.4
  Baccalaureate                                   263            36.2
  Masters                                          87            12.0
  Doctorate                                         2              .3
  No degree but coursework                        177            24.4
___________________________________________________________________________
Note. N = 775. Settings are not mutually exclusive; respondents may have selected more than
one setting. The relatively large number of participants indicating “other” for “type of
position” is in part due to blended/hybrid positions (e.g., lead worker with supervisory
responsibilities, social worker/direct care worker).
Reliability
Internal consistency of the exam is considered excellent (Cronbach’s alpha = .8999). The
standard error of measurement is 3.68 (4.9% of the total items).
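For reference, Cronbach’s alpha is the standard internal-consistency index computed from
item and total-score variances (the general definition, not a computation specific to this data
set):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right),

where k is the number of items (75 here), \sigma^2_{Y_i} is the variance of item i, and
\sigma^2_X is the variance of total scores.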
Validity
Content Validity
There is a high level of content validity as a result of using case scenarios from the varied
practice fields and developing each item in response to a specific competency area that was
judged (by expert panels) to be best assessed by the exam (compared to the other measures of
supervisor assessment and portfolio).
Face Validity
A survey instrument was constructed to receive feedback from the examinees that included
four items assessing face validity (Cronbach’s alpha for the four items = .77). Almost all
(97%) of the examinees completed the face validity and feedback questionnaire. Findings
indicate that the vast majority of respondents (90%) perceived that the items in the exam
accurately assess important aspects of child and youth care work and the case examples
provide realistic samples of child and youth care work.
These findings strongly endorse a belief that the exam seems to be measuring the essential
elements of child and youth care work. Somewhat fewer (80%) indicated that the content in
the exam is similar to their actual job duties, and only 59% stated that they believe that their
performance on the exam is an accurate indicator of their actual performance on the job (34%
indicated that they neither agreed nor disagreed). Apparently, the examinees viewed the exam
as an excellent indicator of child and youth care practice; however, they appeared to have less
confidence that how they performed on the exam is indicative of their job performance. Since
the exam covered practice areas from a variety of settings and ages, some of the participants
may have perceived their job duties in a more limited manner (e.g., confined to working with
a particular age group or setting).
Criterion-Related Validity
The total exam scores were correlated with the examinees’ supervisors’ assessments of worker
competence on the job. The supervisory assessment was a composite variable comprised of
six items. One item pertained to each of the five major competency domains and one item
referred to the worker’s overall competence. The item anchor descriptors ranged from
“consistently demonstrates competence” to “does not demonstrate competence.” The
composite competence score (the sum of the six items) was used as a concurrent criterion
measure of job performance (Cronbach’s alpha of .94 for the six items).
A validity coefficient (r = .26) was obtained. The strength of the correlation between the SJA
exam score and supervisor ratings is comparable to findings in other studies and is considered
to underestimate the true strength of the relationship between the exam score and on-the-job
performance. The relationship is underestimated due to range restriction (e.g., very poor
performers tend to be weeded out, thus limiting the range of performance assessed) and
because there is little variability in the supervisory assessments. The supervisory ratings from
this sample primarily ranged from 3 to 5. For example, there were no ratings lower than 3 on
the overall competence item, and only 10% gave ratings of 3 (inconsistently demonstrates
competence). According to Sackett, Borneman, and Connelly (2008), both range restriction
and criterion unreliability (typical of efforts correlating exam scores with supervisor ratings
of worker performance) lead to an underestimation of validity. Clevenger et al. (2001)
estimate that typical SJA validity correlations (in the .20 to .40 range) would be substantially
higher after such corrections and are among the best validity coefficients for personnel
selection approaches.
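To illustrate how range restriction attenuates an observed correlation, one standard correction
is Thorndike’s Case 2 formula (shown here for illustration only; the source does not report
applying a specific correction):

r_c = \frac{r \,(S/s)}{\sqrt{1 - r^2 + r^2 (S/s)^2}},

where r is the observed correlation in the restricted sample, s is the restricted standard
deviation of the variable on which selection occurred, and S is its unrestricted standard
deviation. Because supervisory ratings here ranged mostly from 3 to 5, S/s exceeds 1 and the
corrected coefficient would exceed the observed r = .26.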
Construct Validity
Factor analysis (principal components analysis) was applied to the test data to investigate
possible dimensionality of the test. Although the test items were derived from competencies
organized into five domains, the analysis did not support a five-factor domain structure. The
test appears to primarily measure one general construct (professional child and youth care
judgment). A visual inspection of the item loadings and scree plot was conducted to identify
the factors upon which most of the items loaded. The scree plot (Figure 1) indicates the
presence of one primary trait or construct. Therefore, subscale scores are not determined; only
the overall total score (percentage) is reported to examinees.

[Figure 1. Scree plot of eigenvalues by component number, showing a single dominant
component.]
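For readers who want to run a similar dimensionality check on their own item-level data, the
Python sketch below computes the eigenvalues that a scree plot displays. It is a minimal
illustration, not the analysis code used for this exam; the simulated data and variable names
are hypothetical.

import numpy as np

def scree_eigenvalues(item_scores):
    """Eigenvalues of the inter-item correlation matrix, sorted largest first.

    item_scores: array of shape (n_examinees, n_items), e.g., 0/1 correctness.
    A first eigenvalue that dwarfs the rest suggests a single general factor,
    consistent with reporting only one total score.
    """
    corr = np.corrcoef(item_scores, rowvar=False)  # treat items as variables
    return np.linalg.eigvalsh(corr)[::-1]          # eigvalsh returns ascending order

# Hypothetical usage with simulated data (775 examinees, 75 items):
rng = np.random.default_rng(0)
simulated = (rng.random((775, 75)) < 0.6).astype(float)
print(scree_eigenvalues(simulated)[:5])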
Cautions
The nature of child and youth care work is complex and difficult to assess through traditional
measures. Many child and youth care leaders consider the field to be more art than science.
There is no “silver bullet” assessment strategy. The exam is considered to be one component
of a more comprehensive assessment approach including the use of portfolio analysis and
supervisory assessment. Initial research indicates that the multi-measure assessment approach
is a better predictor of competence on the job than use of a single indicator (Curry et al., in
press). Other potential uses of the exam (e.g., evaluation of training) should be considered
exploratory.
The case study approach to the exam may suggest further use of case scenarios in the
education and training of child and youth work personnel. Feedback from some of the
examinees indicates that the exam helped them to think in greater depth about these
practice situations.
However, use of the exam results to provide specific direction for training within the five
competency domains may not be indicated. Initial research findings indicate that the exam
appears to assess an overall general factor of professional judgment in child and youth care
work. Only the overall score (percentage) is reported to examinees. Sub-scores by domains
are not reported.
Finally, exam users (e.g., examinees, evaluators, researchers, administrators) should recognize
that a cut score of 65% does not equate to a “D” grade as many of us have been conditioned to
interpret throughout our schooling. The cut score was determined by the best estimates of a
panel of experts (along with additional statistical cut score analysis) to determine the
minimum level that should be obtained by a professional level child and youth work
practitioner.
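In raw-score terms (an arithmetic illustration; the exact rounding rule applied by the board is
an assumption here):

0.65 \times 75 = 48.75,

so a candidate would need roughly 49 of the 75 items correct to meet the 65% cut score.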
How to Access
Contact the national office of the Child and Youth Care Certification Board at
http://www.cyccertificationboard.org or (979) 764-7306.
References
Chan, D., & Schmitt, N. (2002). Situational judgment and job performance. Human
Performance, 15, 233–254.
Clevenger, J., Pereira, G. M., Wiechmann, D., Schmitt, N., & Harvey, V. S. (2001).
Incremental validity of situational judgment tests. Journal of Applied Psychology, 86,
410–417.
Curry, D., Eckles, F., Stuart, C., & Qaqish, B. (2010). National child and youth care
practitioner certification: Promoting competent care for children and youth. Child
Welfare, 89, 57-77.
Curry, D., Qaqish, B., Carpenter-Williams, J., Eckles, F., Mattingly, M., Stuart, C., &
Thomas, D. (2009). A national certification exam for child and youth care workers:
Preliminary results of a validation study. Journal of Child and Youth Care Work, 22,
152-170.
McDaniel, M. A., Morgeson, F. P., Finnegan, E. B., Campion, M. A., & Braverman, E. P.
(2001). Use of situational judgment tests to predict job performance: A clarification of
the literature. Journal of Applied Psychology, 86, 730–740.
Mattingly, M., Stuart, C., & VanderVen, K. (2002). North American Certification Project
(NACP) competencies for professional child and youth work practitioners. Journal of
Child and Youth Care Work, 17, 16-49.
Sackett, P. R., Borneman, M. J., & Connelly, B. S. (2008). High-stakes testing in higher
education and employment: Appraising the evidence of validity and fairness. American
Psychologist, 63, 215-227.
_____________________________
Child and Youth Care Certification Board and Research Committee members contributing to this article include:
Conceptual Background
Baldwin & Ford (1988) provided a useful framework for understanding transfer of learning.
They emphasized the importance of individual characteristics (e.g., motivation to learn and
apply), the work environment (e.g., supervisor support for learning and application), and
training design factors (e.g., identical elements—the degree of similarity of the learning
task/environment and the application task/environment). Within the field of human services
professional development and training, Curry and colleagues (1991; 1994) expanded upon
this framework and emphasized the importance of incorporating these factors with field theory
(Lewin, 1951) into a Transfer of Training and Adult Learning (TOTAL) approach that assesses
transfer driving and restraining forces before, during, and after a training event. The
identification of key persons (e.g., supervisor, worker, trainer, co-worker) within the worker’s
learning and application environments (transfer field) is another key element of this transfer
model (Figure 1).
Field theory provides a relatively simple approach to change, involving the interplay between
two opposing sets of forces. Change/transfer of learning occurs when equilibrium is disrupted.
An existing field of forces is changed by increasing transfer driving forces and/or decreasing
transfer restraining forces. The number and strength of driving and restraining forces (before,
during, and after training) will determine if transfer occurs, as well as the extent of transfer. If
the strength of the total number of transfer driving forces is greater than the restraining forces,
transfer will occur. If the total strength of the restraining forces is greater than or equal to the
driving forces, transfer will not occur.
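Stated compactly (a notational restatement of the condition above; the symbols are introduced
here for illustration):

\text{transfer occurs} \iff \sum_i d_i > \sum_j r_j,

where d_i and r_j are the strengths of the individual transfer driving and restraining forces,
respectively, summed before, during, and after training.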
Based upon this approach, the Transfer Potential Questionnaire (TPQ) was constructed in an
attempt to (1) identify factors affecting the transfer of learning of child welfare workers and (2)
predict high versus low users of training on the job. The TPQ identified 11 transfer factors and
was predictive of subsequent transfer of learning in Ohio child welfare practice settings (Curry,
1997). The TPQ was subsequently found to significantly correlate with transfer of learning with
other human service professionals (e.g., employment and training, eligibility workers) in
California (Curry, Donnenwirth, & Lawler, in press; Lawler, Donnenwirth, & Curry, in press).
[Figure 1. The transfer field. Key persons in the worker’s learning and application
environments (client, trainer, co-worker, administrator, supervisor, and the worker) can each
act as a transfer driving or restraining force. Transfer will occur if the total number and
strength of driving forces is greater than that of the restraining forces.]
Description
The APPLĪ 33 is a 33-item, five-choice agree/disagree Likert scale questionnaire that was
developed from the full-length TPQ. The items sample 11 transfer of learning factors identified
by Curry (1997) (See Table 1).
Intended Use
The overall mean score on the APPLĪ 33 can be used to predict later transfer of learning of
human service workers. In essence, it can be used as a transfer of learning proxy measure,
providing a training effectiveness indicator in addition to traditional reaction measures at the end
of training.
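As a concrete illustration of using the overall mean as a transfer-potential proxy, the Python
sketch below computes the score and flags low values for follow-up. It is a minimal sketch
under stated assumptions: an unweighted mean of the 33 items rated 1 to 5, no reverse
scoring, and an illustrative flagging threshold; none of these details are specified by the
instrument here.

from statistics import mean

def appli33_overall(ratings):
    """Overall transfer-potential proxy: mean of the 33 Likert ratings (1-5)."""
    if len(ratings) != 33 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected 33 ratings on a 1-5 scale")
    return mean(ratings)

# Hypothetical end-of-training usage:
participant = [4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 5, 4, 4, 3, 4, 4, 4,
               5, 4, 3, 4, 4, 4, 5, 4, 4, 3, 4, 4, 4, 5, 4, 4]
score = appli33_overall(participant)
# The 3.5 threshold is illustrative only; sample means reported below are about 4.0.
if score < 3.5:
    print("Low transfer potential:", round(score, 2))
else:
    print("Transfer potential:", round(score, 2))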
Exploring items associated with individual, organizational, and training design factors can
provide suggestions for how to increase the potential of more effective transfer of learning. For
example, regional training administrators can explore differences in overall transfer potential or,
more specifically, supervisor and administrator support for transfer of learning by agency
(suggestive of different transfer environments). The degree of perceived applicability of training
may reaffirm or suggest necessary changes to the training design. Use of the APPLĪ 33 can
suggest ways of increasing and decreasing transfer driving and restraining forces.
Sample Norms
Frequency distributions for the Ohio and California samples are provided in Tables 2 and 3.
Total mean APPLĪ 33 scores for the Ohio sample range from 1.97 to 5.0 with a mean and
median score of 4.0 and standard deviation of .5. The distributions for the California sample
range from 1.88 to 5.0 with a mean score of 4.10, median score of 4.24, and standard deviation
of .5.
Reliability
Internal reliability is very strong with Cronbach’s alpha of .95 for both the Ohio and California
samples.
Validity
Correlation of the APPLĪ 33 with the full scale TPQ is .97 (Ohio) and .98 (California),
suggesting that full and reduced scales assess the same construct of transfer potential.
Correlation of the APPLĪ 33 with the transfer criterion is .60 (Ohio) and .57 (California). In
addition, the TPQ was found to be predictive of long-term retention of child protective services
professionals (Curry, McCarragher, & Dellmann-Jenkins, 2005). The mean score of the items
composing the APPLĪ 33 is likewise associated with long-term retention of child protective
services professionals, but this association has yet to be examined with other human service
population samples.
Cautions
Administrators and supervisors may be reluctant for professional development and training
practitioners to be aware of the participants’ assessment of supervisor and administrative
support. Care must be taken to describe how the results will be used as well as communicate the
importance of assessing individual, organizational, and training design factors that affect transfer
of learning. Similarly, the participants may be hesitant to rate their supervisor and administrator
negatively (particularly those who are still in their probationary period). They will need to be
assured of how the results will be used and who will have access to the results. See standards P.
2 and PR. 8 in the NSDTA Code of Ethics for Training and Development Professionals in
Human Services (Curry, Brittain, Wentz, & McCarragher, 2004).
Further research is necessary to determine the use of possible subscales. Research with the
APPLĪ 33 should also be conducted with additional human service populations as well as
additional transfer criterion measures.
References
Baldwin, T.T., & Ford, K.J. (1988). Transfer of training: A review and directions for future
research. Personnel Psychology, 41, 63-105.
Broad, M. L., & Newstrom, J. W. (1992). Transfer of training: Action-packed strategies to
ensure high payoff from training investments. Reading, MA: Addison-Wesley.
Curry, D.H. (1997). Factors affecting the perceived transfer of learning of child protection
social workers. Unpublished doctoral dissertation, Kent State University, Kent, Ohio.
Curry, D., Brittain, C., Wentz, R. & McCarragher, T. (2004). The NSDTA Code of Ethics for
Training and Development Professionals in Human Services: Case Scenarios and Training
Implications. Washington, D.C.: American Public Human Services Association and the
National Staff Development and Training Association.
Curry, D. & Caplan, P. (1996). The transfer field: A training exercise. The Child and Youth Care
Leader, 7, 28-30.
Curry, D.H., Caplan, P., & Knuppel, J. (1991). Transfer of Training and Adult Learning. Paper
presented at the Excellence in Training International Conference, Cornell University.
Curry, D.H., Caplan, P., & Knuppel, J. (1994). Transfer of training and adult learning (TOTAL).
Journal of Continuing Social Work Education, 6, 8-14.
Curry, D., Donnenwirth, J., & Lawler, M. J. (in press). Scale reduction: Developing user-friendly
human service training and development assessment instruments. 2010 Proceedings of the
National Human Services Training Evaluation Symposium. University of California Berkeley.
Curry, D., McCarragher, T., & Dellmann-Jenkins, M. (2005). Training, transfer and turnover:
The relationship among transfer of learning factors and staff retention in child welfare.
Children and Youth Services Review, 27, 931-948.
Lawler, M.J., Donnenwirth, J., & Curry, D. (in press). Evaluating transfer of learning in public
welfare training and development: Factors affecting the transfer of learning of California
public welfare workers. Project synopsis in the 2010 Proceedings of the National Human
Services Training Evaluation Symposium. University of California Berkeley. Retrieved
from http://calswec.berkeley.edu/CalSWEC/03_2010_NHSTES_All_Synopses_
FINAL.pdf.
Lewin, K. (1951). Field theory in social science. New York: Harper and Row.
Stanton, J.M., Sinar, E.F., Balzer, W.K., & Smith, P.C. (2002). Issues and strategies for reducing
the length of self-report scales. Personnel Psychology, 55, 167-194.
Distribution of total mean APPLĪ 33 scores (California sample)
Score    Frequency    Percent    Valid Percent    Cumulative Percent
___________________________________________________________________________
1.88 1 .2 .2 .2
2.64 1 .2 .2 .5
2.73 1 .2 .2 .7
2.76 1 .2 .2 .9
2.79 1 .2 .2 1.1
2.82 2 .4 .5 1.6
2.88 1 .2 .2 1.8
2.91 1 .2 .2 2.1
2.94 1 .2 .2 2.3
2.97 1 .2 .2 2.5
3.02 1 .2 .2 2.7
3.09 1 .2 .2 3.0
3.12 2 .4 .5 3.4
3.15 1 .2 .2 3.7
3.21 2 .4 .5 4.1
3.24 1 .2 .2 4.3
3.27 1 .2 .2 4.6
3.30 1 .2 .2 4.8
3.33 3 .7 .7 5.5
3.36 2 .4 .5 5.9
3.39 4 .9 .9 6.8
3.42 1 .2 .2 7.1
3.45 1 .2 .2 7.3
3.48 2 .4 .5 7.8
3.52 2 .4 .5 8.2
3.55 4 .9 .9 9.1
3.58 8 1.7 1.8 11.0
3.61 9 2.0 2.1 13.0
3.64 7 1.5 1.6 14.6
3.67 2 .4 .5 15.1
3.68 1 .2 .2 15.3
3.70 5 1.1 1.1 16.4
3.71 1 .2 .2 16.7
3.73 2 .4 .5 17.1
3.76 10 2.2 2.3 19.4
3.79 5 1.1 1.1 20.5
3.82 6 1.3 1.4 21.9
3.83 1 .2 .2 22.1
3.85 9 2.0 2.1 24.2
3.88 13 2.8 3.0 27.2
3.91 13 2.8 3.0 30.1
3.94 6 1.3 1.4 31.5
3.97 8 1.7 1.8 33.3
(table continues)
Copyright © 2010 by Dale Curry, Michael Lawler and Jann Donnenwirth. May be used by obtaining permission
from the authors. Primary contact: Dale Curry – dcurry@kent.edu.