
Guide to Project Evaluation: A Participatory Approach
Chapter 3: A Framework for Project Evaluation
Project evaluation is challenging work because of the great diversity in the types of projects
funded. To be effective, an evaluation framework must respect and respond to this diversity. It
must also provide a consistent and common process that applies across projects, ensures
accountability and produces evidence-based results that promote learning about what contributes
to better health practices for Canadians.
The evaluation framework presented in this guide meets this challenge.
It is composed of two parts:

five key evaluation questions

five evaluation process steps

The five evaluation questions form the core of the framework and can be applied to all types of
project activities. The five evaluation process steps outline a systematic approach to the tasks that
projects need to complete to answer the evaluation questions. Groups work through the steps to
plan and implement the evaluation.
The following two sections discuss the evaluation questions and the process steps. An overview
of the evaluation framework is presented in Section 3.4.

3.1 The five key evaluation questions


The process of developing the answers to the evaluation questions will vary, as each project
varies, but the five fundamental questions remain the same.
What? [1]

1. Did we do what we said we would do?

Why?

2. What did we learn about what worked and what didn't work?

So what?

3. What difference did it make that we did this work?

Now what?

4. What could we do differently?

Then what?

5. How do we plan to use evaluation findings for continuous learning?

[1] This approach is based on work done by Ron Labonte and Joan Feather of the Prairie Region Health Promotion Research Centre.
1. Did we do what we said we would do? "What?" (Description of activities)
The responses to this question describe the work done in the project and the relevance of this
work in meeting the project goals and objectives. The project success indicators provide the
criteria against which success is measured. They assist the project sponsor to collect the
information needed to answer this and subsequent evaluation questions. (Chapter 5 discusses
how to develop project success indicators.)
Some of the more specific questions that may need to be answered to describe the project work include the following:

What activities were undertaken and how did they link to meeting the project goals and
objectives? Examples:
o Describe the resources that were developed to increase awareness.
o Describe the training workshops that were conducted for skill development.
o Describe the new partnerships that were formed to work on accessibility issues.

What were the major achievements of the project and what resources did they require?

If the objectives changed during the course of the project, how and why did they change?

2. What did we learn about what worked and what didn't work? "Why?" (Reasons for
success)
Participatory evaluation focuses on success, learning and action. Finding out what worked well
in a project and what didn't puts this principle into practice. Here are some of the questions
that could be included in this discussion:

What strategies worked well for involving the target population in the project? Why?

What strategies didn't work well for involving the target population in the project? Why?

What strategies worked best for broadening the base of community support for the
project? Why?

What strategies didn't work well for broadening the base of community support for the
project? Why?

Which activities and strategies did we change? Why?

What was learned about the relative cost-effectiveness and efficiency of various project
strategies and activities?

How realistic and relevant were the project goals and objectives?

In what ways did the project planning process work most effectively?

What did we learn about working together as a group?

3. What difference did it make that we did this work? "So what?" (Impact)
The answers to this question measure a project's success in changing knowledge, attitudes, skills
and behaviour. The project success indicators represent the group's assumptions about what
changes should be expected from the project work and provide the criteria against which to
measure change both during and at the end of the project. (Chapter 5 discusses how to develop
success indicators.)
There are two main ways project sponsors can assess impact: by using summarized data related
to the success indicators and by asking specific impact questions of people who were involved in
the project and who were the target of the project's work.
The following types of questions may be helpful in discussions about this part of the project evaluation:

What changed as a result of the project?


o knowledge
o attitudes
o skills
o behaviour

What changed as a result of the project for

o members of the target population?


o community groups?
o service providers?
o caregivers?
o project sponsors and staff?

Were there any unexpected changes resulting from the project work? Describe them.

In what ways did this project contribute to increased public participation?

In what ways did this project help to strengthen community groups?

To what extent did the project reduce barriers to health?

What evidence is there to attribute any of the above changes to the project? What other
factors outside the project might have contributed to the changes?

Were other initiatives started, alternative services proposed or new funding resources
acquired as a result of this project?

In what ways did this project contribute to better health practices?

What new partnerships developed from this project? What was the nature of the
partnerships and what was their contribution?

Is the model or approach continuing beyond the initial funding?

To what extent is this model or approach transferable to other communities?

4. What could we do differently? "Now what?" (Future of this and other projects)
Evaluation is for learning and often the best learning comes from examining the challenges that
projects present. Here are some of the questions that could be included in this discussion:

What more effective methods for achieving the objectives emerged from the work?

What additional knowledge development is required to do the work more effectively?

What additional support from the funders and community sponsoring agencies would
have been useful to the project in meeting its goals and objectives?

Are there more cost-effective ways to achieve the project's objectives?

Who else could have been involved in the work?

What could we do to expand the network of people involved in working on this issue?

Were all the project's needs met?

Is there a better way of developing realistic project goals and objectives in the initial
planning stage?

How did management and administrative systems change through the project to become
more effective?

5. How do we plan to use evaluation findings for continuous learning? "Then what?" (Use of evaluation results)
Participatory evaluation includes ways to use the evaluation results throughout the project as
well as at the end. Some questions to consider in developing the evaluation are as follows:

How were evaluation findings used on an ongoing basis to contribute to the planning and
implementation of the project strategies and activities?

How will project findings be used for future knowledge development?

How will the final evaluation learnings be documented and distributed?

Are there alternative ways to present the evaluation results so that more people can make
use of the learnings?

How will the evaluation results be used for new project planning?

How will the evaluation results be used to influence policy and research priorities?

Seeking answers to the five key evaluation questions will guide the evaluation process
throughout a project. The learnings from answering the questions can then be used to shape
current and future work.

3.2 The five evaluation process steps


The steps to developing answers for the five key evaluation questions are briefly outlined below,
and then are further developed in the next five chapters of the guide.
1. Define the project work.

To evaluate a project there must be clear, measurable project goals and objectives that outline
what the project plans to accomplish. While this may seem self-evident, many evaluations have
gone off the track because this initial work has not been done.
Chapter 4, Defining Project Work, provides ideas on how to strengthen the development of clear
project goals and objectives.
2. Develop success indicators and their measures.
The process of defining what constitutes success for a project is another important step in
developing evaluations. Project sponsors need to define the success indicators for their projects.
The success indicators allow project sponsors to evaluate whether they accomplished what they
set out to do and what the impact of their project has been.
Chapter 5, Developing Success Indicators, discusses this process in more detail, gives some
examples of specific indicators and describes an activity that could be used to help identify
success indicators for projects.

3. Collect the evaluation data.


After the first two steps have been completed, it is necessary to decide

what information the project needs to collect

who has the information

how the information will be collected.

Chapter 6, Collecting Evaluation Data, gives a brief overview of types of evaluation instruments
and ideas on how to develop evaluation tools that are appropriate for projects. It also outlines
some of the tips and cautions for using these tools.

4. Analyse and interpret the data.


As the evaluation data is collected, it should be summarized and analysed and key learnings
should be identified. This ongoing process will help projects prepare their final evaluation
reports.
Chapter 7, Analysing and Interpreting Data, provides some ideas to help with this process.

5. Use the evaluation results.


Evaluation findings can be used throughout the project to improve the planning and
implementing of project activities. By sharing project results with others, each project adds to the
body of knowledge about health promotion.

Chapter 8, Using Evaluation Results, provides ideas on how to use evaluation findings during
and after the project.
Working through these five steps will provide project sponsors with the information and tools
they need to answer the five key evaluation questions. For small projects with limited resources,
the process will be simple and straightforward. For large projects with greater resources, the
work involved in each step will vary to reflect the complexity of project goals and objectives.
For all projects, project sponsors should:

set realistic limits on the number of project-specific evaluation questions and on the
amount of evaluation information to be collected, as determined by the evaluation
resources available to the group

remember that the quality of information collected, not the quantity, is the most important
factor in evaluation.

Remember, the most successful evaluations are clear and easy to understand.

3.3 Tools for using the evaluation framework


To help in applying the evaluation framework, several different tools have been developed for
this guide. Examples provided reflect the most common Health Canada project activity types,
which are

needs assessments

education and awareness

resource development

skills development

developing innovative models

reducing barriers to health.

One-page overview of the Framework for Project Evaluation (see Section 3.4)

Framework Worksheet for the Five Key Evaluation Questions and Examples of
Developing the Questions by Project Activity Type (see Appendix 3)
o The blank worksheet can be used by projects to develop the five evaluation
questions. The examples show how the questions can be further developed to
reflect the specific evaluation needs of projects.

Framework Worksheet for Success Indicators and Examples of Developing Indicators by Project Activity Type (see Appendix 4)
o The blank worksheet can be used by projects to develop their own project-specific
success indicators and their measures. The examples provide ideas for developing
success indicators and measures that projects may find useful.

3.4 A framework for project evaluation


An overview of the framework for project evaluation is presented below. This
overview is a useful tool that can be used for

introducing the framework

reviewing the evaluation work

preparing evaluation reports

A Framework for Project Evaluation


5 key evaluation questions

What?

1. Did we do what we said we would do?

Why?

2. What did we learn about what worked and what didn't work?

So what?

3. What difference did it make that we did this work?

Now what?

4. What could we do differently?

Then what?

5. How do we plan to use evaluation findings for continuous learning?

Steps in the project evaluation process

Project activity types:

needs assessments
education and awareness
resource development
skills development
developing innovative models
reducing barriers to health

1. Define the project work

clear, measurable project goals and objectives

2. Develop success indicators and their measures

process for identifying indicators
ideas for success indicators linked to process and impact

3. Collect the evaluation data

written questionnaire
telephone survey
reaction sheet
interview - face-to-face or phone
focus group
participant observation
project diary
program records
before-and-after questionnaires
non-traditional methods of documentation

4. Analyse and interpret the data

data analysis
identification of learnings, recommendations, actions

5. Use the results

sharing of the results on an ongoing basis
use of learnings to inform future planning

Guide to Project Evaluation: A Participatory Approach
Appendix 6: Sample Reaction Sheet for an Evaluation Workshop

1. How useful was this training for you?

not useful    so-so    useful    very useful

Comments:

2. What 3 words would you use to describe the training?

3. How did this training contribute to your understanding of evaluation?

4. What suggestions do you have to make the training more useful?

5. What comments would you like to make about the trainers?

6. What is one thing that you got from the training that you could use right away in your work?
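Returned reaction sheets like this one can be summarized quickly. As a rough sketch (not part of the original guide), the short Python script below tallies invented question 1 ratings; the response values simply mirror the four choices on the form:

    from collections import Counter

    # Invented question 1 responses gathered from returned reaction sheets.
    responses = ["useful", "very useful", "so-so", "useful",
                 "very useful", "useful", "not useful", "very useful"]

    counts = Counter(responses)
    total = len(responses)
    for rating in ["not useful", "so-so", "useful", "very useful"]:
        n = counts.get(rating, 0)
        print(f"{rating}: {n} of {total} ({100 * n / total:.0f}%)")

A tally like this feeds directly into the quantitative analysis described in Chapter 7.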

Guide to Project Evaluation: A Participatory Approach
Chapter 7: Analysing and Interpreting Data
Most evaluation projects have no problem collecting large amounts of evaluation information. What they sometimes have difficulty with is effectively analysing, summarizing and using the results.
The emphasis throughout this guide is on evaluation for learning and action. This section focuses
on practical ways that people at the national, regional and community levels can turn evaluation
information into usable, accessible summaries and reports that add to the body of knowledge
about project success and promote change in attitudes, skills and behaviour. Committing
adequate resources at all levels to do the evaluation work is essential if everyone is to benefit
from the valuable learnings that can be gained from evaluating health promotion projects.

7.1 Analysing project evaluation information


Analysing evaluation information begins with a review of all the collected data to find the
emerging themes or patterns. The five key evaluation questions provide useful categories around
which to group information and develop the themes. Look for and record the information that is
in the data about how well the project is doing, what is working, what should be done differently
and what difference it is making.

Project sponsors may want to record notes on the data on file cards or sheets of paper - one for
each question, issue or topic. This makes it possible to see the emerging patterns more easily.
Include exact quotations from the interviews and questionnaires. It is essential to stay with what
people have said and let the data guide the analysis. Too much detail is better at this stage than
not enough. It is always easier to cut down than to add information later.
Once the material has been grouped into themes, it can be analysed to see how the results
compare to the changes that were expected as identified by the success indicators. Take the time
to reflect on what the analysis reveals. What was learned to answer the "what", "why", "so what",
"now what" and "then what" evaluation questions? People who have been involved in the project
should be involved in the interpretation of the findings.
Project sponsors or the project evaluator should prepare short summaries of the key learnings
from the analysis on a regular basis - for example, every three months or after each project
activity. The importance of preparing these brief summaries, which highlight two or three key
learnings, cannot be overemphasized. The summaries provide an excellent means of letting the
key players in the project know about and begin to use the evaluation findings throughout the
project - one of the basic principles of participatory evaluation. By completing summaries of key
learnings at regular intervals, the work at the end of the project will be greatly reduced.
Summary - Analysing evaluation information

Review the collected evaluation material for emerging themes and patterns.

Use the key evaluation questions to group the material into themes.

Analyse the material by themes, comparing the results to the changes that were expected
as identified by the success indicators.

Reflect on what the analysis means. Ask other key project players for their
interpretations.

Prepare short summaries of key learnings under each theme.

Prepare summaries of key learnings on an ongoing basis.

Submit the summaries to the participants for their feedback and verification of the
findings.

Develop the final analysis.

Analysis of quantitative data


Quantitative data looks at the incidence and quantity of events. Data gathered through quantitative methods (surveys, questionnaires, administrative records) is numerical and may be analysed by calculating averages, ranges, percentages and proportions. Descriptive statistics simply account for what is happening in numerical terms. For example, when evaluating the use of a needle exchange system, an estimate may be made of the average number of people using the facility each week or the percentage of users returning needles. Bar charts, pie charts, graphs and tables can be effective ways to present the statistical analysis in a clear and concise manner.
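To make the arithmetic concrete, here is a minimal sketch in Python of the needle exchange example; all weekly figures are invented for illustration:

    # Invented weekly tallies, as might be drawn from program records
    # of a needle exchange.
    visits_per_week = [42, 51, 38, 47, 55, 44]
    needles_distributed = [310, 365, 290, 340, 400, 320]
    needles_returned = [260, 330, 250, 300, 370, 280]

    # Descriptive statistics: an average, a range and a percentage.
    average_visits = sum(visits_per_week) / len(visits_per_week)
    low, high = min(visits_per_week), max(visits_per_week)
    return_rate = 100 * sum(needles_returned) / sum(needles_distributed)

    print(f"Average visits per week: {average_visits:.1f}")
    print(f"Range of weekly visits: {low} to {high}")
    print(f"Percentage of needles returned: {return_rate:.1f}%")

The same summary figures can then be presented as a bar chart or table in the report.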
Analysis of qualitative data
Qualitative data is information that is primarily expressed in terms of themes, ideas, events,
personalities, histories, etc. Data is gathered through methods of observation, interviewing and
document analysis. These results cannot be measured exactly, but must be interpreted and
organized into themes or categories. The primary purpose of qualitative data is to provide
information to the people involved in the project. This standard of usefulness is an important one
to keep in mind when analysing qualitative data.
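As a rough illustration of this kind of organizing (a sketch only, with invented excerpts and theme labels; the interpretive work of coding is done by the evaluators, not the script), quotations that have been hand-coded by theme can be grouped and counted:

    from collections import defaultdict

    # Invented interview excerpts, each hand-coded with a theme label
    # during review of the transcripts.
    coded_excerpts = [
        ("outreach", "The posters at the clinic were how I first heard about it."),
        ("skills", "After the workshop I felt confident running a meeting."),
        ("outreach", "Word of mouth brought most of our participants."),
        ("partnerships", "Working with the school board opened new doors."),
    ]

    themes = defaultdict(list)
    for theme, quote in coded_excerpts:
        themes[theme].append(quote)

    # Report each theme with its frequency and the exact supporting
    # quotations, staying with what people actually said.
    for theme, quotes in sorted(themes.items(), key=lambda kv: -len(kv[1])):
        print(f"{theme} ({len(quotes)} excerpts)")
        for quote in quotes:
            print(f'  - "{quote}"')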
Note: Neither the quantitative nor the qualitative approach to the collection and analysis of data
is inherently superior. Each has advantages and disadvantages. For both, it is important to know
the context within which they have been used in order to understand the analysis. Whenever
possible, project evaluations should include several types of information collection tools. The
analysis and summaries of key learnings should draw on information collected from all of them.

7.2 Preparing useful evaluation reports


Once the evaluation information has been analysed, the next challenge is to present the learnings
in ways that are both informative and interesting.
The brief summaries of key learnings, described in the preceding section, are often all that is
needed to provide information on an interim basis. However, the final project report requires
more data. The next section provides some ideas that might be useful for clarifying the
expectations about the final report with project sponsors.
Evaluation report outline
Having an outline at the beginning of a project about how the final report will be developed is
extremely useful. It helps shape the thinking about what information is needed and how it will be
collected, analysed and used.
There are two questions to consider when planning evaluation reports.
1. Who is writing the report?
Small projects with very limited resources should have different expectations placed on them
than larger projects or projects with funding for an external evaluator.
2. Who is the report for?

While every evaluation report should be written in an interesting and clear style, the structure
and emphasis of the report may vary depending on who it is for. For example, is it intended
primarily for the funder or for the project participants? The former might focus more heavily on
learnings about cost-effectiveness strategies; the latter might be more interested in learnings
about how to implement a specific health promotion activity.
The following sections form the basic structure - the bare bones - of an evaluation report.
Personal stories and quotations from the project participants put a human face on the evaluation
results and can make the report much more interesting and user-friendly. Groups can adapt and
build on the following guidelines to develop evaluation reports that reflect the unique nature of
specific projects.
Example of an outline for a project evaluation report
Section 1: Executive Summary
This section is for people who are too busy to read the whole report.

One page is best - never more than three.

It comes first but is the last piece written.

It usually looks at what was evaluated and why and lists the major
conclusions and recommendations.

Section 2: Background Information - Getting started


This section provides background leading up to the evaluation:

how the project was conceived

why it was needed

the project goals and objectives

who was involved in the work

the project organizational structures.

Section 3: Description of the Evaluation - How we learned


This section describes

the evaluation approach and how it was chosen

evaluation goals and objectives

how the evaluator was selected and managed

how the data collection tools were designed and used

how well the data collection tools worked

any limitations of the methodology

how people were selected to be interviewed, or to receive questionnaires, etc.

who did the interviewing, the number of people interviewed and their
situation

how questionnaires were distributed and returned.

Section 4: Evaluation Results - What we learned


One way to organize this section is around the first four evaluation questions:
Did we do what we said we would do?

Outline goals and objectives of the project.

Record what happened as a result of the project - e.g., resources developed, training sessions completed, etc.

Describe the changes that occurred in relation to the success indicators.

What did we learn about what worked and what didn't work?

Outline key learnings from the project about making things work. Examples: producing effective resource materials, structuring productive advisory committees, conducting needs assessments in rural and isolated communities, building community ownership of health promotion projects, etc.

Identify learnings about what strategies didn't work and why.

What difference did it make that we did this work? (outcomes)

Outline results from the evaluation that show how the project made a difference to consumers, project sponsors and the wider community.

Identify any changes - of attitudes, knowledge, skills or behaviour - that occurred from the project work, e.g., how health practices have improved.

If appropriate, show how the project contributed to increased public participation and strengthened community groups.

Include personal statements and anecdotal material from project evaluations which illustrate the impact an activity has had on project participants. Example: "One thing I plan to use right away in my work which I got from the training is..."

What could we do differently?

List learnings from the projects about different ways to do the work.
Examples: improving the cost-effectiveness of projects, adapting the project
model to make it more responsive to volunteers, changing the reporting role
for outside evaluators to improve accountability, etc.

Reflect on cautions and challenges about doing the project work.

Section 5: Conclusions and Recommendations - Final thoughts on what we would like others to know

Conclude with a summary of the work done and how well the goals and
objectives were reached.

Include recommendations for further work.

Include recommendations on how the evaluation results can be used.

Section 6: Appendices

These may include copies of questionnaires or interview schedules, statistical information, program documents or other reference material important to the evaluation but not important enough to go into the text.

It is useful to include a bibliography - a list of the sources used to compile the evaluation results, other research studies and articles. A list of the people interviewed or organizations contacted may also be included.

7.3 Activity: Analysing and Interpreting Data


Analysing and interpreting data from project evaluations.

Purpose:

To give project sponsors practice in completing the analysis and interpretation of project evaluation results for inclusion in the project evaluation report.

Time:

1-2 hours

Materials:

flipchart
raw project evaluation data
Guide to Project Evaluation, Chapter 7

Activity:

Have participants review the raw project data.

Working in small groups, divide the raw data among the groups and have participants analyse the data to find themes (refer to Chapter 7 of the Guide) that relate to the third evaluation question: "What difference did it make that we did this work?"

Have participants prepare short summaries of learnings for each theme.

Have participants develop one example of quantitative analysis and one example of qualitative analysis.

Bring all participants together to share their results and to discuss their ideas on which information is most useful.

Have the total group develop a final analysis plan based on the information presented by the small groups.

Have the total group brainstorm informative and interesting ways to present the evaluation results, including how to organize the evaluation report for maximum effect.

Evaluation Planner Instructions


Step I. Use the attached Evaluation Planner Spreadsheet to organize information about your Center's programs/activities/resources. Additional information about the topics below may be found in the programevaluation.org topic: Focusing the Evaluation.
A. List all of the programs/activities that your Center provides

Briefly provide names of programs, courses and resources that your Center provides. These will be referred to as programs that your Center provides.

B. Who is a direct participant of the program?

List people (e.g., science teachers, kindergarten students, grandparents) who participate in or use the program/resource/activity directly.

C. Who is affected by the program indirectly?

List those who may benefit indirectly as a result of participants' use of the program/resource/activity.

D. With whom will part or all of the evaluation information be shared? (The
evaluation audience)
List those who may see or hear about information contained in the
evaluation (e.g., community members who read an article in the paper, State
Ed who receives a report, funders, Board of Education, your Teacher Center).
Pay particular attention to those who have asked you for evaluation
information.

E. Rate the stakes of this program (e.g., high stakes: a program with high cost, high public visibility, or extremely important outcomes)

Consider cost, importance of outcomes and other aspects of the program and rate it as high, medium or low stakes.

F. For each program/activity/resource, decide whether you are interested in an evaluation that helps you to improve program components (formative), an evaluation that looks at the success the program has had in meeting its goals (summative), or both.

Consider why you are interested in conducting an evaluation.

A formative evaluation is used to make improvements to the program. You will probably want to examine components essential to the effective running of the program and see how improvements can strengthen the program.

A summative evaluation demonstrates how well the program achieved important objectives. Success is generally determined by use of the program and how it has accomplished key objectives.

G. Is there already documented effectiveness of this program?

Think about whether or not the program already has documented effectiveness. Perhaps you put a great deal of time and energy into evaluating a certain program last year. Despite the importance of the program, you may want to focus on something else this year. Or perhaps you implemented a program with a proven track record (e.g., evidenced through more than one empirical study). You may not need to prove the efficacy of the program again; however, you may want to evaluate how it was used by your center.
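The Step I information is easiest to work with when each program is one row with a column per item. The sketch below (the file name and column names are illustrative paraphrases of items A-G, not part of the planner itself) writes one invented row to a CSV file that any spreadsheet program can open:

    import csv

    # Column names paraphrase planner items A-G; the example row is invented.
    fields = ["program", "direct_participants", "indirect_beneficiaries",
              "evaluation_audience", "stakes", "evaluation_purpose",
              "documented_effectiveness"]

    rows = [{
        "program": "Peer tutoring",
        "direct_participants": "tutor trainers; tutors; tutees",
        "indirect_beneficiaries": "classroom teachers; parents",
        "evaluation_audience": "funders; Board of Education",
        "stakes": "high",
        "evaluation_purpose": "formative and summative",
        "documented_effectiveness": "prior empirical studies exist",
    }]

    # Write the planner as a CSV file for use in a spreadsheet program.
    with open("evaluation_planner.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)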

Step II. Prioritize the top three to five programs that you want to
take a serious look at this year. This is only a tentative list.
Consider the information that you have organized on the Evaluation Planner Worksheet.
Rank the three to five programs that you want to evaluate.
Step III. Complete the Evaluation Focusing Worksheet for each program you are
considering evaluating. Describe program components and intended outcomes of
the program.
1. For each program you are considering, describe how you would know if
program components were in place and how the program would impact
participants if it were successful (program outcomes). Then jot down some
potential ways that objectives might be assessed. Consider the feasibility of the
proposed outcome measure (think of your resources and the willingness of others to
participate in the evaluation). In the example below, program components to be
evaluated are designated with (PC) and program outcomes are designated with (PO).
Example program: Peer tutoring
Participant: Tutor trainers

Impact: Recruit tutors and tutees from classroom teachers (PC)
Possible outcome measures: Archival information (records of students trained)
Who can help to collect? Trainers complete forms
Feasibility: Easy

Impact: Train tutors according to the program steps (PC)
Possible outcome measures: Checklist of training steps
Who can help to collect? Trainers complete forms
Feasibility: Easy

Impact: Monitor and coach tutors (PC)
Possible outcome measures: Coaching checklist
Who can help to collect? Trainers complete forms
Feasibility: Will need to build into program

Participant: Tutors

Impact: Follow the prescribed tutoring steps (PC)
Possible outcome measures: Coaching checklist, self-report checklist
Who can help to collect? Tutors complete forms
Feasibility: Will need to build into program; self-report easy to complete

Impact: Attend sessions regularly (PC)
Possible outcome measures: Self-report checklist
Who can help to collect? Tutors complete forms
Feasibility: Easy

Impact: Improve reading fluency (PO)
Possible outcome measures: Curriculum-Based Assessment
Who can help to collect? Trainers? TAs? Interns?
Feasibility: Difficult. Will need to identify and train resources. Perhaps only tutees will be assessed.

Participant: Tutees

Impact: Improve reading fluency (PO)
Possible outcome measures: Curriculum-Based Assessment
Who can help to collect? Trainers? TAs? Interns?
Feasibility: Difficult. Will need to identify and train resources. Perhaps only the most at-risk tutees will be assessed.

Participant: Classroom teachers

Impact: Satisfied with program (PO)
Possible outcome measures: Teacher survey
Who can help to collect? Teachers
Feasibility: Probably easy
2. As you identify components to investigate, desired outcomes and feasible ways to evaluate them, consider whether you have the resources to conduct a quality evaluation. If you do not have the resources to conduct a meaningful evaluation, you may want to consider modifying the evaluation plan to make it more feasible, identifying other resources to assist with the evaluation, or waiting to evaluate that particular program until you have enough resources.
Step IV. Develop a Calendar to rough out program activities, evaluation
activities and evaluation dissemination activities. An example is provided below:

Program Calendar
District leaders worked with the Teacher Center Director, Policy Board (comprised of teachers in
the district) and course instructors to develop a program calendar that provides specific timelines
for the program and its evaluation. Staff in the district had provided input concerning district
needs and the proposed programs during the previous year.
Sept. 10 - All district staff notified about the program and timeline at a faculty meeting. 15-minute presentation (principal).

Sept. 15, 16 - Eight staff in each building trained to conduct and score CBM reading. Full-day training (Psychologist).

Sept. 18 - All administrators, teaching and support staff in the district trained to use and understand CBA, as well as how to conduct and score CBM writing. One-hour after-school training.

Oct. 1 - First CBM assessment district-wide (trained assessors).

Oct. 5 - Peer tutor trainers trained. Half-day workshop for 2 staff in each building. Trained assessors score CBM results.

Oct. 10 - Group 1 Brain Compatible Learning Session A; 3-hour session (Psychologist) with 20 teachers.

Oct. 15 - Second CBM assessment district-wide (trained assessors).

Oct. 20 - Peer tutor training. First session in each building (two staff train three groups of 10 children). Tutors monitored/coached by trainer.

Oct. 22 - Peer tutor training. Second session in each building (two staff train three groups of 10 children).

Oct. 25 - Group 1 Brain Compatible Learning Session B; 3-hour session with 20 teachers.

Oct. 29 - Group 2 Brain Compatible Learning Session A; 3-hour session with 25 teachers. Stop Drop and Read training: 27 teachers in a 3-hour session.

Nov. 1 - Third CBM assessment district-wide (trained assessors). All programs are implemented.

Nov. 3 - Group 2 Brain Compatible Learning Session B; 3-hour session with 25 teachers.

Nov. 4 - CBM data (Oct. 1, Oct. 15 and Nov. 1) is shared. Up to 20 at-risk students are chosen for weekly monitoring based on CBM data and teacher nomination. Weekly and monthly monitoring continues.

Feb. 8 - Booster sessions #1 with course instructors (stipended after-school session).

Feb. 10 - Alternate booster sessions #1 with course instructors (stipended Saturday session).