JUDICIAL TRAINING
INPROL Consolidated Response (07-005)
With contributions from Greg Gisvold, Mira Gur-Arie, Renee Dopplick,
Meghan Stewart, Linda Bishai, Sermid Al-Sarraf, Andrea De Maio, Colette Rausch, Ab Currie,
Patrick Murphy, William Brunson, Irene-Maria Eich, Carl Baar, Livingston Armytage, Richard
Messick, Karen Widess
Submitted by: Libor Chlad, EUJUST LEX Program, Council of the European Union
The full text of the responses provided by these INPROL members can be found at
http://www.inprol.org/node/1991. INPROL invites further comment by members.
Note: All opinions stated in this consolidated response have been made in a personal capacity
and do not necessarily reflect the views of particular organizations. INPROL does not explicitly
advocate policies.
INPROL is a project of the United States Institute of Peace with facilitation support from the Center of Excellence for Stability
Police Units, the Pearson Peacekeeping Centre, and the Public International Law & Policy Group.
MEASURING THE IMPACT OF JUDICIAL TRAINING
Query:
What factors must be taken into consideration when evaluating a judicial training course,
particularly in a country emerging from conflict? Are there any performance indicators
that can assist in measuring the impact of judicial training? What are the unique
challenges involved in evaluating judicial training?
Response Summary:
Outputs and Impacts: At each stage of the evaluation process it is important to
keep in mind the difference between measuring outputs (e.g., numbers of
participants, days of sessions, etc.) and impacts (such as whether judges have
actually become more knowledgeable, improved their rulings, or are acting with
greater integrity), which reflect the ultimate goals of the training. The former are
easier to measure and speak more to the effectiveness of a project’s administration
(and are thus also important for tracking the provider’s fiscal responsibility). The latter
are far more subjective and must be considered in the context of the myriad factors
that shape judicial performance.
Best Practices and International Standards: There are many resources available
on design and evaluation of rule of law training generally, some of which are listed
below. Moreover, evaluators of judicial training must be familiar with the relevant
international standards that impact the work of the judiciary. These standards
provide important guidance for assessing judicial performance. Among the most
important of these international standards are the Basic Principles on the
Independence of the Judiciary, The Bangalore Principles of Judicial Conduct, the UN
Convention against Transnational Organized Crime, and the UN Convention Against
Corruption. Other relevant standards are found below in the compilation of
resources.
During periods of conflict, the judicial sector often suffers more than other forms of
government administration. Judges are dispersed; those who remain receive little
training from the onset of the conflict; and few gain knowledge of current
international standards. In addition, legal (and primary) education is often weak or non-
existent; recruiting new judges who are competent to carry out the complex
task of issuing consistent, reasoned judgments is therefore a challenge.
Legal System: Any pre-training assessment must first consider the particular
attributes of the legal system in question. For example, a judge working in a civil law
country requires training in inquisitorial, rather than adversarial, trial procedure. In
societies which maintain both a formal justice system and an informal means of
resolving disputes, judicial training might need to cover customary systems, at least
in a rudimentary way, in order to give judges an understanding of how the systems
interact. The evaluation should therefore consider how well the training addressed
these needs.
Flexibility in the Face of Reform: As the legal system develops, so too will the
range and complexity of legal matters which judges will encounter in their work.
Unlike other areas of rule of law training, such as corrections or police training whose
subject matter is relatively static and discrete, a judicial training curriculum must be
regularly updated to reflect legal developments at the local, state or provincial,
national, regional and international levels in a wide range of legal subjects.
Trainers: A training design must also address the supply side by selecting
trainers who are mutually acceptable to both donors/programmers and the judges to
be trained. Failure to do so can undermine the effectiveness of the training. Several
INPROL members who responded to this query pointed out that judges are,
understandably, often unwilling to participate in training unless the trainers have a
high level of expertise and can command respect in judicial circles. This may
preclude administrative experts who have valuable knowledge about court
operations but are not seen by participants as appropriate mentors. The trainer’s
background and understanding of the cultural, historical or religious context in which
training takes place is critical. For example, as one practitioner noted, a trainer with
expertise in Sharia Law is likely to be more effective in transferring skills to judges in
an Islamic country.
To evaluate the extent to which a judicial training program effectively meets the needs of
the participants and the objectives of the trainers, two main factors come into play:
evaluation of the process and evaluation of the impact or result. Each may in turn be
measured by both objective and subjective criteria. Given the difficulty of measuring
many of the components of judicial training, a mix of methods and criteria can help to
enhance the reliability of evaluation criteria.
Quantitative Process Indicators – First, one can measure the quantitative features
of the training – such as the number of judges trained, the number and length of
training sessions conducted, the number of materials distributed, the schedule and
length of training, etc. Another example would be an indicator measuring judges’
participation in training, in terms of whether a specific minimum threshold for
attendance was met. These indicators are objective and easily quantified, and
usually allow the evaluation question to be answered with a “yes” or “no” response.
An evaluator will also want to know whether the training was conducted on schedule
and within budget. In that case, the indicators are the date by which the training was
to have been completed and the actual cost of the training. An example of a training
evaluation checklist is found in the compilation of resources section below.
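The reduction of process indicators to "yes" or "no" answers described above can be sketched in a short script. This is a minimal illustration only; the record fields, figures, and the 80% attendance threshold are hypothetical assumptions, not values drawn from any actual program.

```python
from datetime import date

# Hypothetical training records; all field names and figures are illustrative.
training = {
    "judges_trained": 42,
    "judges_targeted": 50,
    "planned_end": date(2007, 6, 30),
    "actual_end": date(2007, 6, 15),
    "budget": 120_000,
    "actual_cost": 118_500,
    "attendance_rate": 0.87,  # share of scheduled sessions actually attended
}

MIN_ATTENDANCE = 0.80  # assumed minimum attendance threshold

# Each quantitative process indicator reduces to a yes/no answer,
# as the text describes.
indicators = {
    "target number of judges trained": training["judges_trained"] >= training["judges_targeted"],
    "minimum attendance threshold met": training["attendance_rate"] >= MIN_ATTENDANCE,
    "completed on schedule": training["actual_end"] <= training["planned_end"],
    "completed within budget": training["actual_cost"] <= training["budget"],
}

for question, answer in indicators.items():
    print(f"{question}: {'yes' if answer else 'no'}")
```

A checklist of this kind is easy to administer, but, as noted above, it speaks only to the project's administration, not to whether the training changed judicial performance.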
Course materials should also be reviewed for scope and compatibility with other
necessary subjects or priority areas of reform. Questions to ask are:
Were the materials broad enough to cover the full scope of the judicial enterprise?
If the materials are broad in scope, was enough time spent to absorb each?
What other trainings have been given or are planned, and do they dovetail with the
particular training being evaluated?
How does the training correlate with other rule of law initiatives being undertaken in
the country or region in question?
Impact Indicators: The evaluation of judicial training is not complete until both the
outputs and the impacts of the training have been assessed. Evaluation of the
outcomes or impact of judicial training involves consideration of its longer-term effects,
particularly the improvement in the way that judges perform their work as a result of the
training and how that change contributed to judicial reform. These indicators are known
as “Impact Indicators” because they measure elements external to the project and how
those elements contribute to enhancing the quality of justice.
For example, an evaluator will consider whether there has been a change in judicial
performance, drawing on several sources of data:
Quantitative Impact Indicators -- The data used to measure the impact of judicial
training can be obtained in a number of ways, including:
Answers given in questionnaires completed by the judges after the training; and
Judicial management data such as court statistics.
Depending on the specific topic of the training delivered, evaluators can also examine
case statistics to see if there has been any discernible change between the pre and
post-training periods. This would include statistics on conviction rates; the number of
new cases each year and the number of case disposals; case processing time; the
number of appeals and the percentage of successful appeals, as well as the number and
nature of complaints against the judiciary and their outcomes. This type of
measurement can be used to complement (and perhaps contradict) the subjective
reporting intrinsic in surveys or questionnaires completed by trainees themselves.
However, the accuracy of this method depends upon reliable data being regularly
compiled and made available by court administrators. This is rarely the case in a
country emerging from conflict, but such data collection will be vital to develop if
international efforts are to be sustainable.
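The pre/post comparison of case statistics described above can be sketched as follows. The figures, field names, and the specific ratios computed are illustrative assumptions; real evaluations would draw on actual court records and control for caseload changes unrelated to the training.

```python
# Hypothetical court statistics for the year before and after the training.
pre = {"cases_filed": 1200, "cases_disposed": 900,
       "avg_processing_days": 410, "appeals_reversed_pct": 22.0}
post = {"cases_filed": 1250, "cases_disposed": 1100,
        "avg_processing_days": 340, "appeals_reversed_pct": 18.5}

def clearance_rate(stats):
    """Disposals as a share of new filings: a common court-performance ratio."""
    return stats["cases_disposed"] / stats["cases_filed"]

def pct_change(before, after):
    """Relative change from the pre-training to the post-training period."""
    return 100.0 * (after - before) / before

report = {
    # Change in clearance rate, in percentage points.
    "clearance_rate_change_pts": round(
        100 * (clearance_rate(post) - clearance_rate(pre)), 1),
    # Relative change in average case processing time.
    "processing_time_change_pct": round(
        pct_change(pre["avg_processing_days"], post["avg_processing_days"]), 1),
    # Change in the share of appeals reversed, in percentage points.
    "reversal_rate_change_pts": round(
        post["appeals_reversed_pct"] - pre["appeals_reversed_pct"], 1),
}
print(report)
```

Such figures only complement, and never replace, the qualitative methods discussed below: a drop in processing time may reflect the training, or any number of unrelated changes in the court's caseload.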
Key informant interviews. Evaluators can also conduct what are known as “key
informant interviews” with court users who, depending on the type of legal system,
might include members of the public, court employees and clerks, notaries, lawyers
and other judges who did not participate in the training. These individuals often have
the unique opportunity to observe and offer valuable insights regarding actual
changes in judges’ ability, decorum, behavior and skills. A combination of asking the
participants themselves and a control group in a position to interact with judges can
provide useful data.
Focus groups. Another method of measuring impact, albeit a less reliable one, is to
ask select members of civil society, community representatives and public interest
groups about their impressions of the performance of the judiciary after the training
and their confidence in the integrity of the judicial system. The criteria used to
measure satisfaction with judicial services would include the degree to which the
judge in question protects human rights, the judge’s accessibility, openness,
efficiency, transparency and conduct. It may not, however, be easy to select
appropriate representatives from these groups, nor to satisfy statistical validity in how
that selection was made. This method is also highly subjective in that the responses
of those surveyed may reflect their own biases.
Evaluators of judicial training will also face a number of methodological and other
challenges.
Judicial Independence: There are concerns that efforts to assess the impact of
judicial training may undermine or threaten judicial independence. For example, a
judge who issued an unpopular ruling may face allegations of incompetence, with a
negative assessment of his or her judicial performance cited as evidence. To some
extent, this is unavoidable as judges require training, and evaluation is a critical part
of that process. Where possible, results of an evaluation should only be made
available to participants and their supervisors, rather than to the public or other
branches of government. Some countries have established judicial training programs
under the auspices of an independent national judicial training center to ensure
critical evaluations while maintaining judicial independence.
Long-term Nature of Judicial Reform: The behavioral change that is the basis for
sustainable judicial reform takes time to register an impact. While mileposts along
the way are necessary, the impact of a judicial reform project may not be measured
in time frames of a year or two. Rule of law professionals should not be tempted to
concentrate solely on quantifiable outputs (such as the provision of legal textbooks
and materials) at the expense of qualitative measures taken to improve judicial
performance over the long term.
As this Consolidated Response indicates, judicial training is one of the most difficult
forms of training to evaluate. INPROL would welcome further comment by members on
their experience in designing an evaluation process and selecting performance
indicators to assess judicial training, particularly in countries transitioning from war to
peace.
________________
Compilation of Resources:
This Consolidated Response draws from many of the following resources, which are
useful reference tools for rule of law practitioners. All listed documents with a hyperlink
are uploaded to the INPROL Digital Library.
Convention for the Suppression of the Traffic in Persons and of the Exploitation of
the Prostitution of Others (1949)
The Geneva Conventions (1949)
Convention relating to the Status of Refugees (1951)
United Nations Basic Principles on the Independence of the Judiciary (1985)
United Nations Convention against Transnational Organized Crime (2000)
Best Practices
Information:
New Queries: To send a new query, please send an email to inprol@inprol.org.
Documents: To submit a document to INPROL, please login to INPROL and visit
http://www.inprol.org/uploadcontent or send an email (with the document attached) to
inprol@inprol.org.