TRAINING EVALUATION
Evaluation
As defined by the American Evaluation Association, evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness. Evaluation is the systematic collection and analysis of the data needed to make decisions, a process in which most well-run programs engage from the outset. Evaluation is the process of establishing the worth of something, where worth means its value, merit or excellence. Evaluation is a state of mind rather than a set of techniques.
Evaluation is that part of a project where you stand back and take stock. It is where you:
o Monitor what you are doing.
o Measure what you have done.
o Find out what was effective and what was not.
It is not an add-on feature of well-funded projects; it is a necessary part of all projects, and it is at its best when it is fully integrated into all project stages. It is there to help you:
o Learn from your mistakes.
o Pass on the benefits of your experience to others.
o Account for the money and resources you have used.
Training Evaluation
Training evaluation is the assessment of the effectiveness of a training program in terms of the benefits to the trainees and the organisation: the process of collecting outcomes to determine whether the training program was effective, and of deciding from whom, what, when, and how information should be collected. The specification of values forms the basis for evaluation. Evaluation has been defined as any attempt to obtain information on the effects of training on performance and to assess the value of the training in the light of that information. Evaluation helps in controlling and correcting the training programme.
Evaluating training:
Evaluation is any attempt to obtain information (feedback) on the effects of a training programme, and to assess the value of the training in the light of that information. Evaluation leads to control, which means deciding whether or not the training was worth the effort and what improvements are required to make it even more effective. Training evaluation is of vital importance because monitoring the training function and its activities is necessary in order to establish its social and financial benefits and costs. Evaluation of training within work settings can assist a trainer or organization in learning more about the impact of training. It is important to understand the purpose of evaluation before planning it and choosing methods to carry it out. Some advantages of evaluation are difficult to witness directly, but when done correctly it can affect organizations in positive ways.
Evaluation feedback assists in improving the efficiency and effectiveness of:
o Training content and methods.
o Use of organization budget, staff, and other resources.
o Employee performance.
o Organizational productivity.
Through evaluation, trainers:
o Recognize the need for improvement in their training skills.
o Get suggestions from trainees for improving future training.
o Can determine if training matches workplace needs.
Benefits of Evaluation:
o Improved quality of training activities.
o Improved ability of the trainers to relate inputs to outputs.
o Better discrimination of training activities between those that are worthy of support and those that should be dropped.
o Better integration of the training offered and on-the-job development.
o Better co-operation between trainers and line managers in the development of staff.
o Evidence of the contribution that training and development are making to the organization.
[Figure: Measure Performance -> Train -> Measure Performance (pre-post-training performance method)]
In the pre-post-training performance method, each participant is evaluated prior to training and rated on actual job performance. After the instruction (of which the evaluator has been kept unaware) is completed, the employee is reevaluated. As with the post-training performance method, the increase is assumed to be attributable to the instruction. In contrast to the post-training performance method, however, the pre-post-performance method deals directly with job behavior.
Careful attention should be given to determining when and how pre- and post-tests are conducted. When conducting pre-tests (or pre-program measurements), four general guidelines are recommended.
1. Avoid pre-tests when they alter the participants' performance. Pre-tests are intended to measure the state of the situation before the HRD program begins. If a pre-test would alter performance, it should be given far enough in advance of the program to minimize its effect, or omitted.
2. Do not use pre-tests when they are meaningless. If the program teaches completely new material or provides information participants do not yet know, pre-test results may be meaningless.
3. Pre-tests and post-tests should be identical or approximately equivalent, so that scores have a common base for comparison. Identical tests may be used for pre-tests and post-tests, although this may influence results when they are taken the second time.
4. Pre-testing and post-testing should be conducted under the same or similar conditions. The time allowed for the test and the conditions under which each test is taken should be approximately the same.
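Guideline 3 above implies a simple comparison once identical pre- and post-tests have been scored: the per-participant gain. The sketch below illustrates that comparison; the scores and function names are hypothetical, not taken from the study's own instruments.

```python
# Sketch: comparing identical pre- and post-tests for one training group.
# Scores are hypothetical illustrations (out of 100).

def average(scores):
    return sum(scores) / len(scores)

def training_gain(pre_scores, post_scores):
    """Mean per-participant gain between identical pre- and post-tests."""
    assert len(pre_scores) == len(post_scores), "one pre and one post score per participant"
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return average(gains)

pre = [42, 55, 38, 61, 47]   # hypothetical pre-test scores
post = [68, 72, 59, 80, 66]  # post-test scores for the same participants

print(f"Mean pre-test score:  {average(pre):.1f}")   # 48.6
print(f"Mean post-test score: {average(post):.1f}")  # 69.0
print(f"Mean gain:            {training_gain(pre, post):.1f}")  # 20.4
```

Because the tests are identical and the conditions comparable (guidelines 3 and 4), the mean gain can reasonably be read as a before/after difference rather than an artifact of the instrument.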
Any organization that provides a service or product is usually interested in feedback from those who use it. While participant feedback is popular, it is also subject to misuse: a positive reaction at the end of the program is no assurance that learning has occurred or that there will be a change in on-the-job performance. Even so, a high-quality evaluation would be difficult to achieve without feedback questionnaires. Feedback is information about the results of a programme which is used to change the process itself. Negative feedback reduces the error or deviation from a goal state; positive feedback increases the deviation from an initial state. In this context, feedback can be information about one's action or thinking, but it does not necessarily involve the application of consequences. Viewed from an information-processing perspective, feedback could be positive, negative, or simply neutral, indicating only that the thinking, affect, or behavior was perceived and understood. After the completion of the programme, the feedback is evaluated and interpretations are made about the employees:
o Did they like the programme?
o Did they gain something from it?
o Is the programme going to help them in their work?
o What are the pitfalls of the programme?
With the answers to these questions, obtained through feedback, one can analyze the training programmes effectively and efficiently. In this project we place particular emphasis on this feedback, because the whole of the analysis and interpretation is based on the immediate reactions of the trainees, collected through feedback forms after the completion of each programme.
Advantages/disadvantages of feedback
Several important advantages are inherent in the use of feedback questionnaires. Two important ones are:
1. Reaction questionnaires provide a quick reaction from participants while information is still fresh in their minds. By the end of the program, participants have formed an opinion about its effectiveness and the usefulness of the program materials. This reaction can help make adjustments and provide evidence of the program's effectiveness.
2. They are easy to administer, usually taking only a few minutes. And, if constructed properly, they can be easily analyzed, tabulated, and summarized.
Three disadvantages of feedback questionnaires are:
1. The data are subjective, based on the opinions and feelings of the participants at the time of testing. Personal bias may exaggerate the ratings.
2. Participants are often too polite in their ratings. At the end of a program, they are often pleased and may be happy to get it out of the way. Therefore, a positive rating may be given when they actually feel differently.
3. A good rating at the end of a program is no assurance that the participants will practice what has been taught in the program.
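The "easily analyzed, tabulated, and summarized" property claimed above can be made concrete with a short sketch. The ratings below are hypothetical responses on the 1-to-10 scale used later in this study; the function name is an illustration, not part of the study's own tooling.

```python
# Sketch: tabulating end-of-programme reaction ratings.
from collections import Counter

def summarize_reactions(ratings):
    """Return (count per rating value, overall mean rating)."""
    counts = Counter(ratings)
    mean = sum(ratings) / len(ratings)
    return counts, mean

ratings = [8, 9, 7, 8, 10, 6, 9, 8, 7, 9]  # hypothetical responses, scale 1-10
counts, mean = summarize_reactions(ratings)
for score in sorted(counts):
    print(f"rating {score}: {counts[score]} participant(s)")
print(f"mean rating: {mean:.1f}")  # 8.1
```

A tabulation like this can be produced within minutes of collecting the forms, which is precisely why reaction questionnaires are so widely used despite their subjectivity.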
Identify any remaining training gaps, and include them in future plans. Review return on investment.
[Figure: Train -> Measure Performance (post-training performance method)]
The first approach is the post-training performance method. Measures are taken at a predetermined time after the program is completed, which allows participants an opportunity to apply on the job what they have learned in the program. Participants' performance is measured after attending a training program to determine whether behavioral changes have been made. If changes did occur, we may attribute them to the training, but we cannot emphatically state that the change in behavior related directly to the training; accordingly, the post-training performance method may overstate training benefits. This evaluation is done after some time has passed since the training programme, such as a year or so, to determine whether changes in behavior have occurred and whether the participants have gained something from the programme and are applying it in their work for effective returns.
Description, timing and measurement focus of the two types of test:

Pre-Test
  Timing: Taken at predetermined times during the program, sometimes daily.
  Measurement focus: Reaction to the program and progress towards skill and knowledge acquisition.

Post-Test
  Timing: Taken at a predetermined time after the program has been completed, usually a different time period for each type of data.
  Measurement focus: Learning retention, job performance and business impact after the program.
VARIOUS MODELS OF TRAINING EVALUATION
1. Kirkpatrick's Levels of Training Evaluation: Training and the Workplace. Most training takes place in an organizational setting, typically in support of skill and knowledge requirements originating in the workplace. This relationship between training and the workplace is illustrated in Figure.
Using the diagram in Figure as a structural framework, we can identify five basic points at which we might take measurements, conduct assessments, or reach judgments. These five points are indicated in the diagram by the numerals 1 through 5:
1. Before training
2. During training
3. After training or before entry (reentry)
4. In the workplace
5. Upon exiting the workplace
The four elements of Kirkpatrick's framework, also shown in Figure, are defined below using Kirkpatrick's original definitions.
1. Reactions. "Reaction may best be defined as how well the trainees liked a particular training program." Reactions are typically measured at the end of training, at Point 3 in Figure. However, that is a summative or end-of-course assessment; reactions are also measured during the training, even if only informally in terms of the instructor's perceptions.
2. Learning. "What principles, facts, and techniques were understood and absorbed by the conferees?" What the trainees know or can do can be measured during and at the end of training, but in order to say that this knowledge or skill resulted from the training, the trainees' entering knowledge or skill levels must also be known or measured. Evaluating learning, then, requires measurements at Points 1, 2 and 3: before, during and after training.
3. Behavior. Changes in on-the-job behavior. Clearly, any evaluation of changes in on-the-job behavior must occur in the workplace itself, at Point 4 in Figure. It should be kept in mind, however, that behavior changes are acquired in training and then transfer (or fail to transfer) to the workplace. It is deemed useful, therefore, to assess behavior changes both at the end of training and in the workplace. Indeed, the origins of human performance technology can be traced to early investigations of disparities between behavior changes realized in training and those realized on the job.
4. Results. Kirkpatrick did not offer a formal definition for this element of his framework. Instead, he relied on a range of examples to make his meaning clear: "Reduction of costs; reduction of turnover and absenteeism; reduction of grievances; increase in quality and quantity of production; or improved morale which, it is hoped, will lead to some of the previously stated results."
These factors are also measurable in the workplace -- at Point 4 in Figure. It is worth noting that there is a shifting of conceptual gears between the third and fourth elements in Kirkpatrick's framework. The first three elements center
on the trainees: their reactions, their learning, and changes in their behavior. The fourth element shifts to a concern with organizational payoffs or business results. We will return to this shift in focus later on.
2. The CIRO Approach: Another four-level approach, originally developed by Warr, Bird, and Rackham, the CIRO framework offers a rather distinctive way to classify evaluation processes. Originally used in Europe, it has a much broader scope than the traditional use of the term evaluation in the United States.
Adopting the CIRO approach to evaluation gives employers a model to follow when conducting training and development assessments. Employers should conduct their evaluation in the following four general categories:
C - context, or the environment within which the training took place
I - inputs to the training event
R - reactions to the training event
O - outcomes
A key benefit of using the CIRO approach is that it ensures that all aspects of the training cycle are covered.
3. Daniel Stufflebeam's CIPP Model: CIPP is an acronym for Context, Input, Process and Product. This evaluation model requires the evaluation of context, input, process and product in judging a programme's value.
CIPP is a decision-focused approach to evaluation and emphasises the systematic provision of information for programme management and operation. In this approach, information is seen as most valuable when it helps programme managers to make better decisions, so evaluation activities should be planned to coordinate with the decision needs of programme staff.
When ROIs are calculated, they should be compared to targets for HRD programs. Sometimes these targets are determined based on company standards for capital expenditures. Others are based on what management expects from an HRD program, or what level they would require to approve implementation of a program.
The calculation of the ROI deserves much attention, because it represents the ultimate approach to evaluation and is becoming an increasingly important part of the HRD function as business, industry, and government become more bottom-line oriented. It provides a sound basis for calculating the efficient use of the financial resources allocated to HRD activities.
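The ROI calculation discussed above is conventionally expressed as net programme benefits divided by programme costs, times 100. A minimal sketch, with hypothetical figures (the function name and the comparison against a target are illustrations, not values from this study):

```python
# Sketch of the conventional HRD ROI formula:
#   ROI (%) = (benefits - costs) / costs * 100

def roi_percent(benefits, costs):
    """Return on investment of an HRD programme, as a percentage."""
    net_benefits = benefits - costs
    return net_benefits / costs * 100

# Hypothetical programme: Rs. 80,000 in costs, Rs. 2,00,000 in measured benefits.
roi = roi_percent(200_000, 80_000)
print(f"ROI: {roi:.0f}%")  # prints "ROI: 150%"

# As noted above, the calculated ROI should be compared to a target,
# e.g. a company standard for capital expenditures (hypothetical: 25%).
target = 25
print("meets target" if roi >= target else "below target")
```

Note that an ROI of 0% means the programme exactly recovered its costs; only values above the organization's target indicate that the expenditure outperformed the standard set for it.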
METHODS OF EVALUATION:
o Questionnaires: Comprehensive questionnaires could be used to obtain the opinions, reactions, and views of trainees.
o Tests: Standard tests could be used to find out whether trainees have learnt anything during and after the training.
o Interviews: Interviews could be conducted to find the usefulness of the training offered to operatives.
o Studies: Comprehensive studies could be carried out eliciting the opinions and judgments of trainers, superiors and peer groups about the training.
o Human resource factors: Training can also be evaluated on the basis of employees' satisfaction, which in turn can be examined on the basis of decreases in employee turnover, absenteeism, accidents, grievances, discharges, dismissals, etc.
o Cost-benefit analysis: The cost of training (cost of hiring trainers, tools to learn, training centre, wastage, production stoppage, opportunity cost of trainers and trainees) could be compared with its value in order to evaluate a training programme.
o Feedback: After the evaluation, the situation should be examined to identify the probable causes for gaps in performance. The training evaluation information (about cost, time spent, outcomes, etc.) should be provided to the instructors, trainees and other parties concerned for control, correction and improvement of trainees' activities. The training evaluator should follow it up sincerely so as to ensure effective implementation of the feedback report at every stage.
EVALUATION
Common myths about training evaluation include:
Myth #1: I can't measure the results of my training effort.
Myth #2: I don't know what information to collect.
Myth #3: If I can't calculate the return on investment, then it is useless to evaluate the program.
Myth #4: Measurement is only effective in the production and financial arenas.
Myth #5: My chief executive officer (CEO) does not require evaluation, so why should I do it?
Myth #6: There are too many variables affecting the behavior change for me to evaluate the impact of training.
Myth #7: Evaluation will lead to criticism.
Myth #8: I don't need to justify my existence; I have a proven track record.
Myth #9: Measuring progress toward learning objectives is an adequate evaluation strategy.
Myth #10: Evaluation would probably cost too much.
CHAPTER-4
Methodology of Study
Evaluation of the effectiveness of training programs is essential in order to ensure continuous improvement in their effectiveness, which directly influences the work of every individual who contributes towards the attainment of overall organizational goals. Training imparted to employees is thus directly related to the performance of those who underwent it, which is reflected in their actual job performance. Keeping this as the main objective, the study examined pre- and post-training evaluation, through pre-assessments (in some cases) and feedback from the participants after the training. The pre-training evaluation was generally conducted before the training started, and the post-training evaluation was conducted, with the help of feedback forms, after the participants had completed the training program.
Mode of data collection: The data required for the completion of the study has been collected through secondary sources: the pre-assessments (in some cases) and feedback forms collected by HRDC after the completion of training programmes. The major analysis tools used have been percentages, and the data is interpreted with the help of bar graphs and pie diagrams. Other information has been collected from the websites http://www.csirhrdc.res.in & www.csir.res.in, the annual reports and other related literature.
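The percentage analysis mentioned above amounts to converting raw feedback counts into shares of the total, which can then be plotted as bar graphs or pie diagrams. A minimal sketch, with hypothetical response categories and counts (not figures from the study):

```python
# Sketch: converting feedback counts into the percentages used for
# bar graphs and pie diagrams. Categories and counts are hypothetical.

def to_percentages(counts):
    """Map each category's count to its percentage of all responses."""
    total = sum(counts.values())
    return {category: count / total * 100 for category, count in counts.items()}

responses = {"Excellent": 18, "Good": 12, "Average": 6, "Poor": 4}
for category, pct in to_percentages(responses).items():
    print(f"{category}: {pct:.0f}%")
# Excellent: 45%, Good: 30%, Average: 15%, Poor: 10%
```

These percentages are exactly the values a pie diagram displays as slice sizes, so the same computation underlies both forms of presentation.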
In the segment on quantitative evaluation, the participants were asked to mark their responses on a scale of 1 to 10 for: A) Programme (facets and aspects) and B) Logistics/Arrangements.
Scaling Technique Used: In order to facilitate the responses, a ten-point rating scale was used. The respondents were asked to make a choice between ten response categories, on a scale of 1 to 10: 1 being the lowest, 5 the middle, and 10 the highest.
In the segment on qualitative evaluation, the participants were asked to write their responses according to their convenience. Feedback from eight training programmes on various topics has been analysed and evaluated for effectiveness. The analysis was done with the help of charts and graphs. Conclusions on the effectiveness of the programmes have been drawn and suggestions have been made for future programmes.
22-25 March 2007: total no. of participants 36
6-8 August 2007: total no. of participants 42
Training programme on Service Jurisprudence in Personnel Management, 25-26 July 2006: total no. of participants 16
CHAPTER-5
The following steps are considered in training programmes:
1. Assessing/identifying the training needs.
2. Designing a training module to fulfil the need.
3. Arrangements for conference room, audio-visual aids, training materials, course kit etc.
4. Arranging faculty from outside and inside CSIR.
5. Presentation or lecture delivered by faculty.
6. Evaluation of the programme.
LIMITATIONS:
1. It was not possible to interview all the employees who had attended the training programmes, because getting an appointment out of their busy schedules was a difficult task. Evaluation of the training programmes has therefore been done only on the basis of the feedback forms submitted by the participants during the programme.
2. Due to the scarcity of time and the scope of the study, only 8 programmes have been evaluated.
3. Conclusions and suggestions have been made on the basis of only the 8 programmes evaluated.
4. The project was to be completed within the stipulated time of 60 days, so more data could not be collected. Only those training programmes for which data was available have been selected.
OBJECTIVES OF THE STUDY:
Primary Objectives:
o To assess the effectiveness of the training programmes being organized at HRDC (CSIR), Ghaziabad.
o To suggest strategies for improvement in future programmes.
o To suggest strategies for better evaluation of the programmes.
Secondary Objective: To assess the following on the basis of feedback:
o The relevance/usefulness of the programmes.
o The appropriateness of the duration, contents and structure of the programmes.
o The choice of faculty and resource persons.
o The appropriateness of the logistics arrangements of the programmes.
o Topics of interest, irrelevant topics, and topics that could be added to or dropped from the programmes.
o Any specific ideas for improvement that the officers gained during the training programme.
o How the workers will apply the knowledge, skills and understanding gained during the training programme for the betterment of their work.
o Whether the workers consider participation in the programme worthwhile.
o The ways they have benefited from interaction with fellow participants during the training programme.
o Overall programme effectiveness.