
A Study on

TRAINING EFFECTIVENESS

Conducted at

SIEMENS LTD.

By,
Niki Dholakia
M.A (Industrial Psychology)
Smt. P.N Doshi Women’s College of Arts
Ghatkopar
April 2007

Research guided by:


Mrs. Krishna Upadhyaya
Head of Department (Psychology)
Smt. P.N Doshi Women’s College of Arts

Title: Training Effectiveness
This project is submitted in partial fulfillment of the MA-II Industrial/Organizational Psychology course requirements, SNDT University, by Ms Niki J. Dholakia of Smt. P. N. Doshi Women's College of Arts, Ghatkopar.

Month of Submission: March 2007

Name: Ms Niki J. Dholakia

Name of the Research Guide: Mrs. Krishna Upadhyaya (Head of the Dept. of Psychology)

Office Mentor: Ms Lucille Fernandes

Name of the College: Smt. P. N. Doshi Women's College

Industrial Psychology – MA-II (2006-07)

Office certificate

Undertaking

This is to certify that I, Miss Niki J. Dholakia, a student of MA-II, Industrial/Organizational Psychology at Smt. P. N. Doshi Women's College of Arts, Ghatkopar, have successfully completed the project entitled 'Training Effectiveness' at Siemens Ltd.

I hereby state that the data collected by me for the project during the internship at Siemens Ltd. will be kept strictly confidential and will not be used for any other purpose. I assure that it will be used for research purposes only.

Student's Signature                              Head of Department

Acknowledgement

With the completion of this project, I can say that studying and analyzing the behavior of a person is an exhilarating and complex process. It gave me immense satisfaction to study the effects of a behavioral programme, and in my journey of completing this project many people contributed significantly; I would like to extend my gratitude to them all.

I would like to express my sincere gratitude to Dr. Mukund Vyas for giving me the opportunity to work with such a reputed group as Siemens.

I am very much grateful to my mentor & guide, Ms. Lucille Fernandes for letting me
take up this topic & providing the necessary guidance whenever required.

My humble gratitude to Mrs. Krishna Upadhyaya, my guide and professor back in college, for sharing with me her novel ideas and approach, her knowledge and suggestions, in order to make the study challenging.

My humble acknowledgement to all the supervisors and employees of Siemens Ltd. who took time off their work to fill in the questionnaire, and to other colleagues.

Besides, I also thank whole-heartedly the entire HR team of Siemens Limited, especially Ms Aboli Deo, for giving me their support whenever I needed it.

Last, but not least, I am grateful to my family for supporting me in every possible way.
Abstract

To measure the effects of a behavioral training programme, a study was conducted to observe whether the techniques imparted in the "Communication Skills and Interpersonal Programme" were beneficial to the employees. The sample included 16 participants (executive level) in the communication and interpersonal skills programme, as well as their respective supervisors. The questionnaire method was used to collect data.

Kirkpatrick's four levels (reaction, learning, behavior, and results) were checked to see whether they had occurred over a span of three months. Results show that reaction and learning effects were strongly seen, while behavioral and results effects were only partially seen.

CONTENTS

1. Chapter 1 – Review of Literature

What is training?
The post-training environment
Evaluation of the training program
Purpose of evaluation
Different types of evaluation
Approaches to training evaluation
Models of training evaluation
Kirkpatrick's training evaluation model
Organizational Elements Model
CIPP Model
A new model
Types of training evaluation instruments
Current practices in evaluation of training

2. Chapter 2 – Company Profile

Values of the Company

3. Chapter 3 – Methodology
Objective
Variables (independent and dependent)
Operational definitions
Sample description
Description of the tool
Demographic tables
Table 3.1 – Total sample, including the number of participants and supervisors
Table 3.2 – Sex-wise distribution of the sample
Table 3.3 – Participants from different departments
Table 3.4 – Distribution of participants according to years of experience
Table 3.5 – Age-wise distribution of the sample (participants)
Table 3.6 – Supervisors from different departments
Table 3.7 – Distribution of supervisors according to years of experience
Need and significance of the study

4. Chapter 4 – Results and Interpretation

Table 4.1 – Responses of all participants, in percentage form
Graph 4.1 – Ratings of all 16 participants on question 1
Graph 4.2 – Ratings of all 16 participants on question 2
Graph 4.3 – Ratings of all 16 participants on question 3
Graph 4.4 – Ratings of all 16 participants on question 4
Graph 4.5 – Responses of all participants on question 5
Table 4.2 – Responses of all supervisors, in percentage form
Graph 4.6 – Ratings of all 16 supervisors on question 1
Graph 4.7 – Ratings of all 16 supervisors on question 2
Graph 4.8 – Ratings of all 16 supervisors on question 3
Conclusion
Recommendations
Limitations

5. Chapter 5 – Annexure
Table 5.1 – Raw data table representing participants' responses
Table 5.2 – Raw data table representing supervisors' responses
Questionnaire

6. Bibliography

Chapter No1: Review of Literature

Every organization needs well-trained and experienced people to perform the activities that have to be done. Successful candidates placed on the job need training to perform their duties effectively. Workers, executives, and managers all need training and development to enable them to grow and acquire maturity of thought and action. Thus training and development constitute an ongoing process in an organization. While training and development are both processes for enhancing employees' skills, they have somewhat different focuses.
Goldstein described the training process as the systematic acquisition of attitudes, concepts, knowledge, roles, or skills that results in improved performance at work. According to Wayne Cascio, training consists of planned programs undertaken to improve employee knowledge, skills, attitudes, and social behavior so that the performance of the organization improves considerably. According to Armstrong, training is the systematic modification of behavior through learning, which occurs as a result of education, instruction, development, and planned experience. K. Aswathappa has defined training as 'the process of imparting specific skills', and C. B. Mamoria has defined it as 'a process of learning a sequence of programmed behavior'.
The objective of training and development is to raise the level of performance in one or more aspects. This is achieved either by providing new knowledge and information relevant to a job, by teaching new skills, or by instilling in an individual new attitudes, values, motives, and other personality characteristics.
In the past, training and development had somewhat different focuses. The term training was applied to the process of enhancing the skills of employees at lower levels in the organization, while the term development was usually applied to the process of enhancing the skills of managers or employees at higher levels. However, this distinction is no longer strictly applicable.
Training prepares employees not only to perform their present jobs more efficiently but also for higher positions with increased responsibilities.
THE POST-TRAINING ENVIRONMENT
The post-training environment strongly influences the effectiveness of the training program. Employee motivation to apply newly learnt skills is sometimes adversely affected by the post-training environment, and certain limitations in that environment may affect the actual transfer of training.

Paul Muchinsky defines transfer of training as the extent to which the trainees
effectively apply the knowledge, skills and attitudes gained in the training content
back to the job. Organizations that practice the culture of continuous learning
facilitate transfer of training.
There are two important aspects pertaining to transfer of training:
Generalization: It is the extent to which the trained skills and behaviors are exhibited
in the transfer setting.
Maintenance: It is the length of time that trained skills and behaviors continue to be used on the job.
A number of factors affect the transfer of training. Some of them are:
Supervisory support in the form of reinforcement and goal setting is vital to
successful transfer of training. The kind of supervisory support the trainees receive
following the training will determine their attitude towards future training programs.
Transfer of training is affected by opportunities that the trainees have to apply what
they have learnt.
Successful transfer of training is affected by the time gap between completion of
training and the opportunity to use and apply it.
Trainees who receive relapse-prevention training show more trained behaviors than trainees with no relapse-prevention training. Relapse-prevention training is designed to prepare trainees for the post-training environment and to anticipate and cope with high-risk situations.
EVALUATION OF TRAINING PROGRAMS
Training evaluation is one of the most under-researched and neglected areas
of industrial/organizational psychology. (Beenker and Cohen, 1977)
Most organizations simply assume that their training programs are successful,
doing what they are supposed to do. Such assumptions are very often unjustified. It is
possible that the training programs are not serving the purpose that they were meant
for. Since so much time and money is invested in training programs, a systematic
evaluation of training is a must for every organization.
Hamblin defines evaluation of training as any attempt to obtain information
on the effects of a training program and to assess the value of the training in the light
of that information.
Evaluation of training means determining the impact and the effects of
training on the performance and behavior of the trainee. A systematic evaluation will
provide valuable information regarding the strengths and weaknesses of the training
program. It will provide feedback regarding the usefulness of the training program,
whether the money spent was worthwhile and whether it should be continued in the
future or not.
Evaluation is an integral part of most instructional design (ID) models. Evaluation
tools and methodologies help determine the effectiveness of instructional
interventions. Despite its importance, there is evidence that evaluations of training
programs are often inconsistent or missing (Carnevale & Schulz, 1990; Holcomb,
1993; McMahon & Carter, 1990; Rossi et al., 1979). Possible explanations for
inadequate evaluations include: insufficient budget allocated; insufficient time
allocated; lack of expertise; blind trust in training solutions; or lack of methods and
tools (see, for example, McEvoy & Buller, 1990).

Part of the explanation may be that the task of evaluation is complex in itself.
Evaluating training interventions with regard to learning, transfer, and organizational
impact involves a number of complexity factors. These complexity factors are
associated with the dynamic and ongoing interactions of the various dimensions and
attributes of organizational and training goals, trainees, training situations, and
instructional technologies.
Evaluation goals involve multiple purposes at different levels. These purposes include
evaluation of student learning, evaluation of instructional materials, transfer of
training, return on investment, and so on. Attaining these multiple purposes may
require the collaboration of different people in different parts of an organization.
Furthermore, not all goals may be well defined and some may change.
Different approaches to the evaluation of training, indicating how the complexity factors associated with evaluation are addressed, are presented below, along with suggestions for how technology can be used to support the process. In the following section, different approaches to evaluation and associated models are discussed. Next, recent studies concerning evaluation practice are presented. In the final section, opportunities for automated evaluation systems are discussed, together with recommendations for further research.
What are the Purposes of Evaluation?

Below are the five common purposes of evaluation. Some can be construed as
positive, and others might be construed as negative. Because evaluation data are
potentially powerful, it is important to know how and why the data might be used.
This has implications for how you design evaluation plans and how you report
evaluation results.
The five purposes we will consider:
Feedback
Control
Research
Intervention
Power
Feedback:
Feedback provides quality control over the design and delivery of training activities.
Some important "evaluation for feedback" questions include...
Are the objectives being met?
Were pertinent topics and learning events covered?
Is there evidence of before and after learning?
Is there evidence of transfer of learning back to the workplace?
Do we know for whom the program was most and least beneficial?
What is good and what is not so good?
Control:
Control relates training policy and practice to organizational goals (productivity, cost-
benefit analysis). Some important "evaluation for control" questions include...
What is the value of the training to the organization?
Are measures of worth compared to measures of cost?

Was consideration given to different combinations of interventions for tackling the
problem (were options besides training considered)?
Research:
Research is to add to knowledge of training principles to improve techniques. Some
important "evaluation for research" questions include...
Internal validity: To what extent can particular conclusions justly be drawn from the
data collected?
External validity: To what extent can information gained from training program be
applicable generally to other situations?
Intervention:
Intervention is the process of using evaluation to affect the way the program being
evaluated is viewed, and subsequently using this to redefine the sharing of learning
between trainers, trainees, and employing managers. Some important "evaluation for
intervention" questions include...
Are line managers involved in pre/post training activities?
Is management an extension of training?
Are changes made in the work environment to support use of new skills learned
during training?
Does training cause the training department to continually rethink and adjust the
deployment of trainers to functions that strengthen the role of training?
Power:
Power is to use evaluation information for a political agenda. Some important
"evaluation for power" questions include...
Is the information gathered and used via evaluation based upon sound evidence?
Is it presented fairly and ethically?
Is it reported to appropriate stakeholders?
What are the different types of evaluation?
There are five different types of evaluation. The terminology in this section is
foundational and thus very important for every evaluator to know and use.
The five types are as follows:
Formative evaluation
Summative evaluation
Process evaluation
Product evaluation
Usability testing
Formative evaluation
Evaluation of a program in its developmental stages. In the instructional design
process, formative evaluation occurs before the final product is completed. Formative
evaluation most often results in changes to the instructional program to make it more
effective.
Summative evaluation
Consists of those activities that judge the worth of a completed program. In the
instructional design process, summative evaluation is often viewed as the final stage.
Summative evaluation usually does not result in changes to the instructional program
being evaluated; it most often informs the trainer and organization whether students
learned from the training and whether on the job performance improved.

Process evaluation
Includes monitoring an instructor's performance, weighing the use of instructional
materials, or assessing the learning experiences found in the training setting.
Product evaluation
Judging training outcome and the costs incurred for a program offering. This also
involves relating the outcomes to pre-specified objectives and considering both
positive and unintended outcomes.
Usability testing
Determining whether people can use the product easily to meet their goals. The advent of computer-based and web-based instruction has made usability testing an increasingly popular and necessary form of evaluation.
Different jargon is used
Depending on the type of organization, some evaluation terminology is used more frequently than other terminology. As depicted in the table below, corporate or government settings focused on training development commonly use the terms "formative" and "summative" evaluation. In a K-12 or higher-education setting, the more likely terms are "process" and "product" evaluation. Product developers, regardless of corporate or school focus, use the term "usability testing" on a regular basis.

                   Training                 Education            Product Development
Evaluation Type    Formative & Summative    Process & Product    Usability Testing
Orientation        Performance              Learning             Using
Audience           Organization             Students & Society   Customers
Approaches to Evaluation of Training
Commonly used approaches to educational evaluation have their roots in systematic
approaches to the design of training. They are classified by the instructional system
development (ISD) methodologies, which emerged in the USA in the 1950s and
1960s and are represented in the works of Gagné and Briggs (1974), Goldstein
(1993), and Mager (1962). Evaluation is traditionally represented as the final stage in
a systematic approach with the purpose being to improve interventions (formative
evaluation) or make a judgment about worth and effectiveness (summative
evaluation) (Gustafson & Branch, 1997). More recent ISD models incorporate
evaluation throughout the process (see, for example, Tennyson, 1999).
Some other evaluation approaches are as follows:

Approaches                  Features
Kirkpatrick                 Implied linear -- levels
Dixon's six steps           Linear -- customers
Brinkerhoff's six stages    Iterative -- stages
Bramley's goal-based        Linear -- objectives
Wade's High Impact          Iterative -- ID steps
Phillips' results-based     Linear -- ROI or CB
Shapiro's Matrix            TQM focus -- standards
Pershing's perspectives     Units of analysis

Some evaluation perspectives are:

Goal-based                  Are objectives realized?
Goal-free                   What are the benefits?
Responsive                  Client driven!
Systems                     Efficient and effective?
Professional review         External expert appraisal
Quasi-legal approach        Court of inquiry -- testimonials taken!
Pre-program                 Assess availability and readiness of inputs
Six general approaches to educational evaluation can be identified (Bramley, 1991;
Worthen & Sanders, 1987), as follows:
Goal-based evaluation
Goal-free evaluation
Responsive evaluation
Systems evaluation
Professional review
Quasi-legal approach
Models of training evaluation
Goal-based and systems-based approaches are predominantly used in the
evaluation of training (Philips, 1991). Various frameworks for evaluation of training
programs have been proposed under the influence of these two approaches. The most
influential framework has come from Kirkpatrick (Carnevale & Schulz, 1990; Dixon,
1996; Gordon, 1991; Philips, 1991, 1997). Kirkpatrick’s work generated a great deal
of subsequent work (Bramley, 1996; Hamblin, 1974; Warr et al., 1978). Kirkpatrick’s
model (1959) follows the goal-based evaluation approach and is based on four simple
questions that translate into four levels of evaluation. These four levels are widely
known as reaction, learning, behavior, and results. On the other hand, under the
systems approach, the most influential models include: Context, Input, Process,
Product (CIPP) Model (Worthen & Sanders, 1987); Training Validation System
(TVS) Approach (Fitz-Enz, 1994); and Input, Process, Output, Outcome (IPO) Model
(Bushnell, 1990).
Table 1 presents a comparison of several system-based models (CIPP, IPO, & TVS)
with a goal-based model (Kirkpatrick’s). Goal-based models (such as Kirkpatrick’s
four levels) may help practitioners think about the purposes of evaluation, ranging from purely technical to covertly political. However, these models do not
define the steps necessary to achieve purposes and do not address the ways to utilize
results to improve training. The difficulty for practitioners following such models is
in selecting and implementing appropriate evaluation methods (quantitative,
qualitative, or mixed). Because of their apparent simplicity, “trainers jump feet first
into using [such] model[s] without taking the time to assess their needs and resources
or to determine how they’ll apply the model and the results” (Bernthal, 1995, p. 41).
Naturally, many organizations do not use the entire model, and training ends up being
evaluated only at the reaction, or at best, at the learning level. As the level of
evaluation goes up, the complexities involved increase. This may explain why only
levels 1 and 2 are used.

Kirkpatrick (1959):
1. Reaction: to gather data on participants' reactions at the end of a training program
2. Learning: to assess whether the learning objectives for the program are met
3. Behavior: to assess whether job performance changes as a result of training
4. Results: to assess costs vs. benefits of training programs, i.e., organizational impact in terms of reduced costs, improved quality of work, increased quantity of work, etc.

CIPP Model (1987):
1. Context: obtaining information about the situation to decide on educational needs and to establish program objectives
2. Input: identifying educational strategies most likely to achieve the desired result
3. Process: assessing the implementation of the educational program
4. Product: gathering information regarding the results of the educational intervention to interpret its worth and merit

IPO Model (1990):
1. Input: evaluation of system performance indicators such as trainee qualifications, availability of materials, appropriateness of training, etc.
2. Process: embraces planning, design, development, and delivery of training programs
3. Output: gathering data resulting from the training interventions
4. Outcomes: longer-term results associated with improvement in the corporation's bottom line: its profitability, competitiveness, etc.

TVS Model (1994):
1. Situation: collecting pre-training data to ascertain current levels of performance within the organization and defining a desirable level of future performance
2. Intervention: identifying the reason for the existence of the gap between the present and desirable performance, to find out if training is the solution to the problem
3. Impact: evaluating the difference between the pre- and post-training data
4. Value: measuring differences in quality, productivity, service, or sales, all of which can be expressed in terms of dollars
Table 1. Goal-based and systems-based approaches to evaluation
On the other hand, systems-based models (e.g., CIPP, IPO, and TVS) seem to be
more useful in terms of thinking about the overall context and situation but they may
not provide sufficient granularity. Systems-based models may not represent the
dynamic interactions between the design and the evaluation of training. Few of these
models provide detailed descriptions of the processes involved in each step. None
provide tools for evaluation. Furthermore, these models do not address the
collaborative process of evaluation, that is, the different roles and responsibilities that
people may play during an evaluation process.
KIRKPATRICK'S TRAINING EVALUATION MODEL:
Donald Kirkpatrick's book Evaluating Training Programs defined his
originally published ideas of 1959, thereby further increasing awareness of them, so
that his theory has now become arguably the most widely used and popular model for
the evaluation of training and learning. Kirkpatrick's four-level model is now
considered an industry standard across the HR and training communities. The four
levels of training evaluation model was later redefined and updated in Kirkpatrick's
1998 book, called 'Evaluating Training Programs: The Four Levels'.
The background and four levels of Kirkpatrick's evaluation model are discussed below.
Evaluation research has developed as a result of substantive support by the federal
government, beginning in World War II training and evaluation activities. It provides
answers to the questions of "Do we implement or repeat a program or not?" and "If
so, what modifications should be made?" Today, measurement in the form of
instructor and course evaluations is a fixture of most training programs.
However, when what goes on in the classroom is not the outcome of interest, these
are the wrong measurements -- or at least the unimportant ones -- to take. An outcome
that is of importance answers the question of "How have you used what you learned?"
This type of evaluation is difficult to conduct, as it requires being done at three
months, six months, or even at 12 months from the time of the training. Adding to the
difficulty is the aspect that the evaluators need to be co-workers, managers and
outside customers of the participant who took part in the training.
In order to classify areas of evaluation, Donald Kirkpatrick created what is still one of
the most widely used approaches, even though it was first developed in 1959. At the
time, he was a professor of marketing at the University of Wisconsin. His four levels
of evaluation are:
Level 1: Reaction - a measure of satisfaction
Level 2: Learning - a measure of learning
Level 3: Behavior - a measure of behavior change
Level 4: Results - a measure of results
Here are questions that should be asked at each level:

Level 1: Reaction – Were the participants pleased? What do they plan to do with what they learned?
Level 2: Learning – What skills, knowledge, or attitudes have changed? By how much?
Level 3: Behavior – Did the participants change their behavior based on what was learned in the program?
Level 4: Results – Did the change in behavior positively affect the organization?
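As a minimal illustration, the four levels and their guiding questions can be held in a simple lookup structure; the dictionary below is a hypothetical convenience for organizing evaluation planning, not part of Kirkpatrick's own materials (Python):

    # Sketch: Kirkpatrick's four levels mapped to their guiding questions.
    KIRKPATRICK_LEVELS = {
        1: ("Reaction", ["Were the participants pleased?",
                         "What do they plan to do with what they learned?"]),
        2: ("Learning", ["What skills, knowledge, or attitudes have changed?",
                         "By how much?"]),
        3: ("Behavior", ["Did the participants change their behavior based on "
                         "what was learned in the program?"]),
        4: ("Results", ["Did the change in behavior positively affect the "
                        "organization?"]),
    }

    # Print an evaluation-planning checklist, one block per level.
    for level, (name, questions) in KIRKPATRICK_LEVELS.items():
        print(f"Level {level}: {name}")
        for question in questions:
            print(f"  - {question}")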
Despite the fact that the Kirkpatrick model is now nearly 40 years old, its elegant simplicity has made it the most widely used method of evaluating training programs. An ASTD survey, which reports feedback from almost 300 HRD executives and managers, revealed that 67% of organizations that conduct evaluations use the Kirkpatrick model.
The Four Levels of Training Evaluation
Perhaps the best-known evaluation methodology is Kirkpatrick's Four-Level Evaluation Model (1994) of reaction, learning, performance, and impact. [The original chart showing how the evaluation process fits together is not reproduced here.]

Level One - Reaction


Evaluation at this level measures how the learners react to the training. This level is
often measured with attitude questionnaires that are passed out after most training
classes. This level measures one thing: the learner's perception (reaction) of the
course.
Learners are keenly aware of what they need to know to accomplish a task. If the
training program fails to satisfy their needs, a determination should be made as to
whether it's the fault of the program design or delivery.

This level is not indicative of the training's performance potential, as it does not
measure what new skills the learners have acquired or what they have learned that
will transfer back to the working environment. This has caused some evaluators to
down play its value. However, the interest, attention and motivation of the
participants are critical to the success of any training program. People learn better
when they react positively to the learning environment.
When a learning package is first presented, whether it be e-learning, classroom training, CBT, etc., the learner has to make a decision as to whether he or she will pay attention to it. If the goal or task is judged as important and doable, then the learner is normally motivated to engage in it (Markus & Ruvolo, 1990). However, if the task is
presented as low-relevance or there is a low probability of success, then a negative
effect is generated and motivation for task engagement is low.
This differs somewhat from Kirkpatrick. He writes, "Reaction may best be
considered as how well the trainees liked a particular training program" (1996).
However, the less relevance the learning package is to a learner, then the more effort
that has to be put into the design and presentation of the learning package. That is, if
it is not relevant to the learner, then the learning package has to "hook" the learner
through slick design, humor, games, etc. This is not to say that design, humor, or
games are not important. However, their use in a learning package should be to
promote the "learning process," not to promote the "learning package" itself. And if a learning package is built on sound design, then it should help the learners to fix a performance gap. Hence, they should be motivated to learn! If not, something went
dreadfully wrong during the planning and building processes! So if you find yourself
having to hook the learners through slick design, then you probably need to
reevaluate the purpose of the learning program.
Level Two - Learning
This is the extent to which participants change attitudes, improve knowledge, and
increase skill as a result of attending the program. It addresses the question: Did the
participants learn anything? The learning evaluations require post-testing to ascertain
what skills were learned during the training. In addition, the post-testing is only valid
when combined with pre-testing, so that you can differentiate between what they
already knew prior to training and what they actually learned during the training
program.
Measuring the learning that takes place in a training program is important in order to
validate the learning objectives. Evaluating the learning that has taken place typically
focuses on such questions as:
What knowledge was acquired?
What skills were developed or enhanced?
What attitudes were changed?
Learner assessments are created to allow a judgment to be made about the learner's
capability for performance. There are two parts to this process: the gathering of
information or evidence (testing the learner) and the judging of the information (what
does the data represent?). This assessment should not be confused with evaluation.
Assessment is about the progress and achievements of the individual learners, while
evaluation is about the learning program as a whole (Tovey, 1997, p. 88).

Evaluation in this process comes through the learner assessment that was built in the
design phase. Note that the assessment instrument normally has more benefits to the
designer than to the learner. Why? For the designer, the building of the assessment
helps to define what the learning must produce. For the learner, assessments are
statistical instruments that normally poorly correlate with the realities of performance
on the job and they rate learners low on the "assumed" correlatives of the job
requirements (Gilbert, 1998). Thus, the next level is the preferred method of assuring
that the learning transfers to the job, but sadly, it is quite rarely performed.
Level Three - Performance (behavior)
In Kirkpatrick's original four-levels of evaluation, he names this level "behavior."
However, behavior is the action that is performed, while the final result of the
behavior is the performance. Gilbert said that performance has two aspects --
behavior being the means and its consequence being the end (1998). If we were only
worried about the behavioral aspect, then this could be done in the training
environment. However, the consequence of the behavior (performance) is what we
are really after -- can the learner now perform in the working environment?
This evaluation involves testing the student’s capabilities to perform learned skills
while on the job, rather than in the classroom. Level three evaluations can be
performed formally (testing) or informally (observation). It determines if the correct
performance is now occurring by answering the question, "Do people use their newly acquired learning on the job?"
It is important to measure performance because the primary purpose of training is to
improve results by having the students learn new skills and knowledge and then
actually applying them to the job. Learning new skills and knowledge is no good to
an organization unless the participants actually use them in their work activities.
Since level three measurements must take place after the learners have returned to
their jobs, the actual Level three measurements will typically involve someone
closely involved with the learner, such as a supervisor.
Although it takes a greater effort to collect this data than it does to collect data during
training, its value is important to the training department and organization as the data
provides insight into the transfer of learning from the classroom to the work
environment and the barriers encountered when attempting to implement the new
techniques learned in the program.
Level Four - Results
This is the final result that occurs. It measures the training program's effectiveness,
that is, "What impact has the training achieved?" These impacts can include such
items as monetary, efficiency, moral, teamwork, etc.
While it is often difficult to isolate the results of a training program, it is usually
possible to link training contributions to organizational improvements. Collecting,
organizing and analyzing level four information can be difficult, time-consuming and
more costly than the other three levels, but the results are often quite worthwhile
when viewed in the full context of its value to the organization.

As we move from level one to level four, the evaluation process becomes more difficult and time-consuming; however, it provides information of increasingly significant value. Perhaps the most frequently used type of measurement is level one, because it is the easiest to measure; however, it provides the least valuable data. Measuring results that affect the organization is considerably more difficult, and thus is conducted less frequently, yet it yields the most valuable information. Each evaluation level should be used to provide a cross-set of data for measuring the training program.
The first three levels of Kirkpatrick's evaluation (Reaction, Learning, and Performance) are largely "soft" measurements; however, the decision-makers who approve such training programs prefer results (returns or impacts). That does not mean the first three are useless; indeed, their use is in tracking problems within the learning package:
Reaction informs you how relevant the training is to the work the learners perform (it
measures how well the training requirement analysis processes worked).
Learning informs you to the degree of relevance that the training package worked to
transfer KSAs from the training material to the learners (it measures how well the
design and development processes worked).
The performance level informs you of the degree that the learning can actually be
applied to the learner's job (it measures how well the performance analysis process
worked).
Impact informs you of the "return" the organization receives from the training.
Decision-makers prefer this harder "result," although not necessarily in dollars and
cents. For example, a recent study of financial and information technology executives
found that they consider both hard and soft "returns" when it comes to customer-
centric technologies, but give more weight to non-financial metrics (soft), such as
customer satisfaction and loyalty (Hayes, 2003).
Note the difference in "information" and "returns." That is, the first three-levels give
you "information" for improving the learning package. While the fourth-level gives
you "impacts." A hard result is generally given in dollars and cents, while soft results
are more informational in nature, but instead of evaluating how well the training
worked, it evaluates the impact that training has upon the organization. There are
exceptions. For example, if the organizational vision is to provide learning
opportunities (perhaps to increase retention), then a level-two or level-three
evaluation could be used to provide a soft return.
This final measurement of the training program might be met with a more "balanced"
approach or a "balanced scorecard" (Kaplan & Norton, 2001), which looks at the
impact or return from four perspectives:
Financial: A measurement, such as an ROI, that shows a monetary return, or the
impact itself, such as how the output is affected. Financial can be either soft or hard
results.
Customer: Improving an area in which the organization differentiates itself from
competitors to attract, retain, and deepen relationships with its targeted customers.
Internal: Achieve excellence by improving such processes as supply-chain
management, production process, or support process.

Innovation and Learning: Ensuring the learning package supports a climate for
organizational change, innovation, and the growth of individuals.
Since Kirkpatrick established his original model, other theorists (for example Jack
Phillips), and indeed Kirkpatrick himself, have referred to a possible fifth level,
namely ROI (Return On Investment). In my view ROI can easily be included in
Kirkpatrick's original fourth level 'Results'. The inclusion and relevance of a fifth
level is therefore arguably only relevant if the assessment of Return On Investment
might otherwise be ignored or forgotten when referring simply to the 'Results' level.
Learning evaluation is a widely researched area. This is understandable since the
subject is fundamental to the existence and performance of education around the world, not least universities.

Level 5: Return on Investment.


How to measure: One has to calculate the real cost of training (i.e., trainer's salary, man-hours spent in training, materials, etc.) and then track the dollars and cents (or pounds or euros) generated as a result of this investment. This is not easy, but more and more companies are demanding some kind of ROI equation from consultants and their own training departments.

An ROI model for educational programs was developed by Jack Phillips, PhD, a measurement and evaluation expert. The model consists of five levels of evaluation: reaction; learning; behavior; results and impact; and ROI.
LEVEL ONE (REACTION): A reaction evaluation is performed immediately at the end of a class. This level of evaluation is often required by state boards of nursing so that contact hours can be awarded for an education session. Both class evaluations and overall program evaluations are obtained throughout the perioperative nurse fellowship program.
LEVEL TWO (LEARNING): This level of evaluation determines whether learning
has occurred during the education session. This can be accomplished by using a test, a
return demonstration, a role-play exercise, or other methods. There are two
comprehensive tests and skill demonstrations in the Inova perioperative nurse fellowship program.
LEVEL THREE (BEHAVIOR): At this level, the transference of learning to the work
setting (i.e., the OR) is evaluated. It is recommended that this evaluation be
completed after an interval of time, such as one month, three months, or six months,
to allow an individual time to use what he or she has learned. After five months in the perioperative nurse fellowship program, a clinical performance evaluation of each fellow is completed collaboratively by the preceptor, instructor, and fellow.
LEVEL FOUR (RESULTS AND IMPACT): This level measures the results of the
training in terms of organizational improvement. One of the outcomes of the perioperative nurse fellowship program was an increase in recruitment and retention
of nurses for the Inova ORs. Data were collected to determine how many nurses
started the program, how many successfully completed the program, and how many
remained at the end of their contract.

LEVEL FIVE (ROI): This evaluation compares the costs of the program to its
benefits. Dr Phillips recommends that all levels of evaluation be completed, even if
the interest is solely in determining ROI, because if the result of the analysis shows a
negative ROI, the next step is to determine at which level the breakdown occurred.
The formula used for calculating ROI is as follows:

ROI (%) = (net program benefits / program costs) x 100

Net benefits are program benefits minus program costs. The net benefits are then divided by the program costs, and the ratio is expressed as a percentage by multiplying the result by one hundred.
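A worked illustration of this arithmetic, with hypothetical figures chosen only to show the calculation (Python):

    def roi_percent(program_benefits: float, program_costs: float) -> float:
        """Net program benefits divided by program costs, as a percentage."""
        net_benefits = program_benefits - program_costs
        return (net_benefits / program_costs) * 100

    # Hypothetical example: a program costing $40,000 whose measured
    # benefits total $100,000.
    print(roi_percent(100_000, 40_000))  # -> 150.0, i.e., an ROI of 150%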
THE ORGANIZATIONAL ELEMENTS MODEL:
Roger Kaufman's Organizational Elements Model (OEM) consists of five parts: inputs, processes, products, outputs, and outcomes. Every organization, whether it is an educational or business setting, is made up of these five elements. The OEM is a framework for organizations to relate organizational efforts, organizational results, and societal payoffs or consequences (Cost-Consequence Analysis, 90). The OEM may also be divided into two different levels: the first level shows "What Is" and the second level shows "What Should Be".
The Five Parts of the OEM are:
INPUTS: Inputs are the things that every organization uses. These are the ingredients
or starting conditions used by the organizations. In a business, these are the raw
material, existing facilities, available resources, human capital, buildings, equipment,
existing objectives, policies, procedures, and finances. In a school, these are the
learners, teachers, schools, classrooms, media resources, available learning material,
budgets, board members, administrators, and credentials.
PROCESSES: Processes are the things that every organization does. In other words, these are the methods, means, activities, or programs that an organization follows.
These are the "how-to" that are used to turn the inputs into accomplishments. In a
business, these are the production methods, applied skills, robotics, quality
management, training and development, personnel at work, and machinery. In a
school, this includes teaching learners, developing material, scheduling, and in-
service training of teachers.
PRODUCTS: Products are the things that every organization accomplishes. This is
what the building blocks accomplish by using the inputs and the processes together.
In a business, these are the fenders, headache tablets, validated training material,
labels, completed bills, reports, disk drives, reading glasses, drinking glasses, and
television. In a school, these include completed courses, instructional videodisks,
filed attendance cards, lost football games, and approved strategic plans.
OUTPUTS: Outputs are the things that every organization delivers. These are
finished products that are ready to be delivered to society for use by the people.
OUTCOMES: Outcomes are the external consequences for the society. These
consequences can be in the form of a payoff or a consequence. These effects can affect the entire community or society. In a business, these are the repeat customers,
the safe automobile, or the helpful computer systems. In a school, these are the
satisfied parents, the successful business workers, or the professional worker.
How does the OEM Relate to a Needs Assessment?

A major part of the planning is the identification of a need (Strategic Planning
in Education). A need is the gap, which exists between "What Is" and "What Should
Be". The OEM is related to a Needs Assessment by examining the two levels: "What
Is" and "What Should Be" (Needs Assessment)
Need = What Should Be − What Is
The Needs Assessment not only helps identify the needs, but also allows for prioritizing and selecting the most important aspects of the OEM that need restructuring, changing, or improving.
CIPP MODEL:
Daniel Stufflebeam developed this model. This model takes the following into
consideration:
CONTEXT EVALUATION: Needs analysis, this assists in forming goals.
INPUT EVALUATION: Policies, budgets, schedules, proposals and procedures aids
in program planning.
PROCESS EVALUATION: Reaction sheets, rating scales and analysis of existing
records
PRODUCT EVALUATION: Measures and interprets the attainment of objectives.
CIRO APPROACH:
Warr, Bird and Rackham developed this approach. It gives importance to evaluation in terms of context, input, reaction and outcome.
CONTEXT EVALUATION: Collection of information about performance deficiencies and setting of objectives at three levels: immediate, intermediate, and ultimate.
Immediate objective: The new knowledge, skills, and attitudes required to reach the intermediate objective.
Intermediate objective: The change in employees' work behavior necessary for the ultimate objective.
Ultimate objective: The particular deficiency in the organization that will be eliminated.
INPUT EVALUATION: The following questions are relevant during this evaluation:
What are the relative merits of the different HRD methods?
Is it feasible for an outside organization to be more efficient at conducting the
program?
Should it be developed with the internal resources?
Should the line managers be involved?
How much time is available for HRD?
What results were achieved when a similar program was conducted in the past?
REACTION EVALUATION: This includes subjective reports of the participants
about the whole program.
OUTCOME EVALUATION: This should follow the steps given below:
Defining the training objectives.
Selecting and constructing measures of those objectives.
Taking the measurements at the appropriate time.
Assessing the results and using them to improve future programs.
A New Model?

Not everyone agrees that the Kirkpatrick model should be used for evaluations.
Elwood Holton, writing in HRD Quarterly (1996, 7:1, pp. 5-21), goes so far as to say it isn't even a model, but rather merely a taxonomy. The biggest problem, he says, is in
trying to use the four levels to determine where a problem exists with a given
educational program.
Holton proposes a new model for evaluation of training that, unlike Kirkpatrick's
four-level system, will "account for the impact of the primary intervening variables
such as motivation to learn, trainability, job attitudes, personal characteristics, and
transfer of training conditions." The important differences between this and the
Kirkpatrick system are:
Absence of reactions (level one) as a primary outcome.
Individual performance is used instead of behavior.
The inclusion of primary and secondary influences on outcomes.
Three primary learning outcome measures are proposed:
Learning: achievement of the learning outcomes desired in the intervention.
Individual performance: change in individual performance as a result of the learning
being applied on the job.
Organizational results: consequences of the change in individual performance.
And, according to the model, the three primary influences on learning are:
Trainee reactions
Motivation to learn
Ability
This model has been proposed but needs to be tested, Holton says. A simpler model may emerge from such testing; for example, perhaps measuring only primary intervening variables will be sufficient, or perhaps only a few key variables within each category should be measured. [For a more detailed outline, see Holton (1996).]

TYPES OF EVALUATION INSTRUMENTS:


Questionnaire
Attitude Survey
Tests
Interviews
Observations
Performance records
QUESTIONNAIRE: A questionnaire can be prepared keeping in mind the objectives of the program and administered to the participants to record their responses about the effectiveness of the program. Responses can be taken before the program to understand participants' expectations, and again after the program to know to what extent those expectations have been met.
ATTITUDE SURVEY: Attitudes of the people can be measured before and after the program to evaluate the impact of the training.
TEST: Different tests can be developed to measure the training's impact. This is most beneficial in the case of technical and vocational training programs.

INTERVIEWS: Evaluation can also be made through the interview method. After the completion of the program, the participant can be interviewed to judge his or her level of learning.
OBSERVATION: This is a very good technique for judging the training's impact. After skill-upgradation training, people can be judged by their supervisors at work to evaluate the training's effectiveness.
PERFORMANCE RECORDS: The performance appraisal system is a tool available in most organizations to measure performance. By analyzing the performance records before and after the program, effectiveness can be evaluated through comparison.
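A minimal sketch of the before-and-after comparison that the questionnaire, test, and performance-record instruments all rely on, assuming numeric ratings on a common scale; the sample scores are hypothetical (Python):

    from statistics import mean

    # Hypothetical pre- and post-training ratings for the same participants,
    # e.g., self-rated communication skill on a 1-5 scale.
    pre = [2, 3, 2, 4, 3, 2, 3, 3]
    post = [4, 4, 3, 5, 4, 3, 4, 4]

    # Per-participant gain, then simple summary statistics.
    gains = [after - before for before, after in zip(pre, post)]
    print(f"Mean pre-training score:  {mean(pre):.2f}")
    print(f"Mean post-training score: {mean(post):.2f}")
    print(f"Mean gain:                {mean(gains):.2f}")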
REPORTING EVALUATION RESULTS:
Evaluation reports should be prepared using a consistent format. The content of the report should include a factual statement, perhaps under the following headings (an illustrative template sketch follows the list):
Name of the training program
Topics covered in the training program
Level or class of training
Target population
Training methods used
Duration, dates, timing and venue of training
Type of training (classroom or self instructional)
Number of instructors
Kind of results and measurements taken
Evaluation score (against pre-specified criteria)
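As a purely illustrative sketch, these headings could be captured as a reusable report template; the field names and the sample values below are hypothetical, not a prescribed format (Python):

    from dataclasses import dataclass

    @dataclass
    class TrainingEvaluationReport:
        """Illustrative container for the report headings listed above."""
        program_name: str
        topics_covered: list[str]
        level_or_class: str
        target_population: str
        training_methods: list[str]
        duration_dates_venue: str
        training_type: str            # classroom or self-instructional
        number_of_instructors: int
        measurements_taken: list[str]
        evaluation_score: float       # against pre-specified criteria

    # Hypothetical example drawing loosely on the study described here.
    report = TrainingEvaluationReport(
        program_name="Communication Skills and Interpersonal Programme",
        topics_covered=["communication", "interpersonal skills"],
        level_or_class="Executive",
        target_population="16 executive-level participants",
        training_methods=["classroom sessions", "group exercises"],
        duration_dates_venue="dates, timing, and venue as scheduled",
        training_type="classroom",
        number_of_instructors=1,
        measurements_taken=["participant questionnaire",
                            "supervisor questionnaire"],
        evaluation_score=4.2,
    )
    print(report.program_name)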
Although the Kirkpatrick model has served trainers well in terms of evaluating
whether learners liked their instruction, whether they learned something from it, and
whether it had some effect for the company, evaluation experts are now pointing out
that the four-level approach has weaknesses. Mainly, it can't be used to determine the
cost-benefit ratio of training (ROI), and it can't be used diagnostically, i.e., when a
training program doesn't deliver the expected results.
When looking at ROI and cost benefit analysis, it is important to remember that:
Improving efficiency means achieving the same results with lower costs.
Improving effectiveness means achieving better results with the same costs.
It is possible to get better results with lower costs, and this is called improved
productivity.
In order to calculate ROI, evaluation experts such as Jack Phillips recommend the addition of a fifth level to Kirkpatrick's model for some programs.
This requires collecting level 4 data, converting the results to monetary values, and
then comparing those results with the cost of the training program. Here is Phillips'
basic formula for calculating ROI:
Collect level-4 evaluation data. Ask: Did on-the-job application produce measurable
results?
Isolate the effects of training from other factors that may have contributed to the
results.

Convert the results to monetary benefits. Phillips recommends dividing training
results into hard data and soft data. He says hard data are the traditional measures of
organizational performance because they're objective, easy to measure, and easy to
convert to monetary values. They include output (units produced, items assembled,
tasks completed); quality (scrap, waste, rework); time (equipment downtime,
employee overtime, time to complete projects); and cost (overhead, accident costs,
sales expenses). Conversely, soft data includes such things as work habits (tardiness,
absenteeism); work climate (grievances, job satisfaction); attitudes (loyalty,
perceptions); and initiative (implementation of new ideas, number of employee
suggestions).
Total the costs of training.
Compare the monetary benefits with the costs. The non-monetary benefits can be
presented as additional - though intangible - evidence of the program's success.
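The five steps can be sketched as a small calculation; the isolation factor, unit values, and costs below are all hypothetical assumptions, included only to show how the pieces combine (Python):

    # Steps 1-2: level-4 results, with an estimate of how much of the
    # improvement is attributable to training (hypothetical isolation factor).
    units_gained_per_year = 1_200   # hypothetical hard-data output improvement
    isolation_factor = 0.6          # assume 60% attributable to training

    # Step 3: convert the hard data to money (hypothetical value per unit).
    value_per_unit = 50.0
    monetary_benefits = units_gained_per_year * isolation_factor * value_per_unit

    # Step 4: total the costs of training (hypothetical).
    program_costs = 25_000.0

    # Step 5: compare the monetary benefits with the costs.
    roi = (monetary_benefits - program_costs) / program_costs * 100
    print(f"Benefits: ${monetary_benefits:,.0f}; ROI: {roi:.0f}%")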
It takes a lot of time and effort to conduct evaluations at this level, and not every
program needs this much attention. But when it's important to know the real value of
a program, ROI measurement can go a long way to justify company efforts. For
example, Magnavox Electronics Systems Company in Torrance, CA, maintains an
18-week literacy program covering verbal and math skills for employees. Here are the
results of a five-level evaluation the company conducted:
Level 1: reaction was measured by post-course surveys.
Level 2: learning was measured with the Test of Adult Basic Education.
Level 3: changes in behavior were measured by daily efficiency ratings.
Level 4: business results were measured through improvements in productivity and
reductions in scrap and rework.
Level 5: ROI was calculated by converting productivity and quality improvements to
monetary values. The resulting ROI was 741%.
According to Phillips, the purposes and uses of evaluation are to improve the Human
Resources Development process and to decide whether or not to continue this
process. He states that evaluation should:
Determine whether or not a program is accomplishing its objectives
Identify the strengths and weaknesses in a Human Resources Program.
Determine the cost/benefit ratio of an HRD program
Decide who should participate in future programs
Identify which participants benefited most or least from the program
Reinforce major points made to the participants
Gather data to assist in marketing future programs
Determine whether the program was appropriate (Phillips, 1983)
Current Practices in Evaluation of Training

Evaluation becomes more important when one considers that while American industries, for example, annually spend up to $100 billion on training and development, not more than "10 per cent of these expenditures actually result in transfer to the job" (Baldwin & Ford, 1988, p. 63). This can be explained by reports that indicate that not all training programs are consistently evaluated (Carnevale & Schulz, 1990). The American Society for Training and Development (ASTD) found
that 45 percent of surveyed organizations only gauged trainees’ reactions to courses
(Bassi & van Buren, 1999). Overall, 93% of training courses are evaluated at Level
One, 52% of the courses are evaluated at Level Two, 31% of the courses are
evaluated at Level Three and 28% of the courses are evaluated at Level Four. These
data clearly represent a bias in the area of evaluation for simple and superficial
analysis.
This situation does not seem to be very different in Europe, as evident in two
European Commission projects that have recently collected data exploring evaluation
practices in Europe. The first one is the Promoting Added Value through Evaluation
(PAVE) project, which was funded under the European Commission’s Leonardo da
Vinci program in 1999 (Donoghue, 1999). The study examined a sample of
organizations (small, medium, and large), which had signaled some commitment to
training and evaluation by embarking on the UK’s Investors in People (IiP) standard
(Sadler-Smith et al., 1999). Analysis of the responses to surveys by these
organizations suggested that formative and summative evaluations were not widely
used. On the other hand, immediate and context (needs analysis) evaluations were more
widely used. In the majority of the cases, the responsibility for evaluation was that of
managers and the most frequently used methods were informal feedback and
questionnaires. The majority of respondents claimed to assess the impact on
employee performance (the ‘learning’ level). Less than one-third of the respondents
claimed to assess the impact of training on organization (the ‘results’ level).
Operational reasons for evaluating training were cited more frequently than strategic
ones. However, information derived from evaluations was used mostly for feedback
to individuals, less to revise the training process, and rarely for return on investment
decisions. Also, there were some statistically significant effects of organizational size
on evaluation practice. Small firms are constrained in the extent to which they can
evaluate their training by the internal resources of the firm. Managers are probably
responsible for all aspects of training (Sadler-Smith et al., 1999).

The second study was conducted under the Advanced Design Approaches for
Personalized Training-Interactive Tools (ADAPTIT) project. ADAPTIT is a European project within the Information Society Technologies programme that provides design methods and tools to guide a training designer according to the latest cognitive science and standardization principles (Eseryel & Spector, 2000). In an effort to explore current approaches to instructional design, a series of surveys was conducted in a variety of sectors, including transport, education, business, and industry in Europe. The participants were asked about the activities that take place, including the
interim products produced during the evaluation process, such as a list of revisions or
an evaluation plan. In general, systematic and planned evaluation was not found in
practice nor was the distinction between formative and summative evaluation.
Formative evaluation does not seem to take place explicitly while summative
evaluation is not fully carried out. The most common activities of evaluation seem to
be the evaluation of student performance (i.e., assessment) and there is not enough
evidence that evaluation results of any type are used to revise the training design
(Eseryel et al., 2001). It is important to note here that the majority of the participants
expressed a need for evaluation software to support their practice
Using Computers to Automate the Evaluation Process
For evaluations to have a substantive and pervasive impact on the development of
training programs, internal resources and personnel such as training designers,
trainers, training managers, and chief personnel will need to become increasingly
involved as program evaluators. While using external evaluation specialists has
validity advantages, time and budget constraints make this option highly
impractical in most cases; the mentality that evaluation is strictly the province
of experts often results in there being no evaluation at all. These considerations
make a case for the convenience and cost-effectiveness of internal evaluations.
However, the obvious concern is whether the internal team possesses the expertise
required to conduct the evaluation and, if so, how the bias of internal evaluators
can be minimized. Therefore, just as automated expert systems are being developed
to guide the design of instructional programs (Spector et al., 1993), such systems
might also be created for instructional evaluations. Training designers’ lack of
expertise in evaluation, pressures for increased productivity, and the need to
standardize the evaluation process to ensure the effectiveness of training
products are some of the factors that may motivate organizations to support
evaluation with technology. Such systems might also help minimize the potential
bias of internal evaluators.
Ross & Morrison (1997) suggest two categories of functions that automated
evaluation systems are likely to incorporate: the first is automation of the
planning process via expert guidance; the second is automation of the data
collection process.
For automated planning through expert guidance, an operational or procedural model
can be used during the planning stages to assist the evaluator in designing an
appropriate evaluation. The expert program solicits key information from the
evaluator and offers recommendations regarding possible strategies. Input
information categories for the expert system include:
Purpose of evaluation (formative or summative)
Type of evaluation objectives (cognitive, affective, behavioral, impact)
Level of evaluation (reaction, learning, behavior, organizational impact)
Type of instructional objectives (declarative knowledge, procedural learning,
attitudes)
Type of instructional delivery (classroom-based, technology-based, mixed)
Size and type of trainee groups
Based on this input, an expert system can provide guidance on possible evaluation
design orientations, appropriate collection methods, data analysis techniques,
reporting formats, and dissemination strategies. Such expert guidance can take the
form of flexible general strategies and guidelines (a weak advising approach).
Given the complexities associated with the nature of evaluation, a weak advising
approach such as this is more appropriate than a strong approach that would
replace the human decision maker in the process. Indeed, weak advising systems
that supplement rather than replace human expertise have generally been more
successful when complex procedures and processes are involved (Spector et al.,
1993).
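To make the idea of a weak advising system concrete, the sketch below shows one
minimal way such guidance could be encoded as simple rules that suggest rather
than decide. This is an illustrative assumption, not part of any system cited
above; the rule content and recommendation wording are hypothetical.

# A minimal sketch of a "weak advising" evaluation planner, assuming
# hypothetical rule content; it suggests options rather than deciding.

def advise(purpose: str, level: str, delivery: str) -> list[str]:
    """Return non-binding suggestions for an evaluation plan."""
    suggestions = []
    if purpose == "formative":
        suggestions.append("Collect feedback early and revise materials iteratively.")
    elif purpose == "summative":
        suggestions.append("Plan pre/post measures to judge overall program worth.")
    if level == "reaction":
        suggestions.append("Use end-of-course satisfaction questionnaires.")
    elif level == "behavior":
        suggestions.append("Schedule follow-up surveys of trainees and supervisors.")
    elif level == "results":
        suggestions.append("Link training outcomes to organizational indicators.")
    if delivery == "technology-based":
        suggestions.append("Log learner interactions for automated data collection.")
    # The human evaluator reviews and adapts these suggestions;
    # the system never makes the final decision.
    return suggestions

if __name__ == "__main__":
    for s in advise("formative", "behavior", "classroom-based"):
        print("-", s)

Because the output is a list of suggestions rather than a single prescribed plan,
the final judgment stays with the human evaluator, which is exactly what
distinguishes a weak advising approach from a strong one.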
Such a system may also embed automated data collection functions for increased
efficiency. The functionality of automated data collection systems may include
intelligent test scoring of procedural and declarative knowledge, automation of
individual profile interpretations, and intelligent advice during the process of
learning (Bunderson et al., 1989). These applications provide an increased ability
to diagnose the strengths and weaknesses of a training program in producing the
desired outcomes. For formative evaluation in particular, this means that the
training program can be dynamically and continuously improved as it is being
designed.
Automated evaluation planning and automated data collection systems embedded in a
generic instructional design tool may offer an efficient, integrated solution for
training organizations. In such a system it is also possible to provide advice on
revising the training materials based on evaluation feedback: evaluation data,
individual performance data, and revision items can be tagged to the learning
objects in a training program. The ADAPTIT instructional design tool is one system
that provides such an integrated solution for training organizations (Eseryel et
al., 2001).
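As a rough illustration of what tagging evaluation data to learning objects could
look like, the sketch below models a learning object that carries its own
evaluation feedback, performance data and revision items. The structure is an
assumption made purely for illustration; it is not the ADAPTIT data model.

# A minimal sketch (assumed structure, not the ADAPTIT data model) of
# learning objects that carry their own evaluation and revision records.
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    title: str
    evaluation_data: list[str] = field(default_factory=list)     # e.g. rating summaries
    performance_data: list[float] = field(default_factory=list)  # e.g. test scores
    revision_items: list[str] = field(default_factory=list)      # pending changes

    def flag_revision(self, note: str) -> None:
        """Record a revision item derived from evaluation feedback."""
        self.revision_items.append(note)

lo = LearningObject("Active listening module")
lo.evaluation_data.append("Level 1 mean rating: 4.2/5")
lo.performance_data.extend([0.78, 0.85, 0.64])
lo.flag_revision("Add more practice dialogues; post-test scores below target.")
print(lo.revision_items)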
Chapter 2 - Company Profile

Values of the Company
Edge
Creates an inspiring vision of the future, makes tough strategic decisions, and
acts with entrepreneurial spirit.
1. Unlimited Thinking - thinks beyond existing limitations and sets the highest
aspirations; creates an inspiring vision to rally his/her people towards a better
future; thinks outside the box; develops new and innovative ways of doing things
2. Entrepreneurial Spirit - displays a strong competitive drive; strives to be the
best in the market
3. Self Determination - has an independent point of view; shows a strong
competitive drive; strives to be the best in the market
4. Strategic Judgment and Risk Management - sees the big picture and masters
complexity; develops reliable and robust strategies and implementation plans;
takes a long-term perspective; proactively makes tough, strategic, high-impact
decisions; acts courageously; takes and manages calculable risks
Energy
Is fascinated by the work and initiates continuous change and learning.
1. Initiative - is dedicated to his/her own professional mission; loves the work
and always goes the extra mile; seeks fast growth, high performance and innovative
business opportunities; acts proactively to address challenges and opportunities
2. Change Orientation - thrives on action and lives continuous change; identifies
opportunities in changing situations
3. Learning - actively searches for learning opportunities about the business,
people and customers; includes and explores other perspectives and experiences; is
able to learn beyond the day-to-day realm of responsibility; is aware of own
strengths and weaknesses
4. Business Competence - has deep knowledge of the business (portfolio, sector,
competitor know-how) and a strong strategic orientation; has a deep understanding
of business processes (sales logistics, manufacturing…) and applies management
tools effectively
Energize
Guides and motivates the employees and develops the next generation of leaders.
1. Communication Skills - communicates clear direction, intentions and objectives;
is self-aware and understands the impact of his/her own communication; convinces
and wins the support of others; exercises influence in an ethically responsible
manner
2. Network Built on Trust - initiates and maintains partnerships relevant for
successful business; is able to facilitate the development of positive
relationships among people who work together; builds trust and cooperates with
internal and external partners to get the job done effectively
3. Coaching and Mentoring - regularly gives employees clear, objective and open
feedback and perspective; enables employees to build the skills to achieve the set
targets; develops next-generation leaders, identifies talent early and creates
development opportunities based on individual needs and potential; is culturally
sensitive and understands people's emotions, feelings, concerns and underlying
issues
4. Team Player - creates authentic excitement for work and a challenging climate
that fosters teamwork, and motivates people to keep moving; sets clear, inspiring
goals (direction) that require action from everyone in the team; understands which
roles are needed in an effective team and assigns those roles according to
individuals' characteristics (strengths and limitations); fosters the opinions of
individuals with different backgrounds; applies time-efficient processes; gets
issues off the agenda as quickly as possible; defines individual and collective
responsibility for achieving specified goals and applies consequences equally,
immediately and in line with specific performance results
Execute
Gets the right things done with the highest impact and quality.
1. Analytics - picks up relevant information; analyzes complex situations based on
facts and finds solutions; breaks down complexity into concrete tasks and
activities
2. Decision-Making - takes decisions within the framework of a given strategy;
knows when to stop assessing and make the decision; makes tough yes-or-no
decisions and translates them into actions; takes personal responsibility for
decisions; pushes things through to the end and manages conflict effectively
3. Result Orientation - aligns activities and resources to achieve goals under
changing parameters; gets things done at the highest level of quality and impact
and systematically monitors achievement/progress of execution
Passion
Has a heartfelt, deep and authentic excitement about our customers, the profession
and Siemens.
1. Customer Focus - is driven by a passion to fulfill the customer's real needs;
takes on personal responsibility to advocate the customer's issues; makes
himself/herself fully available to the customer
2. Professional Ethics - applies the highest ethical and professional standards;
projects personal values, especially honesty and integrity, into everything he/she
does
3. Siemens Values - lives and breathes the Siemens values; is a Siemens ambassador
and promotes Siemens, even in the most difficult times; has a heartfelt, deep and
authentic excitement about his/her mission for Siemens; is modest (“both feet on
the ground”), combined with a strong professional will
Business Philosophy

“To set the benchmark by being the ‘Best in Class’ in our fields and to create
value for our customers, wealth for our stakeholders and a future for our
employees, while giving back graciously to society a piece of our success.”
Chapter 3 - Methodology

Objectives:
To evaluate the effectiveness of the training programme at the end of three months.
To check whether the third- and fourth-level effects of the Kirkpatrick training
model were obtained.
Hypothesis: There is no difference in the employees' behavior in terms of
communication skills before and after the training programme.

Variables:
Independent Variable: Communication and Interpersonal Skills training programme
Dependent Variables: Clarity in communication, listening skills, communication
barriers, self-image, and professional and personal growth
Operational Definitions:

Clarity in communication - Clarity in communication and interpersonal skills in
terms of sending the message across to the receiver without any ambiguity.

Listening skill - In any communication, listening with understanding is more
important than mere hearing; one should understand the incoming message clearly
and then reciprocate.

Personal and professional growth - An overall enhancement of one's personality:
contributing more quality output on the professional front as well as having
better interpersonal relations on the personal front.

Self-image - Looking for something likeable in oneself, focusing on being the
person you want to be, and acknowledging and celebrating your achievements.
Methodology:
The sample for the study comprised the participants of the Communication and
Interpersonal Skills behavioral training programme: 16 participants, together with
their respective supervisors. All participants belong to the executive grade and
come from different departments of Siemens Ltd.
Description of the tool used:

The questionnaire method was used to collect the data. A Likert-type follow-up
questionnaire was designed to obtain the responses of the participants as well as
their supervisors. The rating scale was:
5 - To a large extent
4 - To some extent
3 - Still trying
2 - No change
1 - Made it worse
The follow-up questionnaire was designed on the basis of the objectives and design
of the programme adopted by the faculty (trainer). The participants' questionnaire
contains 6 questions and the supervisors' questionnaire contains 4 questions
related to the communication training programme.
Need and significance of the study: Training is imparted after an extensive
training needs analysis, which comprises the identification of key development
areas of individual employees pertaining to their job roles; these areas affect
the individual's output and organizational commitment. Training needs are the
result of a review of individual employees' performance appraisals, so effective
evaluation is required to check the above criteria.
Effective training evaluation can help the organization enhance its training
programmes, making them more appealing and leading to better employee output as
the end result.
Tables showing the demographic variables:

Table 3.1 - Table 3.1 shows the total sample size: the participants who attended
the training programme and the supervisors included in the sample.

Participants    16
Supervisors     16
Total           32
Table 3.2 - Table 3.2 shows the sex-wise distribution of the sample (participants).

Sex       Total
Male      15
Female    01
Total     16
Table 3.3 - Table 3.3 shows participants from different departments.

Department                       Male   Female   Total
Transportation Systems           1      -        1
Medical Solutions                1      -        1
Power Transmission Division      1      -        1
Automation & Drives              11     1        12
Finance & Accounts               1      -        1
Industrial Solutions Services    1      -        1
Total                                            17
Table 3.4 - Table 3.4 shows the experience-wise distribution of the sample.

Years of Experience      Total number of employees
Below 5 years            9
Between 5-15 years       3
Above 15 years           3
Table 3.5 - Table 3.5 shows the age-wise distribution of the sample.

Age           Total number of employees
20-30         5
31-40         9
41-50         2
51 & above    1
Table 3.6 - Table 3.6 shows supervisors from different departments.

Department                       Total number of supervisors
Transportation Systems           1
Medical Solutions                10
Power Transmission Division      1
Automation & Drives              1
Finance & Accounts               1
Industrial Solutions Services    1
Total                            15
Table 3.7 - Table 3.7 shows the experience-wise distribution of the supervisors.

Years of Experience      Total number of supervisors
Below 5 years            6
Between 5-15 years       4
Above 15 years           5
Total                    15
Chapter 4 - Results and Analysis

Effective communication is a basic and essential skill required in any work
setting. Communication refers to the sharing of ideas, facts, opinions and
information, and understanding them effectively across levels within and outside
the organization.
To enhance this skill among executive-level employees, a behavioral training
programme on “Communication and Interpersonal Skills” was organized on 6th-7th
September ’06 by the western region training and development group of Siemens Ltd.
There were 16 participants from different departments who attended the training
programme. To observe the effects of the training programme, a post-programme
evaluation was conducted at the end of three months.
The procedure adopted for the post-training evaluation was based on the
Kirkpatrick training evaluation model, a four-level evaluation model. The four
levels of evaluation are:
Level 1: Reaction - a measure of satisfaction
Level 2: Learning - a measure of learning
Level 3: Behavior - a measure of behavior changes
Level 4: Results - a measure of results
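As a compact restatement of how this study maps onto the four levels, the sketch
below encodes each level together with the instrument and timing used here. It is
purely illustrative: the structure and field names are the study design expressed
as data, not an established library or API.

# Minimal sketch: the four Kirkpatrick levels as applied in this study.
# The instrument/timing values restate the study design; the structure
# itself is an illustrative assumption.
from collections import namedtuple

Level = namedtuple("Level", ["name", "measures", "instrument", "timing"])

EVALUATION_PLAN = [
    Level("Reaction", "satisfaction", "participant feedback form",
          "immediately after training"),
    Level("Learning", "learning", "participant feedback form",
          "immediately after training"),
    Level("Behavior", "behavior change", "participant follow-up questionnaire",
          "after three months"),
    Level("Results", "results", "supervisor follow-up questionnaire",
          "after three months"),
]

for level in EVALUATION_PLAN:
    print(f"{level.name}: measures {level.measures} "
          f"via {level.instrument} ({level.timing})")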
The reaction and learning effects were measured by obtaining feedback through a
questionnaire immediately after the training from all the participants. The
immediate response obtained was positive: the content of the programme, the
techniques imparted, the handouts and the other facilities were rated highly by
all the participants. Some of the important learnings reported by the participants
are as follows:
Importance and proper use of non-verbal communication
Transactional Analysis
NLP (Neuro-Linguistic Programming)
Do's and Don'ts in communication
Cultural etiquette, etc.
These responses suggest a strong reaction and learning effect, in terms of the
Kirkpatrick model, after the training programme.
To measure the behavior effect, a second round of feedback was obtained from all
the participants at the end of three months; to measure the results effect,
feedback was obtained from all the supervisors.
Table 4.1 - Table 4.1 shows the responses of all the participants in percentage
form.

Question (improvement in ...)                    1    2    3        4        5
Clarity in your communication skills             -    -    -        62.5%    37.5%
Importance for professional and personal growth  -    -    6.25%    37.5%    56.25%
Importance of listening skill                    -    -    6.25%    37.5%    56.25%
Building up a positive self-image                -    -    -        50%      50%
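To make the arithmetic behind such percentage tables explicit, the short sketch
below tallies a list of 1-5 ratings into a percentage distribution. The Q1 ratings
are copied from the annexure (Table 5.1); the function itself is a generic
illustration, not a tool that was used in the study.

# Minimal sketch: tally Likert ratings (1-5) into a percentage distribution.
from collections import Counter

SCALE = {5: "To a large extent", 4: "To some extent",
         3: "Still trying", 2: "No change", 1: "Made it worse"}

def rating_percentages(ratings: list[int]) -> dict[int, float]:
    """Percentage of responses at each rating point, 1 through 5."""
    counts = Counter(ratings)
    n = len(ratings)
    return {point: 100 * counts.get(point, 0) / n for point in SCALE}

# Q1 ratings of the 16 participants, copied from the annexure (Table 5.1).
q1 = [5, 5, 4, 4, 4, 4, 4, 4, 5, 5, 4, 4, 4, 5, 4, 5]
for point, pct in sorted(rating_percentages(q1).items()):
    if pct:
        print(f"{point} ({SCALE[point]}): {pct:.2f}%")  # 4: 62.50%, 5: 37.50%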
The results obtained at the end of three months are as follows:
Graph 4.1 - Graph shows the ratings of all 16 participants on Question 1:
Improvement in bringing clarity in your communication skills.
[Bar chart: participants' ratings (1-5) for Q1, response in percentage]
Improvement in clarity means conveying the correct message without any ambiguity.
Here, 10 out of 16 participants rated 4, i.e. 62.5% reported improvement to some
extent, and 6 participants rated 5, i.e. 37.5% reported improvement in clarity to
a large extent.
These results suggest that the overall improvement in terms of clarity was largely
to some extent. No participant reported that there was no change in their behavior
before and after the training programme, or that they were still trying to master
the techniques learned. This suggests that all the participants have acquired, and
are able to put into practice, techniques to improve the clarity of their
communication to some or a large extent.
Graph 4.2 - Graph shows the responses of all 16 participants on Question 2:
Improvement in understanding the importance for professional as well as personal
growth.
[Bar chart: participants' ratings (1-5) for Q2, response in percentage]
Improvement in professional and personal growth refers to an overall enhancement
that directly reflects in professional and personal development: contributing more
quality output on the professional front and having better interpersonal relations
on the personal front.
Here, 9 out of 16 participants, i.e. 56.25%, rated 5, reporting improvement to a
large extent, while 6 participants, i.e. 37.5%, rated 4, reporting improvement to
some extent. Only 1 participant rated 3, meaning that this individual is still
trying to improve his overall personality using the techniques imparted in the
training.
The above results suggest that the majority of the participants have benefited
from the training programme and that there is, overall, a large degree of personal
satisfaction among the participants.
Graph 4.3 - Graph shows the ratings of all participants on Question 3:
Improvement in understanding the importance of effective listening.
[Bar chart: participants' ratings (1-5) for Q3, response in percentage]
Improvement in understanding the importance of effective listening means that,
when engaged in any form of communication, it is important to listen to and
understand the conveyed message carefully; a good communicator listens more than
he talks, rather than merely hearing.
As seen in the graph, 9 out of 16 participants, i.e. 56.25%, rated 5, indicating a
high level of improvement in their listening skills. 6 participants, i.e. 37.5%,
reported improvement to some extent, and again one person reported that he is
still trying to improve his listening skills.
The overall results show that the participants have benefited from the training
programme.
Graph 4.4 - Graph shows the ratings of all the participants in percentage form for
Question 4: Improvement in building up a positive self-image.
[Bar chart: participants' ratings (1-5) for Q4, response in percentage]
Improvement in building up a positive self-image means looking for something
likeable in oneself, focusing on being the person you want to be, and
acknowledging and celebrating your achievements.
Here, 50% of the participants felt that the training programme helped them improve
their self-image to some extent, whereas the other 50% felt that it helped them
build a positive self-image to a large extent.
Graph 4.5 - Graph shows the responses of all participants on Question 5: whether
others have noticed an explicit change in their behavior.
[Bar chart: counts of Yes / No / Partial responses for Q5]
The participants' responses to Question 5, i.e. whether others have noticed an
explicit change in their behavior, are broadly categorized into Yes, No and
Partial. 9 out of 16 reported Yes, 5 reported No, and 2 reported that partial
improvements were noticed by family and colleagues. Looking at the finer responses
under Yes, people reported that they were complimented for their improved
communication and rapport-building skills, and that their families have started
sharing their problems more openly and clearly with them.
Overall, it is seen that behavioral changes have taken place ranging from some to
a large extent, whereas some participants are still trying to improve their
behavior. This indicates that Kirkpatrick's third level, i.e. the behavior-level
effect, has occurred moderately.
To check the final level, i.e. Results, responses from the respective supervisors
were collected. The responses obtained are as follows:

Table 4.2 - Table shows the responses of all supervisors in percentage form.

Question (improvement in ...)                    1    2        3        4        5
Clarity in communication skills                  -    6.25%    18.75%   68.75%   6.25%
Importance of listening skill                    -    12.5%    -        75%      12.5%
Getting along with others (rapport building)     -    6.25%    25%      31.25%   37.5%
Graph 4.6 - Graph shows the responses of all the supervisors in percentage form
for Question 1.
[Bar chart: supervisors' ratings (1-5) for Q1, response in percentage]
Question 1 concerns improvement in conveying messages correctly, without any
ambiguity, while communicating with colleagues and higher officials.
Here, 68.75% of the supervisors reported that the participants have improved their
clarity to some extent (rating 4), while 18.75% reported that the participants are
honestly trying to improve the skill using the techniques learned in the training
programme. 6.25% of supervisors each reported a large-extent change or no change
at all before and after the training programme.
Overall, the obtained results show that the supervisors are more or less satisfied
with the employees' improvement.
Graph 4.7 - Graph shows the responses of all the supervisors in percentage form
for Question 2.
[Bar chart: supervisors' ratings (1-5) for Q2, response in percentage]
Question 2 concerns whether the person listens to instructions carefully,
understands them accurately and then acts on them.
Here, 75% of the supervisors reported that they have noticed improvement in their
employees to some extent, whereas 12.5% reported a large-extent improvement in
listening skills. The same percentage of supervisors (12.5%) were of the view that
no positive or negative change was observed among the employees.
Graph 4.8 - Graph shows the responses of all the supervisors in percentage form
for Question 3.
[Bar chart: supervisors' ratings (1-5) for Q3, response in percentage]
Question 3 concerns whether the person has acquired the skill to get along with
colleagues and supervisors.
Here, 37.5% of the supervisors feel that a large-extent improvement is observed in
their colleagues, 31.25% reported improvement to some extent, 25% reported that
the participants are still trying, and only 6.25%, i.e. one supervisor, felt that
there is no change in the employee's behavior before and after the training
programme.
These results suggest that, overall, the participants' interpersonal skills have
improved and a noticeable change is observed in their work inside as well as
outside the organization.
The overall results suggest that the results-level effect, i.e. improvement in
communication and interpersonal skills, has occurred to some extent and, in a few
cases, to a large extent. The most positive results were obtained for listening
skills, followed by clarity in communication; the least positive results were
observed for rapport-building skills.
Conclusion:
From the above results it can be concluded that the effects described by
Kirkpatrick's model have occurred. There was a strong response at the reaction and
learning levels, whereas the behavior- and results-level effects occurred to some
extent.
Further, it is seen that, among the various factors affecting effective
communication, factors like listening skills, self-image and overall personality
enhancement improved to a large extent, whereas clarity in communication improved
only to some extent. The lower result for clarity in communication may rest on
several factors, such as cultural factors, knowledge of the language, and physical
and psychological barriers.
Recommendations:
A pre-test should be conducted before the training programme to get a better
picture of training effectiveness (a minimal illustration of such a comparison
follows this list).
Training programmes should be organized department-wise (participants from one
division), since with mixed groups content uniformity becomes difficult.
To improve or modify any behavioral training programme, a large number of
responses is required, so continuous evaluation of similar programmes is needed.
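To illustrate what the recommended pre-test/post-test comparison could look like,
here is a minimal sketch that computes the mean rating shift per question. The
rating values are hypothetical placeholders, since no pre-test was actually
conducted in this study.

# Minimal sketch of the recommended pre/post comparison.
# The rating lists below are hypothetical placeholders (no pre-test
# was conducted in this study); a real analysis would use actual data.
from statistics import mean

pre_ratings = {"clarity": [2, 3, 3, 2, 3], "listening": [3, 3, 2, 3, 2]}
post_ratings = {"clarity": [4, 4, 5, 4, 4], "listening": [4, 5, 4, 4, 5]}

for question in pre_ratings:
    shift = mean(post_ratings[question]) - mean(pre_ratings[question])
    print(f"{question}: mean improvement of {shift:.2f} rating points")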
Limitations:
Behavioral change is an ongoing process, which restricts the generalizability of
the results obtained.
Due to the limited sample size of 32, which includes employees and supervisors
from different departments and with various numbers of years of experience, any
statistical analysis becomes intricate.
Chapter 5 - Annexure

Table 5.1 - Raw data showing the participants' responses to the five questions.

Participant    Q1   Q2   Q3   Q4   Q5
S-1            5    4    4    4    Partial
S-2            5    4    4    5    Yes
S-3            4    5    4    5    Yes
S-4            4    4    5    4    Yes
S-5            4    4    4    4    Partial
S-6            4    4    5    4    Yes
S-7            4    3    5    4    No
S-8            4    5    4    4    No
S-9            5    5    5    5    Yes
S-10           5    5    5    4    No
S-11           4    5    4    4    Yes
S-12           4    4    5    5    Yes
S-13           4    5    3    5    No
S-14           5    5    5    5    Yes
S-15           4    5    5    5    No
S-16           5    5    5    5    Yes
Table 5.2 - Raw data showing the supervisors' responses to the four questions.

Supervisor     Q1   Q2   Q3   Q4
S-1            4    4    5    No
S-2            2    2    2    No
S-3            4    4    4    Partial
S-4            4    4    4    Partial
S-5            3    2    4    No
S-6            4    4    3    -
S-7            4    4    4    Yes
S-8            5    5    5    Yes
S-9            4    4    5    -
S-10           4    5    5    Yes
S-11           3    4    3    No
S-12           4    4    3    Yes
S-13           3    4    5    No
S-14           4    4    4    Partial
S-15           4    4    5    Yes
S-16           4    4    3    No
Bibliography