

Assessment & Evaluation in Higher Education
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/caeh20

Enhancing curriculum and delivery: linking assessment to learning objectives
Kathryn L. Combs, Sharon K. Gibson, Julie M. Hays, Jane Saly & John T. Wendt
University of St Thomas, Minnesota, USA
Version of record first published: 11 Dec 2007.

To cite this article: Kathryn L. Combs, Sharon K. Gibson, Julie M. Hays, Jane Saly & John T. Wendt
(2008): Enhancing curriculum and delivery: linking assessment to learning objectives, Assessment &
Evaluation in Higher Education, 33:1, 87-102

To link to this article: http://dx.doi.org/10.1080/02602930601122985

Assessment & Evaluation in Higher Education
Vol. 33, No. 1, February 2008, 87-102

Enhancing curriculum and delivery: linking assessment to learning objectives
Kathryn L. Combs, Sharon K. Gibson, Julie M. Hays*, Jane Saly and John T. Wendt

University of St Thomas, Minnesota, USA



Typical university-wide course evaluations do not provide instructors with sufficient
information on the effectiveness of their courses. This article describes a course assessment
and enhancement model where student feedback can be used to improve courses and/or
programs. The model employs an assessment tool that measures student perceptions of
importance and their current competence in course-specific learning objectives both pre- and
post-course. Information gained from this assessment enables course improvement over time
and also allows for modification in delivery and/or content of the current course. This model
is intended to augment traditional course evaluation mechanisms based on specific and
actionable feedback on learning objectives.

Introduction
Student evaluations have the potential to have a significant impact on improving courses and
increasing student learning and satisfaction. However, the typical university-wide course evalu-
ations completed by students at the conclusion of the semester do not provide instructors with
enough specific information on the effectiveness of their courses. To address this gap in knowl-
edge, this paper describes a course assessment and enhancement model for graduate-level courses
that provides information to enhance curriculum and course delivery.
The centerpiece of the model is a learning objectives assessment tool. The assessment tool,
developed by Combs et al. (2003), measures both the perceived importance of each stated
course objective and how well each objective is being met based on students' perceptions of
their current ability. While this instrument and methodology could be used at the undergraduate
level, we believe that it is most appropriate for the graduate level where students have the
experience and knowledge needed to accurately assess the importance of particular learning
objectives.
The course assessment and enhancement model proposed has several important advantages
for curriculum improvement. The first advantage is that the learning objectives assessment tool
measures the importance of the course content as perceived by the student. Importance, or
perceived relevance of course material, is a strong motivator for adult students in terms of their
learning. Second, the tool can be used at the beginning and the end of the course. Using pre-
course information, the instructor can modify course delivery and/or content during the term.
Then, information gained from the end-of-term assessment can be used in revising the course
learning objectives for subsequent classes. Third, students assess importance and ability based
on learning objectives, not instructor characteristics. This ties the assessment directly to the

*Corresponding author. Email: jmhays@stthomas.edu


course content. Fourth, the tool is individualized for each course yet is constructed in a standard
format. Therefore, the results lend themselves to examination across sections and courses. Such
broad examination can help to coordinate curriculum and thus improve its effectiveness.
Using the learning objectives assessment tool described above does not necessarily take the
place of, or negate the value of, more typical student evaluations of courses, faculty and teaching
methodologies. Student evaluations of courses typically serve two purposes. First, they are a key
input into personnel decisions (e.g. promotion, pay and tenure). Second, they are used for instruc-
tor development and course improvement (Dulz & Lyons, 2000). While our tool could be used as
input for evaluating an instructor's effectiveness, our objective is to provide information to
improve course offerings by highlighting areas where opportunities exist to enhance courses
through changes in content or teaching methodology. Our tool should be viewed as a supplement
providing information on course content and mastery that may well be missing from typical
student evaluations.

Value of student evaluations


Both researchers and faculty disagree on what is actually measured by the typical student evalu-
ations of faculty (SEFs). Researchers have used measures of student satisfaction with the course
and instructor to study teaching effectiveness (White & Ahmadi, 1986; Feldman, 1989; Abrami
et al., 1990). While some studies find that student evaluations of faculty (SEFs) are valid
measures of the quality of instruction (Cohen, 1983; Marsh & Dunkin, 1992; Cashin, 1995;
Greenwald & Gillmore, 1997), other studies dispute this relationship (Dowell & Neal, 1982;
Chacko, 1983; Jacobsen, 1997; Stapleton & Murkison, 2001). Clayson and Haley (1990)
found SEFs to be a measure of how much the students liked the instructor, rather than a measure
of teaching effectiveness. Kim et al. (2000) discovered that, of eight broad practices (professor
character traits, managing the class, assignments, course design, testing, grading, feedback, and
course materials) identified in the literature, the category of professor characteristics (likeable,
flexible, committed, and knowledgeable) was the primary determinant of student satisfaction
with the course. In addition, Dulz and Lyons (2000) found that even students do not believe that
typical SEFs serve their needs. In this study, students focused on the need for the courses to have
relevance. Nowhere in the typical SEF is relevance evaluated. Recognizing this, our course
assessment and enhancement model incorporates an evaluation of the perceived importance of
learning objectives as a key assessment element.

Using learning objectives


Course learning objectives define a course in terms of the outcomes the instructor expects
students to achieve. To this end, many authors advocate that instructors use specific behavioral
statements (e.g. 'define', 'describe', 'critique', 'apply', 'solve') to state expected student compe-
tences (Diamond, 1989; Lowman, 1995; Huba & Freed, 2000; McKeachie, 2002). Diamond is
quite rigorous about this, requiring each learning objective to contain a verb, a set of conditions
and a level of acceptable performance. For example, in a statistics course: 'When given two
events, you will be able to determine if they are independent or if there is a relationship between
them. On the basis of this decision, you will be able to select and use the appropriate rules of
conditional probability to determine the probability that a certain event will occur' (Diamond,
1989, p. 132). Others take a more relaxed approach. For example, McKeachie (2002) recognized
that a specific behavioral objective is simply a means toward an end in that it tries to measure a
more general objective. He recommended including an objective even if it is impossible to state
behaviorally. Appendix A includes samples of learning objectives that we developed for our
various graduate business courses.

Dean (1994) suggested that the use of learning objectives in instructional design results in
more efficient use of instructional time and, therefore, improves learning. In addition, he stated
that instructional design principles can be used to make corrections in the way that content is
delivered during the course. Hutchings et al. (1991) delineated the benefits of assessment as (1)
the ability to compare student learning with objectives; (2) having objectives that are clear and
known; and (3) capturing information that will enable ongoing improvement. Kirkpatrick (1987)
suggested that the effectiveness of training workshops can be measured by (a) participant reac-
tions, (b) change in knowledge, (c) change in behavior on the job, and (d) results. Because our
proposed model focuses on learning objectives, it adheres to these instructional design principles.
And, because it assesses these objectives at both the beginning and end of the course, it also
provides multiple opportunities for ongoing improvement.
A focus on learning objectives provides a natural method by which to tailor evaluations to
individual courses. Davis (1995) calls for faculty to develop an evaluation instrument that will
suit their purposes rather than using standardized forms that do not reflect the individual
content of particular courses. As Dulz and Lyons (2000) note, often a common evaluation
instrument is used across the board for all courses in a department, program, school or even the
entire university. The instrument that we have developed follows a standardized presentation,
but is individualized for each course according to learning objectives. Combs et al. (2003)
detail our administration of the learning objectives tool in our own graduate business courses.
We have found the tool extremely easy to adapt to individual courses across multiple subject
areas. Further, the information we have received through this assessment is much more detailed
for purposes of content improvement than the standardized SEF that our university requires.
The paper proceeds with a description of the course assessment and enhancement model
followed by a detailed description of the learning objective assessment tool and how it is used.
We then discuss the limitations of this model and delineate areas in which future research is
recommended.

Course assessment and enhancement model


Our proposed Course Assessment and Enhancement Model includes the following five phases:
Course Design; Assessment Tool Pre-Course; Modified Course Delivery; Assessment Tool Post-
Course; and Enhancements (see Figure 1).
Our model is based on the work of Shewhart (1986), Deming (1986), Kolb (1984), and Schon
(1987). Shewhart's (1986) and Deming's (1986) Plan-Do-Study-Act cycle provides a model for
continuous process improvement. Kolb's (1984) model of the Learning Cycle begins with
concrete experiences that are the basis for reflection. These reflections are assimilated into
abstract concepts from which new implications for action can be determined. These implications
serve as the guide in creating new experiences for a continuation of the cycle. Finally, Schon
(1987) believed that reflective practitioners use the knowledge they gain through continual
inquiry and analysis to improve instruction. His model includes reflection, interpretation, appli-
cation and engagement. More recently, Seymour (1995) advocated a model for improving quality
in higher education that includes: direction setting, process design and feedback. In his model, an
analysis of the gap between where you are and where you want to be provides information for
continuous improvement. All of these models involve the same process of planning, reflecting on
what has been done and using feedback to learn in order to modify what will be done in the future.
Our model incorporates this same ideology for obtaining information and improving courses and
programs. Although it is true that SEFs have the same intent, our methodology adds effectiveness:
it provides more specific information on course content and allows for adjustment of the course
within the term as well.

Figure 1. Course assessment and enhancement model. [The model cycles through five phases: Course Design (define learning objectives; determine course topics; determine course methodology); Assessment Tool Pre-Course (importance; ability); Modified Course Delivery (refine teaching emphasis: methods, communicate importance of objectives, emphasize low perceived ability areas); Assessment Tool Post-Course (importance; ability); and Enhancements (Course: objectives, delivery, communication; Program: integration/sequencing, curriculum development, multiple sections), which feed back into Course Design.]

Course design
In the first phase, Course Design, instructors develop learning objectives for their courses. Course
learning objectives should be consistent with and support program learning objectives. Initially,
instructors may need training in developing and articulating learning objectives. Short workshops
through a campus faculty development center or external source may need to be offered to assist
instructors in developing learning objectives for their particular course. Additionally, there are
many good tutorials on the Web devoted to developing learning objectives.
The development of course learning objectives helps instructors clarify what they believe are
the key elements of the course they are teaching. Learning objectives help give an organizational
structure to the course by encouraging the instructor to link the various learning objectives as they
relate to the overall content of the class. Once learning objectives are established, they are trans-
lated into course topics to be taught during the semester. Individual instructors then determine the
course methodology and variety of methods that they believe would be best suited to teach these
topics and achieve the learning objectives.

Assessment tool pre-course


The second phase, Assessment Tool Pre-Course, involves incorporating specific learning
objectives for the course into an assessment instrument and administering this assessment to
students during the initial class session. After the course objectives are introduced on the first
day of class, students are asked to rate the importance of each objective and their current level
of ability (competence) on each objective. As a result of this pre-course assessment, the
instructor gains baseline data on student perceptions on these measures prior to the start of the
course. (The tool is discussed in detail below. See Appendix A for an example of the assess-
ment tool.)
Students benefit directly from the use of this learning objective tool in several ways. The use
of the tool requires stated learning objectives that clarify what the course is to deliver, which
contributes to students' understanding of what the instructor views as the important components
of the course content. The identification of learning objectives also helps show students how the
different course elements link to one another.

Modified course delivery


In the third phase, Modified Course Delivery, the instructor has the opportunity, based on the pre-
course results, to refine his/her teaching emphasis. Based on an aggregate analysis of the pre-
course results, instructors can determine whether any change in methodology is needed. Changes
in course delivery methods may address the students' perceived importance of specific objectives
or their perceptions of current ability (e.g. their perceived competence). The instructor can use
these data as a springboard to survey class participants as to various learning or teaching methods
that they would find helpful in clarifying a particular objective (e.g. case studies, problem sets,
tutorials and so forth).
Analysis of the pre-course results allows for potential improvements to be made to the
current class session. This is an enhancement to most evaluation systems that occur only at
the end of class and do not impact on the class that has just completed the course. Huba and
Freed (2000) advocate taking such a learner-centered approach, and use the term 'formative
assessment' to describe the practice of gathering student feedback during a course and using
it to make improvements during the term. This method enables modifications to be custom-
ized based on the pre-course assessment results for a particular group of students in a given
class.

Assessment tool post-course


The fourth phase, Assessment Tool Post-Course, occurs on completion of the course. Students
are again asked to rate their perceptions of importance and ability (competence) on the course
objectives. These post-course results are compared with the pre-course results on an aggregate
basis to determine changes in perceptions of importance and ability. Based on concerns that were
identified in the pre-course phase and/or changes that were made based on prior class feedback,
the instructor may also decide to query students as to whether a particular delivery method was
effective in helping to clarify a particular objective. These findings can be used by the instructor
to analyze shifts in students' perceptions of importance and ability that have occurred as a result
of completing the course.

Enhancements
The final phase, Enhancements, involves determining the enhancements or improvements that are
to be implemented, based on the pre- and post-course results. There are two categories of
enhancements that may result from the comparison of pre- and post-course results on the assess-
ment instrument: Course and Program.
Specific to the Course that has been assessed, the results of the assessment can be used by the
instructor, in concert with the typical student evaluation reports and any supplemental informa-
tion gained on teaching methods, to determine modifications for future classes. Possible course
enhancements that may be suggested include: (1) modification of future course objectives; (2) a
shift in delivery methods, e.g. course structure and/or methodology; and (3) better communica-
tion of objectives and expected performance outcomes. Through the modification of future course
objectives based on student perceptions, this methodology incorporates double-loop learning as
prescribed by Argyris and Schon (1978). Single-loop learning consists of choosing or determining
goals, then taking and evaluating actions in an attempt to achieve those goals. In contrast,
double-loop learning questions the goals themselves and allows those goals to be altered.
As noted in Figure 1, these enhancements link back into the initial
Course Design phase, as they will affect the future development of learning objectives, course
topics and course methodologies.

Although this paper is focused on utilizing the proposed methodology to enhance and
improve particular courses, it also has the potential to contribute to overall program design
enhancements. Assessment results can be analyzed across sections and courses to improve
course consistency and coordination of the curriculum within programs. Faculty can use data
gathered across sections of the same course to hone and standardize the list of objectives. In
addition, coordination of objectives across courses can improve course sequencing, integration
and curriculum development. For example, learning objectives that are applicable across a
curriculum can be included as part of the assessment of each applicable course, thus providing a
perspective on how students view a particular course as facilitating the achievement of that
broader objective.
This methodology should also be appropriate for programmatic learning goals and objectives.
If a program has particular objectives, such as the development of teamwork skills, these can be
assessed pre-program and post-program to determine whether students believe these objectives
are important and whether they perceive that they have improved mastery by the completion of
the program.

Instrument
Description
The instrument (see Appendix A) measures students' beliefs concerning learning objectives in a
particular course along two dimensions. Students are asked to rate the importance of each learn-
ing objective and their current ability (competence) in meeting the objective. Students rate the
importance of each objective on a Likert-type scale ranging from 'Very Unimportant' (1) to 'Very
Important' (7), or 'Don't Know'. Likewise, students rate their current ability in achieving each
objective from 'No Competence' (1) to 'High Competence' (7), or 'Don't Know'. In addition,
students rate 'The course as a whole' using the same two scales.
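To make the scoring concrete, the following minimal sketch shows one way the two ratings for a single learning objective could be tabulated, with the 'Don't Know' option coded as 0 and treated as missing rather than as a low rating. The coding, function name and sample data are illustrative assumptions, not part of the published instrument.

from statistics import mean

def summarize(responses):
    # responses: list of (importance, ability) pairs, each rated 1-7, or 0 for "Don't Know"
    imp = [r[0] for r in responses if r[0] != 0]   # drop "Don't Know" importance ratings
    abl = [r[1] for r in responses if r[1] != 0]   # drop "Don't Know" ability ratings
    return {
        "mean_importance": mean(imp) if imp else None,
        "mean_ability": mean(abl) if abl else None,
        "dont_know_importance": sum(1 for r in responses if r[0] == 0),
        "dont_know_ability": sum(1 for r in responses if r[1] == 0),
    }

# Five hypothetical students rating one objective
print(summarize([(7, 2), (6, 3), (5, 0), (7, 4), (0, 1)]))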
Reflected in the instrument's rating scale is the method suggested by Fishbein and Ajzen
(1975) to measure attitudes and beliefs. In the Fishbein and Ajzen method, attitudes are learned
predispositions to respond to an object in a favorable or unfavorable way and directly influence
behavioral intentions. Beliefs are viewed as hypotheses concerning the nature of the object and
its relation to other objects. Attitudes are a function of both the strength of the belief and the eval-
uation of the attribute. The Fishbein method suggests two salient strategies for behavioral inten-
tion change: one can either change the strength of the belief associated with an attribute or change
the evaluation of the attribute.
Our instrument is also similar to the ServQual instrument developed by Parasuraman et al.
(1988) where expectations and perceptions are compared to determine service quality. Reducing
the gap between these two evaluations by altering expectations or perceptions is the basis for
quality improvement. Our two variables of comparison are perceived importance and perceived
ability (i.e. competence) as related to the learning objectives. Similar to the ServQual methodol-
ogy, we prescribe attempting to alter either perceptions of importance or course methodology to
enhance student learning in order to achieve more favorable outcomes.
It is generally accepted that student motivation to learn plays an important role in learning and
student performance, with more highly motivated students learning more and performing at a
higher level. While many factors can influence a given student's motivation to learn, researchers
have identified student perceptions of the usefulness of the material, the relevance of the material
and/or the importance of the material as a significant driver of student motivation (Bligh, 1971;
Cashin, 1979; Sass, 1989). Indeed, we have found that the more important students believe
a learning objective to be, the greater they perceive their competence to be at the completion of a
course (Combs et al., 2003).

We believe that student self-assessment of competence is a valid measure of actual ability.


Meta-analyses of college students' self-evaluation of learning find that it is positively correlated
with student achievement (Cohen, 1986; Falchikov & Boud, 1998). More recent, individual studies
from various content areas also find overall high correlations between self-assessment results and
ratings based on a variety of external criteria (Mehta & Danielson, 1989; Coombe, 1992; Oscarsson,
1997; Chesebro & McCroskey, 2000; Wortham & Harper, 2002; Fitzgerald et al., 2003). However,
we do acknowledge that student self-assessment of learning could be biased (some research has
shown that low performers tend to overestimate their abilities: Hacker et al., 2000; Moreland et al.,
1981) and plan to revalidate the relationship between student self-assessment and ability with future
research.

Instrument use and analysis


Our Course Assessment and Enhancement Model calls for an analysis of ratings both pre-course
and post-course; this analysis provides useful information for modifying current course delivery and
for guiding future course/program enhancements. The following examples illustrate various assessment outcomes
and describe potential actions an instructor might take in response to the assessment data.

Modified course delivery phase. During the Modified Course Delivery phase, it is useful to
analyze responses on the pre-course assessment for each objective. To do this we plot individual
student responses and average class responses on a graph of importance versus ability for each
learning objective. Figure 2 demonstrates the four possible quadrants for student responses and
possible strategies an instructor can employ for objectives that fall into those quadrants.
Figure 2. Pre-course student evaluations. [The figure plots perceived importance against current ability and divides responses into four quadrants, A-D, with suggested instructor strategies for the objectives that fall into each quadrant.]

Student responses falling in Quadrant A reflect low competence and low importance. Since
previous research has found a strong correlation between the students' perception of the impor-
tance of a topic and student performance, the instructor who finds the majority of responses fall-
ing in this quadrant will want to work on communicating to students the importance of this
learning objective. The instructor may wish to use current examples that illustrate the importance
of this objective or invite subject-matter experts in as guest lecturers to reinforce the importance
of this learning objective in practice.
Quadrant B responses reflect students with low competence who believe the topic is impor-
tant. These students are ready and willing to learn. The instructor does not need to spend time
convincing students that this is an important learning objective, but does need to supply the
students with the necessary knowledge and tools for them to become competent on this objective.
Quadrant C responses reflect students who both see themselves as competent and believe the
topic is important. Since students' perception of current ability may not be accurate, the instructor
may want to implement a performance assessment mechanism (e.g. a test on this content area) to
ensure that the objective has, in fact, been achieved. If students are truly competent in this area
the instructor may wish to reduce the time spent on this objective and increase the time spent on
an objective where students perceive they have low current ability. Alternatively, the instructor
could increase the level of difficulty or depth of coverage.
Quadrant D responses reflect the most potentially challenging objectives. These students not
only believe that they are very competent in this area but also believe that the objective is not
important. As with Quadrant C, the instructor will wish to verify the students' knowledge level.
If the instructor determines that this objective is not critical to the course and confirms that the
students do have sufficient knowledge, this may be an area in which the instructor would reduce
coverage or the instructor might ultimately decide to eliminate this objective as part of this partic-
ular course.
The instructor will first want to look at the average responses for all objectives to see if there
are any problem objectives. The next step is to look at the graph for each objective in case some
of the individual responses are significantly different. In particular, if the average response for a
learning objective shows medium to high competence and one or a few students believe they have
low competence, the instructor might need to direct those low-ability students to outside
resources for tutorials and/or provide additional exercises or readings so that those students can
raise their competence to the level of the rest of the class. Because most of the class is already
competent in this area, the instructor cannot spend the amount of time on the basics that might be
needed for these less competent students. Hence, the analysis of the individual responses to the
pre-course assessment enables instructors to customize the delivery of content to better fit the
level of prior knowledge that students bring to a particular class.
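As an illustration of how this aggregate analysis might be carried out, the sketch below classifies each objective's class-average ratings into the quadrants of Figure 2 and flags individual students whose self-rated ability sits well below the class average. The cut-off at the scale midpoint (4), the gap threshold and all names and data are assumptions made for the example; the article does not prescribe a particular threshold or any software.

from statistics import mean

def quadrant(mean_importance, mean_ability, midpoint=4.0):
    # Quadrants follow Figure 2: importance versus current ability, split at the assumed midpoint
    high_imp = mean_importance >= midpoint
    high_abl = mean_ability >= midpoint
    if high_imp and not high_abl:
        return "B: important, low ability - supply knowledge and tools"
    if high_imp and high_abl:
        return "C: important, high ability - verify competence, add depth or shift time"
    if not high_imp and high_abl:
        return "D: unimportant, high ability - verify competence, consider reducing coverage"
    return "A: unimportant, low ability - communicate the objective's relevance"

def analyze(objective, ratings, gap=2.0):
    # ratings: list of (importance, ability) pairs for one objective, "Don't Know" already removed
    mi, ma = mean(r[0] for r in ratings), mean(r[1] for r in ratings)
    outliers = [i for i, r in enumerate(ratings) if ma - r[1] >= gap]  # students far below class ability
    return objective, quadrant(mi, ma), outliers

print(analyze("Objective 1", [(6, 5), (7, 6), (6, 2), (5, 5)]))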

Enhancements phase. In the Enhancements phase, after completion of the class, an instructor
can use the instrument to improve future course delivery. Post-course responses can be compared
with pre-course responses, by plotting changes in average student perceptions for each learning
objective. (We cannot plot changes by student since the responses are anonymous.) Figure 3
shows potential changes that might occur from pre-course to post-course.
For learning objective 1, student perceptions move from Quadrant A to B. The students are
more convinced that the topic is important, yet still have low confidence in their ability. This may
be a topic that needs extra attention or time for students to absorb the material. For learning objec-
tive 2, students' competence has improved; however, they are less convinced of the importance
of the material. For the instructor, the increase in competence is most critical; however, it is useful
to consider what may be underlying this decrease in perceived importance.

Figure 3. Pre- and post-course student evaluations. [The figure plots importance (vertical axis, Very Unimportant to Very Important) against current ability (horizontal axis, No Competence to High Competence) and shows the average pre- to post-course shift for learning objectives 1-4.]

For example, in our
experience, this type of change has occurred with some required quantitative topics. It may be
that students view these difficult topics as very important until they understand the topic better.
As their understanding improves and the topic becomes more straightforward, it is perceived as
less important (e.g. they become more confident in their ability to learn and apply the material).
Learning objective 3 represents our most desired outcome in that both perceived competence and
importance have increased. It would appear that the coverage of this topic has been accomplished
in an optimal fashion; therefore, no modifications are required. Learning objective 4 shows a
decrease in importance as well as low final ability. This result would merit further analysis as to
the relationship between this objective and the class as a whole. The instructor may want to
review how this material was presented and may even decide to eliminate this material altogether,
depending on how integral this objective is to the overall class content.
Finally, the instrument provides an opportunity to develop common learning objectives
across several sections of the same course, which may have different instructors. It can help
departments avoid duplication and can clarify topic coverage in a sequence of courses. For
example, if several sections of the same course have the same course objectives, the results of the
pre- and post-course assessments could be compared by objective and section. This would be
especially useful in determining whether an objective should be revised or removed from future
course delivery.
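A minimal sketch of the pre/post comparison described above follows: average importance and ability are computed per objective and section, and the pre- to post-course shift is reported so that objectives and sections can be compared. The data layout and names are hypothetical and are not taken from the authors' instrument.

from statistics import mean

def shifts(pre, post):
    # pre/post: dict mapping (section, objective) -> list of (importance, ability) ratings
    out = {}
    for key in pre:
        pre_imp, pre_abl = mean(r[0] for r in pre[key]), mean(r[1] for r in pre[key])
        post_imp, post_abl = mean(r[0] for r in post[key]), mean(r[1] for r in post[key])
        out[key] = {"importance_change": round(post_imp - pre_imp, 2),
                    "ability_change": round(post_abl - pre_abl, 2)}
    return out

pre = {("Sec 1", "Obj 1"): [(3, 2), (4, 3)], ("Sec 2", "Obj 1"): [(3, 2), (2, 3)]}
post = {("Sec 1", "Obj 1"): [(6, 5), (5, 6)], ("Sec 2", "Obj 1"): [(5, 5), (6, 4)]}
print(shifts(pre, post))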

Limitations and recommendations for future research


The limitations of this research are related to the assessment tool we developed and employed.
For an assessment to be useful, it must be both reliable and valid. While we believe that this
instrument is both reliable and valid, we have only anecdotal evidence to support this claim.
Further research is needed to establish both reliability and validity.

A survey instrument is reliable if the same individual would give the same answers if he/she
took the survey again. Although we cannot test the same individual, or even the same class, we
plan to look at particular courses taught by the same instructor to different classes to determine
whether students' assessments of particular learning objectives, in terms of both importance and
ability, are stable. Instructors using our model may want to administer the instrument to several
classes to establish the reliability of the results prior to making any significant changes in their
course.
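One possible way to carry out this stability check, sketched below, is to correlate per-objective class means from two offerings of the same course taught by the same instructor. The choice of statistic, the names and the data are illustrative assumptions; the authors do not specify how reliability would be computed.

from statistics import mean, stdev

def pearson(x, y):
    # sample Pearson correlation between two equal-length lists
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((len(x) - 1) * sx * sy)

offering_1 = [5.8, 4.2, 6.1, 3.9]  # mean importance per objective, first class
offering_2 = [5.6, 4.5, 6.0, 4.1]  # mean importance per objective, later class
print(round(pearson(offering_1, offering_2), 3))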
Validity is related to correctly measuring the characteristic of interest. The focus on learning
objectives as the centerpiece of our assessment process was intended to provide specific informa-
tion on perceptions related to course content (as represented by learning objectives) versus
instructor characteristics. However, we recognize that instructor characteristics and methods of
delivery may impact on student perceptions of the quality of a particular course, as well as their
perceptions of importance and ability. Further research is planned to explore the relationship
between student perceptions of instructor characteristics and student perceptions of importance
and competence as measured by our assessment tool.
In addition, the assessment tool measures perceptions of ability, which may or may not be
correlated with the grade (a more objective measure of student performance) that the student
receives for the course. Further research is planned to determine whether there is a relationship
between students' perceptions of importance and competence and the average grade granted.
Finally, students' concerns about having the requisite skills to succeed in a particular course
(for example, the quantitative skills required in a statistics course) may be related to their
ratings of the importance of particular course objectives. The relationship between students'
self-efficacy and their perceptions of both importance and ability on learning objectives merits
further exploration.

Conclusion
The purpose of this research was to develop a course assessment and enhancement model, in
which student feedback on specific course objectives could be used by the instructor to enhance
course outcomes. This methodology provides instructors with course-specific information on
both the perceived importance of learning objectives and the students' perceived ability to
complete those objectives both prior to the delivery of the course and after the course is
completed. In contrast to the typical course assessment that is done only at the conclusion of the
course, this methodology allows for input from students at the beginning of the course. Instructors
can therefore make course modifications at the start of a particular course, as well as modifica-
tions to subsequent courses. The inclusion of students' evaluation of the importance of individual
learning objectives, which has been found to be related to students' perceptions of ability,
also enables the instructor to assess the perceived relevance of the course and/or learning objec-
tives. In summary, this assessment methodology shows high potential for providing specific and
actionable information on course effectiveness that instructors can use for course and program
improvement.

Notes on contributors
Kathryn L. Combs is a Professor of Business Economics at the University of St Thomas in Minnesota. She
holds a BA in Economics from Washington State University, and an MA and PhD in economics from the
University of Minnesota. Her research interests are in gambling determinants and policy, the economics of
R&D, and technology transfer. She has published in International Journal of Industrial Organization,
Economics Letters, the Journal of Technology Transfer, and Technology Analysis and Strategic Management.
She was formerly on the faculty of California State University, Los Angeles, and was a visiting faculty
member at University of Southern California and the University of Minnesota.

Sharon K. Gibson is an Associate Professor of organization learning and development at the University of
St Thomas. She received her PhD from the University of Minnesota, and holds an MSW from the University
of Michigan and a BS from Cornell University. Her research interests include developmental relationships
such as mentoring and coaching; strategic human resource and organization development; and adult learn-
ing. Her articles have appeared in publications such as Human Resource Development Review, Journal of
Career Development, Human Resource Management, Advances in Developing Human Resources (ADHR)
and Innovative Higher Education. Dr Gibson is on the Editorial Board of ADHR and is co-editor of an issue
on Mentoring and Human Resource Development: Perspectives and Innovations. She has over 20 years of
business, non-profit and consulting experience, and has held various management positions in the human
resources field.

Julie M. Hays is an Associate Professor in the Opus College of Business at the University of St Thomas,
Minneapolis, MN. She holds a BS in Chemical Engineering from the University of Minnesota, an
MBA from the College of St Thomas, and a PhD in Operations and Management Science from the Curtis
L. Carlson School of Management at the University of Minnesota. She was the 1998 recipient of the Juran
Fellowship awarded by the Juran Center for Leadership in Quality at the University of Minnesota. Her
dissertation research was on service quality issues. She was heavily involved in a US$140,000 grant from
the National Science Foundation/Transformations to Quality Organizations to study service guarantees. She
has published research articles in the Journal of Operations Management, Production and Operations
Management, Decision Sciences, Decision Sciences Journal of Innovative Education and the Journal of
Service Research.

P. Jane Saly is an Associate Professor and Chair of the Accounting Department in the Opus College of
Business at the University of St Thomas, Minneapolis, MN. She holds a BSc in Mathematics from Queen's
University, Canada, an MBA from the University of Alberta, and a PhD in Accounting from the University
of British Columbia. Her research interests are executive compensation and executive stock options. She has
published research articles in the Journal of Accounting and Economics, Accounting Horizons, Journal of
Finance and Cases from Management Accounting Practice. Her article, 'The timing of option repricing',
was nominated for the Brattle Prize in Finance.

John T. Wendt is an Assistant Professor in the Opus College of Business at the University of St Thomas,
Minneapolis, MN. He holds a BA Summa Cum Laude in Humanities from the University of Minnesota, an
MA in American Studies from the University of Minnesota and a JD from the William Mitchell College of
Law. He was the Inaugural Recipient of the Business Excellence Award for Innovation in Teaching at the
University of St Thomas. He was also the recipient of the Alumni of Notable Achievement Award, University
of Minnesota College of Liberal Arts. He has published articles in the American Bar Association Entertain-
ment and Sports Lawyer Journal, Midwest Law Review and the Business Journal for Entrepreneurs.

References
Abrami, P.C., S. d'Apollonia, and P.A. Cohen. 1990. Validity of student ratings of instruction: what we do
know and what we do not? Journal of Educational Psychology 82, no. 2: 219-31.
Argyris, C. and D.A. Schon. 1978. Organizational learning: A theory of action perspective. Reading, MA:
Addison-Wesley.
Bligh, D.A. 1971. What's the use of lecturing? Devon, UK: Teaching Services Centre, University of
Exeter.
Cashin, W.E. 1979. Motivating students. Manhattan, KS: Kansas State University, Center for Faculty
Evaluation and Development in Higher Education.
Cashin, W.E. 1995. Student ratings of teaching: the research revisited. Manhattan, KS: Kansas State
University, Center for Faculty Evaluation and Development in Higher Education.
Chacko, T.I. 1983. Student ratings of instruction: a function of grading standards. Educational Research
Quarterly 8, no. 2: 19-25.
Chesebro, J.L. and J.C. McCroskey. 2000. The relationship between students' reports of learning and their
actual recall of lecture material: a validity test. Communication Education 49, no. 3: 297-301.
Clayson, D.E. and D.A. Haley. 1990. Student evaluations in marketing: what is actually being measured?
Journal of Marketing Education 12, no. 3: 9-17.
Cohen, P.A. 1983. Comment on a selective review of the validity of student ratings of teaching. Journal of
Higher Education 54, no. 4: 448-58.

Cohen, P.A. 1986. An updated and expanded meta-analysis of multisection student rating validity studies,
paper presented at the 70th Annual Meeting of the American Educational Research Association, San
Francisco, CA.
Combs, K.L., S.K. Gibson, J.M. Hays, P.J. Saly, and J.T Wendt. 2003. Development and use of an assessment
tool based on course learning objectives, paper presented at the 46th Annual MWAOM Conference,
St. Louis, MO.
Coombe, C. 1992. The relationship between self-assessment estimates of functional literacy skills and
basic English skills test results in adult refugee ESL learners. Columbus: Ohio State University.
Davis, M.H. 1995. Staging a pre-emptive strike: turning student evaluation of faculty from threat to
asset, paper presented at the 46th Annual Meeting of the Conference on College Composition and
Communication, Washington, DC.
Dean, G.J. 1994. Designing instruction for adult learners. Malabar, FL: Krieger.
Deming, W.E. 1986. Out of the crisis. Cambridge: Cambridge University Press.
Diamond, R. 1989. Designing and improving courses and curricula in higher education. San Francisco:
Jossey-Bass.
Dowell, D.A. and J.A. Neal. 1982. A selective review of the validity of student ratings of teaching.
Journal of Higher Education 53, no. 1: 51-62.


Dulz, T. and P. Lyons. 2000. Student evaluations: help or hindrance? [Electronic version]. Journal of the
Academy of Business Education, 1: (Proceedings). Available online at: http://www.abe.villanova.edu/
proc2000/n038.pdf (accessed 20 February 2006).
Falchikov, N. and D. Boud. 1998. Student self-assessment in higher education: a meta-analysis. Review of
Educational Research 59: 395-430.
Feldman, K.A. 1989. The association between student ratings of specific instructional dimensions and
student achievement: refining and extending the synthesis of data from multisection validity studies.
Research in Higher Education 30, no. 6: 583-645.
Fishbein, M. and I. Ajzen. 1975. Belief, attitude, intention, and behavior: An introduction to theory and
research. Reading, MA: Addison-Wesley.
Fitzgerald, L., E. Ferlie, and C. Hawkins. 2003. Innovation in healthcare: how does credible evidence
influence professionals? Health & Social Care in the Community 11, no. 3: 219-28.
Greenwald, A.G. and G.M. Gillmore. 1997. No pain, no gain? the importance of measuring course work-
load in student ratings of instruction. Journal of Educational Psychology 89, no. 4: 743-51.
Hacker, D.J., L. Bol, D.D. Horgan, and E.A. Rakow. 2000. Test prediction and performance in a classroom
context. Journal of Educational Psychology 92, no. 1: 160-70.
Huba, M.E. and J.E. Freed. 2000. Learner-centered assessment on college campuses: shifting the focus
from teaching to learning. Boston, MA: Allyn & Bacon.
Hutchings, P., T. Marchese, and B. Wright. 1991. Using assessment to strengthen general education.
Washington, DC: AAHE.
Jacobsen, M. 1997. Instructional quality, student satisfaction, student success, and student evaluations of
faculty: what are the issues in higher education? (ERIC, Document Reproduction Service number
423786).
Kim, C., E. Damewood, and N. Hodge. 2000. Professor attitude: its effect on teaching evaluations. Journal
of Management Education 24, no. 4: 458-73.
Kirkpatrick, D.L. 1987. Evaluation of training, in: R. Craig (Ed.) Training and development handbook: a
guide to human resource development. New York: McGraw-Hill.
Kolb, D.A. 1984. Experiential learning: experience as the source of learning and development. Englewood
Cliffs, NJ: Prentice Hall.
Lowman, J. 1995. Mastering the techniques of teaching. San Francisco: Jossey-Bass.
Marsh, H.W. and M.J. Dunkin. 1992. Students' evaluations of university teaching: a multidimensional
perspective, in: J.C. Smart (Ed.) Higher education: handbook of theory and research. Berlin: Springer
Verlag, 143-233.
McKeachie, W.J. 2002. McKeachie's teaching tips: strategies, research, and theory for college and
university teachers. Boston, MA: Houghton Mifflin.
Mehta, S. and S. Danielson. 1989. Self-assessment by students: an effective, valid, and simple tool?, paper
presented at the ASEE National Conference, Charlotte, NC.
Moreland, R., J. Miller, and F. Laucka. 1981. Academic achievement and self-evaluations of academic
performance. Journal of Educational Psychology 73: 335-44.
Oscarsson, M. 1997. Self-assessment of foreign and second language proficiency. Encyclopedia of
Language and Education 7: 175-87.

Parasuraman, A., V.A. Zeithaml, and L.L. Berry. 1988. SERVQUAL: a multiple-item scale for measuring
consumer perceptions of service quality. Journal of Retailing 64, no. 4: 12-40.
Sass, E.J. 1989. Motivation in the college classroom: what students tell us. Teaching of Psychology 16: 86-88.
Schon, D.A. 1987. Educating the reflective practitioner: toward a new design for teaching and learning in
the professions. San Francisco: Jossey-Bass.
Seymour, D. 1995. Once upon a campus. Phoenix, AZ: Oryx Press.
Shewhart, W.A. 1986. Statistical method from the viewpoint of quality control. London: Dover Publishing.
Stapleton, R.J. and G. Murkison. 2001. Optimizing the fairness of student evaluations: a study of correla-
tions between instructor excellence, study production, learning production, and expected grades.
Journal of Management Education 25, no. 3: 269-91.
White, C.S. and M. Ahmadi. 1986. A novel approach to the analysis of a teacher-evaluation system.
Journal of Education for Business, October: 24-27.
Wortham, K. and V. Harper. 2002. Learning outcomes assessment. Available online at: http://www.aacsb.
edu/knowledgeservices/LearningOutcomes.pdf (accessed 2 September 2005).

Appendix 1. Course Assessment Survey



The primary purpose of this questionnaire is to improve the course offerings at Institution Name. This data will be used to help us design courses
that are both relevant and effective. It is anonymous. Participation is voluntary and will not affect your status in this class or in any other way at
Institution Name. If you have any concerns about this research you can contact Name (contact email or contact phone) or Institution Name Institu-
tional Review Board (IRB log #01-084-01) at institution phone. By returning the survey you are agreeing to participate in this study.

Demographics

Gender: Male / Female
Age (in years): <21 / 21-25 / 26-30 / 31-40 / >40
Years of business work experience: <1 / 1-3 / 4-6 / 7-10 / >10
How many hours do you typically work at all paid employment per week?: 0 / 1-20 / 21-40 / 41-50 / >50
How many credits are you taking this semester?: 0-3 / 4-6 / 7-9 / 10 or more
What is the highest level of school you have completed or the highest degree you have received?: Bachelor's degree / Master's degree / Professional school degree (MD, DDS, DVM, LLB, JD) / Doctorate degree (PhD, EdD)
Your current GPA: <2.5 / 2.5-2.9 / 3.0-3.4 / 3.5-4.0 / na
Degree program: Evening MBA / Day MBA / MBA-HR / MIM
Program concentration: accounting / environmental management / finance / financial services management / franchise management / health care management / human resources / information management / international management / JD/MBA / management / manufacturing systems / marketing / non-profit management / real estate appraisal / risk and insurance management / sports and entertainment management / venture management
For my program and concentration this course is: required / an elective
How many courses have you completed for your program?: 0-2 / 3-5 / 6-8 / 9-11 / >11
What grade do you expect to receive in this course?: A / A- / B+ / B / B- / C / <C

Learning Objectives
Please rate the following course learning objectives on both their importance to you at this time and your confidence in your competence/ability to
perform these objectives at this time.

Importance to You: Very Unimportant (1) to Very Important (7), with Neutral (4); Don't Know (0)
Your Current Ability: No Competence (1) to High Competence (7), with Some Competence (4); Don't Know (0)

Individual course objectives are listed here, each rated 1-7 (or 0 = Don't Know) on both scales. Below are some example objectives from various courses.

Be able to identify the components of a total reward system that apply to employees in organizations and understand what is meant by total compensation/rewards and direct compensation.
Define and give examples of various cost concepts and measures (sunk cost, opportunity cost, marginal and average cost, fixed cost, economies of scale, scope, and learning), and explain their relevance in managerial decision-making.
Effectively apply foundational knowledge of base compensation programs and compensation design principles in the development of a compensation philosophy for an organization.
Explain the importance of models to help understand complex management situations.
Know what linear programming is and when it is useful.
Be able to identify the internal and external organizational contexts (such as market strategy, organizational lifecycle stage, etc.) of compensation practice as they relate to various types of organizations.
Be able to determine conditional probability and understand why this is useful.
Be able to calculate elasticities of demand and use the concept to inform pricing decisions.
Be able to perform an industry analysis using a five-forces approach.
A. The course as a whole.
