Documente Academic
Documente Profesional
Documente Cultură
Course Outcomes:
Students should be able to
1. understand, at the introductory level, computer architecture, operating systems and networks,
automata and models of computation, programming languages and compilers, algorithms,
databases, security and information assurance, artificial intelligence, graphics, and
social/ethical issues of computing.
(Assessment through homework and exams)
Assessment Methodology:
• The course outcome was tied to all of the questions on the exam and the homework and the
professional brief writing assignments.
• The student’s class average was used as the overall metric.
• The numeric score per outcome was averaged over the whole class.
• Data for all students who completed the course was used for this analysis.
Performance Metric
The performance metric to analyze outcomes for this course is as follows:
Excellent corresponds to greater than 90 percent.
Satisfactory corresponds to 70 to 90 percent.
Marginal corresponds to 60 to 70 percent.
Unsatisfactory corresponds to less than 60 percent.
Results:
The outlined process results in numeric assessment scores:
If the class average is considered only for students who completed the course (submitted assignments and
took exams), then the assessment results are:
Thus, the outcome was satisfactory. The performance measures translate as follows: Excellent is 4,
Satisfactory is 3, Marginal is 2, and Unsatisfactory is 1.
Actions:
The following is being considered:
• In 2009 both faculty and student evaluations concluded that the course really needed more contact
time and more homework for students to have a better understanding of the material and the
discipline. Therefore, the course was changed to a two-credit course with two hours of class time,
which was put in place for 2010. There was no significant change in assessment results from this
change.
• Further consideration to the alignment of the course with the program outcomes will be
considered for redevelopment of the course for fall 2010. The faculty will discuss whether this
course should impact program outcomes, at an introductory level, e.g., for Theory, Technical
Communications, and Ethics. This 2009 consideration was postponed until after the results for the
course modifications could be evaluated.
1
• In 2010, students specifically suggested that they might understand the material better if only one
faculty member did the main course content. Therefore, for 2011, it is being considered whether
the primary instructor should teach all of the assessed material and only have other faculty come
in to present their research areas.
Program Outcomes:
No course outcomes (CO) for this course are currently mapped to program outcomes (PO).
2
CSE 113: Introduction to Programming
Course Assessment for Undergraduates in Spring 2010
2 Assessment Methodology
For the purpose of objective assessment, numeric scores were calculated to measure each student’s
mastery of each learning outcome. The following table shows the metrics used for assessment. The
metrics include midterm and final exam questions as well as grades on the projects and the final lab
assignment. An average of students’ lab assignment grades was also used to measure their ability to use
the software tools introduced. Homework and quiz grades were not used for assessment because they
were part of the initial topic presentation and were thus not a good measurement of topic mastery.
Outcome Metrics
Midterm Section 3: Binary Numbers
1 Midterm Section 4: Boolean Logic
Final Section 0: Basic Concepts
Project 0 Design
2
Final Section 4: Program Design
Midterm Section 2: Lexical Scoping
3 Final Section 1: Pointer Arithmetic
Final Section 2: Activation Records
Project 0
4 Project 1
Lab Final
Labs1
5
Lab Final
Table 1: Metrics for Performance Assessment
Each of the metrics is of approximately equal importance to each corresponding learning objective, so the
numeric scores were calculated as the normalized average of the scores. Normalization of scores involved
1
The grade for labs is computed by taking the average of all twelve lab assignments throughout the semester.
CSE 113 Undergraduate Course Assessment Spring 2010
dividing each score by the maximum number of possible points. Some lab exercises and projects included
extra credit opportunities. For this assessment, grades were limited to a maximum of 100% to make
average scores representative of the class as a whole. Finally, a score was computed for each outcome by
averaging each student’s individual score for the outcome.
Only the performance of Computer Science majors was evaluated. There were eleven Computer Science
majors enrolled in the course.
Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to
assess overall performance. An average of less than 50% signified a lack of comprehension. An average
between 50% and 65% showed a marginal understanding. A passing grade between a D and a B
demonstrated satisfactory ability, but not perfection. A grade better than an 85% (B) showed excellent
comprehension and effort.
Percentage Performance
0 – 54 Unsatisfactory
55 – 64 Marginal
65 – 85 Satisfactory
85 – 100 Excellent
Table 2: Numerical Ranges for
Performance Measurement
One student only showed up for class a handful of times before the Midterm and submitted only three
assignments. The student did not choose to Withdraw from he class, so his grades are included in the
above scores. The student’s lack of participation, however, is not at all indicative of the performance of
the class as a whole. If the student’s scores were not considered, then the above scores would be 5-6
percentage points higher.
2
CSE 113 Undergraduate Course Assessment Spring 2010
The performance measurement for objective 1 shows that Computer Science (CS) majors gained a
marginal comprehension of basic concepts. However, the scores shown here are not representative of the
performance of the class as a whole. Students from Electrical Engineering, Physics, and Mathematics
achieved much better performance in this area. A likely explanation is that the Freshmen CS majors did
not have a very strong background in mathematics. A continued effort must me made to introduce
mathematics and to provide sufficient homework for the students to master mathematical concepts such as
number systems and Boolean logic.
The analysis here shows that design skills were the weakest area of this class. The conclusion is consistent
with the observations of the instructor. There were four students who did not even turn in the design
documents for Project 0. The lack of submission could be a combination of motivational issues and a lack
of direction. Nevertheless, the scores for the rest of the students were also low for the design documents.
In the current instructor’s opinion, the course curriculum needs to place more emphasis on project design
principles. See the Remedial Action section for a discussion of some suggestions.
The performance measurement for outcome 3 was marginal, but it was the highest score of any of the
outcomes. If the score is dropped from the student who failed to take the midterm and the final, then the
average becomes 68.24, Satisfactory. The evaluation shows that the class successfully understood
concepts underlying the C language. Since Fall 2009, CSE 113 has been focused on language and
program design, since the more theoretical aspects were transferred to CSE 101. The result for objective 3
shows that the emphasis on the C language has been effective in improving student understanding.
The result for outcome 4 was Unsatisfactory. More than anything, however, the results show a lack of
motivation. There were three students who did not turn in either of the projects nor the lab final. The
schedule provided ample time for the completion of the assignments. During the development times for
the projects, both the instructor and class TAs help extended office hours. None of the students asked for
help. Their lack of participation gave them 0% scores for outcome 4. Removing these students from the
average, we obtain a new measurement of 65.94, Satisfactory. Removing one other student from the
average who did not turn in either Project 1 or the Lab Final, we obtain 74.93, which is an encouraging
score. This score much better reflects the performance of the students who made the effort to complete
and submit the projects.
The result for outcome 5 was Marginal. The average score is slightly misleading, however, because the
majority of students achieved either Satisfactory of Excellent performance. The results are influenced by
four students who did not attend the lab final and who did not submit over 40% of the lab assignments.
By not submitting their lab assignments, they clearly did not demonstrate any mastery of the tools
3
CSE 113 Undergraduate Course Assessment Spring 2010
presented. A much better score is obtained by considering students who turned in at least 60% of the lab
assignments. In this case, we obtain a score of 82.93, Satisfactory.
During the class lectures, the students were exposed to many topics in Computer Science including
programming language paradigms, computer architecture, and operating systems. However, these topics
were reinforced with homework assignments or exam questions because they did not correspond to
primary course objectives. Therefore, the contributions of this course to other program objectives cannot
be numerically measured and will be best analyzed by the professors of the students’ later courses.
The students who regularly attended lecture and lab this semester and attempted to complete assignments
achieved at least a marginal understanding of the material. Several students achieved Satisfactory to
Excellent performance for a majority of the objectives. Therefore, the basic approach in material
presentation seems sound. In particular, the assignment of small homework problems throughout the first
half of the semester helped students to understand the basics of the C language and was an improvement
over previous semesters.
The greatest weakness that this assessment has revealed is in programming project design. In the
instructor’s opinion, the curriculum for this course needs to have more focus on Software Engineering
strategies, as well as more applied program design exercises. Many students refused to turn in design
documents, and many others clearly spent much less time on project design than implementation. Future
instructors must make design methodologies interesting and useful to the students. Software planning
4
CSE 113 Undergraduate Course Assessment Spring 2010
should be introduced early in the semester and reinforced in homework and lab assignments instead of
only in the projects. A useful change would be to add an interactive group “software planning” lab during
the first half of the semester. In addition, it would be useful to modify lab assignment later in the semester
to correlate with the project design activities so that students could focus on project design before
beginning implementation.
One more suggestion for future classes would be to incorporate more applied software design into the
discussion of C data types. A strong effort was made in the second half of the semester to map problems
to data structure representations. However, it would be better to discuss practical applications of
structures, unions, and linked lists as they are introduced early in the semester.
5
CSE 113: Introduction to Programming
Course Assessment for Undergraduates in Fall 2010
Outcome Metrics
Midterm Section 2: Lexical Scoping
1 Final Section 0: Basic Concepts
Midterm Section 3: Binary Numbers
Project 0
2
Final Section 4: Program Design
Midterm Section 5: Short Answer
3
Final Section 3: Short Answer
Project 0
4
Lab Final
Labs1
5
Lab Final
Each of the metrics is of approximately equal importance to each corresponding learning objective, so the
numeric scores were calculated as the normalized average of the scores. Normalization of scores involved
dividing each score by the maximum number of possible points. Some lab exercises and projects included
extra credit opportunities. For this assessment, grades were limited to a maximum of 100% to make
average scores representative of the class as a whole. Finally, a score was computed for each outcome by
averaging each student’s individual score for the outcome.
Only the performance of Computer Science majors was evaluated. There were 42 Computer Science
majors enrolled in the course.
1The grade for labs is computed by taking the average of all ten lab assignments throughout the semester.
CSE 113 Undergraduate Course Assessment Fall 2010
Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to
assess overall performance. An average of less than 50% signified a lack of comprehension. An average
between 50% and 65% showed a marginal understanding. A passing grade between a D and a B
demonstrated satisfactory ability, but not perfection. A grade better than an 85% (B) showed excellent
comprehension and effort.
Percentage Performance
0 – 54 Unsatisfactory
55 – 64 Marginal
65 – 85 Satisfactory
85 – 100 Excellent
Number of Students
Outcome Unsatisfactory Marginal Satisfactory Excellent Score Performance
1 2 3 20 16 80.2 Satisfactory
2 6 2 13 20 77.66 Satisfactory
3 9 6 21 5 67.09 Satisfactory
4 10 1 12 18 72.07 Satisfactory
5 8 1 11 21 75.54 Satisfactory
Five students stopped showing up during the semester. Only one student choose to Withdraw from the
class, so that student's grades are not included in the above scores. These student’s lack of participation,
however, is not at all indicative of the performance of the class as a whole.
The performance measurement for objective 1 shows that Computer Science (CS) majors gained a
Satisfactory comprehension of basic concepts. These concepts were taught in class to emphasize the fact
that one must learn certain aspects of how computers work in order to learn how the C language works.
The first five weeks of the semester were dedicated to these basic concepts.
Of the 41 students, one student did not take either of the exams that were used to score this metric.
2
CSE 113 Undergraduate Course Assessment Fall 2010
Students were given many design problems and were told to develop solutions that were solvable in C.
Despite repeated attempts to emphasize the importance of design, students did not take it seriously. One
homework was given (which was not calculated into Outcome 2). Students had the opportunity to redo
the assignment for more points, but many chose not to. More students had Unsatisfactory ratings than
Outcome 1 because a significant portion of Project 0 was used to calculate Outcome 2 of this course
metric.
There were 20 students who had an outcome of Excellent because they took advantage of the extra credit
offered which where programming based. The extra credit tested more advanced concepts implementing
data structures.
The performance measurement for Outcome 3 was satisfactory, but it was the weakest score of any of the
outcomes. The metrics used for this Outcome were open ended questions which tested students
knowledge on these basic programming elements. Of the two metrics used in the calculation of the
outcome, students overall did better on the final than the midterm. Below is a table showing averages of
the data set from the two metrics used.
This suggests that students were gaining a mastery of these basic elements in programing by the end of
the course.
The result for outcome 4 was Satisfactory. This data is more skewed than the rest because towards the end
of the semester which this data was taken from, results show a lack of motivation. There were quite a few
students who did not turn in portions of the project. The schedule provided ample time for the completion
of the assignment, however students are usually busy with other classes as well. During the development
times for the projects, both the instructor and class TAs help extended office hours.
Many students did poorly on the lab final. This could have been due to the fact that this was the first time
students had to accomplish a small programming assignment without any help from the instructor or TA.
The result for outcome 5 was Satisfactory. Throughout the semester, students had ample time to get
familiar with the programming environment and tools. The TA and instructor were at all the lab sessions
to troubleshoot any problems. This score is a bit misleading in the fact that some students stopped turning
in labs at the end of the semester. This was possibly due to their busy schedules or lack of interest during
the busy time.
3
CSE 113 Undergraduate Course Assessment Fall 2010
Number of Students
Outcome Unsatisfactory Marginal Satisfactory Excellent Score Performance
1 0 0 0 3 88.33 Excellent
2 0 0 1 2 83.17 Satisfactory
3 1 1 1 0 62.13 Marginal
4 1 0 2 0 70.17 Satisfactory
5 0 2 1 0 64 Marginal
During the class lectures, the students were exposed to many topics in Imperative design and C
Programming. This includes top-down design and many of the C language concepts. These topics were
reinforced with homework assignments or exam questions.
*this number includes IT students while the Fall 2010 outcomes include only CSE students.
Outcome 2 improved from Marginal to Satisfactory while Outcome 5 was reduced from Mastery to
Satisfactory. Enrollment has gone up from Fall 2009 to Fall 2010.
Outcome 3 (understanding the basic elements of programming) was the lowest scoring outcome of the
course outcomes. Instructor feels that not enough practice was given to the students to master this
outcome. One suggestion for improvement is to assign more homework targeting this course outcome.
Half way through the course, students were finishing up learning about basic concepts (Outcome 1) and
4
CSE 113 Undergraduate Course Assessment Fall 2010
were just starting to learn about basic elements of programming (Outcome 3), hence the lower scores on
the midterm than on the final.
Outcome 4 was the second weakest scoring outcome. Students were motivated at the beginning of the
semester to implement the small programming assignments. Toward the end of the semester and
especially after introducing linked lists, students were unmotivated to learn. There is a big gap between
teaching students basic programming elements (Outcome 3) and basic data structures (Outcome 4).
Especially after the labs where students where introduced to their first data structures, many of them
seemed to have given up. This translated to the lack of motivation for project 0 which was used as part of
the metric for Outcome 4.
Instructor believes that labs should be re-written to introduce more of the concepts needed to manipulate
the basic data structures. Students understand pointers and structs separately, but putting them together
poses a challenge. Perhaps introducing a lab that covers the synthesis of these two concepts. This would
build the confidence in the students to be able to handle implementing these basic data structures.
Another point to contemplate is in programming project design. More lectures towards these
methodologies can be developed in order to solidify ground with these concepts. In addition, a larger
homework or mini-project can be given during the first half of the semester focusing on imperative design
may be given because not many of the advanced programming concepts have to be known in order for
students to learn this.
5
Class Assessment: Spring 2010 CSE122-- Algorithms and Data Structures
Assessment Results
Course Outcomes
At the end of the course, a student should be able to:
1. Understand fundamental data structures and their benefits;
2. Understand various sorting and searching algorithms;
3. be able to analyze the performance of such algorithms and data structures;
4. Design and implement simple software applications using appropriate algorithms and data
structures.
Assessment Methodology:
Each course outcome was tied to one or more questions in the comprehensive final exam,
midterm exam, quizzes and homework
A formula was used to compute a normalized weighted sum from the scores for those exam
questions, quizzes and homework
A table containing one numeric score per student per outcome was computed
The table was aggregated along outcomes to obtain a numeric score per outcome averaged over
the whole class
For example, the numeric score for Course Outcome 1 for student S1 was obtained by
1. Taking S1's score on the third question on the final exam and dividing it by the maximum
possible points on that question
2. Doing a similar computation on S1's score on the second question on the midterm exam, the
first homework, questions one, two and three on second homework, questions two and three on
the third homework, and quizzes one and two;
3. Adding up the 7 values obtained in (1) and (2) after multiplying them by the respective
coefficients
4. Repeating the above for each undergraduate student;
5. Averaging these numbers over the whole class to get a numeric assessment score for Outcome
1 averaged over the whole class.
6. The same process was applied for each outcome.
Course Formula
Outcome
Outcome 1 0.1*H1all + 0.1*H21,2,3 + 0.1*H32,3 + 0.15*Q1 + 0.15*Q2 + 0.2*M2 + 0.2*F3
Outcome 2 0.5*Report + 0.2*H53 + 0.1*Q4 + 0.2*M3
Outcome 3 0.3*H72,3 + 0.25*M4 + 0.25*M5+ 0.25*F4
Outcome 4 0.6*ProgAssignall + 0.2*M6 + 0.2*F5
Notations: H21,2,3 means homework 2 questions one + two + three, H is for homework, M for
Midterm, F for finals and Q for quizzes. The suffix is the question number. ProgAssignall is the
sum of four programming assignments given in class.
Performance Metric
Based on the answers to the questions, it was felt that a score between 69 and 80 percent implied
that the basic concept had been grasped, a score of 81 percent or more indicated a superior
performance, a score less than 55 percent implied that the basic concepts had not been learned
with a 56 to 68 percent score implying a marginal state:
Results
The following numeric assessment scores were obtained through the process outlined above:
Outcome Class Average Performance
1 72.48% Satisfactory
2 80.49% Excellent
3 79.93% Excellent
4 71.63% Satisfactory
Thus, outcomes 2 and 3 are excellent, and 1 and 4 are satisfactory.
Background Information
The CSE 122: Algorithms and Data Structures course has been identified as one of the most
retention-sensitive courses that are required of Computer Science and Information Technology
majors at New Mexico Tech. As part of the department’s efforts to continually improve the
retention of CS and IT students, therefore, it was decided in 2009-2010 that regular faculty
members, instead of TAs or Lecturers who have been teaching the course for several years,
should take responsibility and teach the course on a rotating basis, starting with senior faculty.
This instructor, Andrew Sung, thus took the first turn in the fall of 2010; he had last taught
CS122 (renamed CSE122 recently) more than 10 years ago.
In NMT’s sample curriculum for CS and IT majors, CSE122 is taken during semester 2. As a
result, the CSE122 class is always much larger in spring than in fall, since the majority of the
lower-division students are usually able to follow the sample curriculum. Due to the recently
declined enrollment in CS/IT, a total of only eight students enrolled in the fall 2010 class,
including three non-majors, (at least) two of them had failed the course before and were
repeating it for fulfilling their degree requirements, and five CS/IT majors.
An entry survey was conducted at the semester’s beginning of the small class’ background,
preference, and level of knowledge regarding computer science, mathematics, and programming
languages. Perhaps surprisingly, all students reported sufficient familiarity with the C language
to make coverage of any part of it unnecessary. As expected, it was clear that the subject students
were least familiar with is formal techniques for the analysis of algorithms and data structures.
Accordingly, course outcomes and assessment method were developed, as follows.
Course Outcomes
At the end of the course, a student should have learned and understood:
1. the basics of programming methodology
2. the basic techniques for the design and analysis of algorithms
3. a variety of examplary data structures and algorithms
Students are also expected to have:
4. gained experience in intermediate-level programming in C
Assessment Method
As no TA was assigned and the instructor, in response to the class’ background, had adopted
only a tentative syllabus to allow for flexible scheduling of lectures on the selected topics, a
simple assessment method based on the class’ performance in the (midterm and final) exams and
the four programming assignments was used to measure student learning with respect to the
course outcomes. At the end, five students (three majors and two non-majors) received A or A-,
a major had missed a number of lectures and some assignments due to sickness received C, a
non-major received a D, and another (possibly major) student received F for having missed most
lectures, assignments, and exams for unknown reasons. (Repeated efforts by the instructor to
contact the student had failed.)
In the instructor’s assessment, the student learning has met the course outcomes to a low-
satisfactory degree based on an overall average of 63%, according to the department’s score-
based assessment criteria shown below. This corresponds to a class score of 3 (satisfactory) with
respect to the department’s educational program outcome, item 1a (small scale programming),
which is the only outcome that the course contributes to.
As far as CS majors only are concerned, the overage average is 67% (satisfactory) with respect
to course outcomes, and a score of 3 (satisfactory) with respect to the relevant educational
program outcome.
Class average Performance
< 40% Unsatisfactory
40 – 59% Marginal
60 – 74% Satisfactory
75 – 100% Excellent
3. Describe how the class mechanism supports encapsulation and information hiding;
4. Design, implement, and test the implementation of “is-a” relationships among objects using a
class hierarchy and inheritance;
5. Compare and contrast the notions of overloading and overriding methods in an object-oriented
language;
6. Explain the relationship between the static structure of the class and the dynamic structure of the
instances of the class; and
2 Assessment Methodology
For the purpose of objective assessment, numeric scores were calculated to measure each student’s
mastery of each learning outcome. The following table shows the metrics used for assessment. The
metrics include midterm and final exam questions as well as grades on the projects and the homework
assignments. Quiz grades were not used for assessment because they were part of the initial topic
presentation and were thus not a good measurement of topic mastery.
Outcome Metrics
Midterm Short Answer Question 5
Midterm Free Answer Question 1
Midterm True/False Question 3
1 Final Multiple Choice Question 8
Final Short Answer Question 1
Final Short Answer Question 2
Final Short Answer Question 9
Project 0
Project 1
2
Final Free Answer Question 1
Homework 3
Midterm Short Answer Question 2
3 Midterm Short Answer Question 10
Final Short Answer Question 10
Project 1
4
Final Free Answer Question 2
5 Final Multiple Choice Question 4
Final Multiple Choice Question 10
CSE 213 Undergraduate Course Assessment Fall 2010
Each of the metrics is of approximately equal importance to each corresponding learning objective, so the
numeric scores were calculated as the normalized average of the scores. Normalization of scores involved
dividing each score by the maximum number of possible points. Some projects included extra credit
opportunities. For this assessment, grades were limited to a maximum of 100% to make average scores
representative of the class as a whole. Finally, a score was computed for each outcome by averaging each
student’s individual score for the outcome.
Only the performance of Computer Science majors was evaluated. There were 12 Computer Science
majors enrolled in the course.
Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to
assess overall performance. An average of less than 50% signified a lack of comprehension. An average
between 50% and 65% showed a marginal understanding. A passing grade between a D and a B
demonstrated satisfactory ability, but not perfection. A grade better than an 85% (B) showed excellent
comprehension and effort.
Percentage Performance
0 – 54 Unsatisfactory
55 – 64 Marginal
65 – 85 Satisfactory
85 – 100 Excellent
2
CSE 213 Undergraduate Course Assessment Fall 2010
5 2 0 3 7 83.3
6 3 0 3 6 73.6
7 3 0 3 6 72.2
One student only showed up for class a handful of times before the Midterm and stopped showing up
after the midterm test. The student did not choose to Withdraw from the class, so their grades are included
in the above scores. The student’s lack of participation, however, is not at all indicative of the
performance of the class as a whole.
The class outcomes were objective 3 was excellent while the rest were satisfactory.
The performance measurement for objective 1 shows that Computer Science (CS) majors gained a
satisfactory comprehension of Object-oriented philosophy. Improvements in this area would include more
practice in specific concepts of polymorphism and inheritance.
This area is strong since the OOP language selected was Java. Many of the basic programming concepts
can be carried over from CSE113 and many of the students were able to pick up the additional OOP
knowledge with ease. Perhaps more challenging small programs could be chosen.
This was the strongest of the objective outcomes. This was a fairly easy objective to accomplish because
the concepts are fairly easy to grasp. This objective can be combined with objective 1 as a part of OOP
philosophy.
This was one of the lower scoring outcomes. Since the course philosophy focuses more on Object-
oriented Programming rather than just Java, this course objective can be raised if there was an additional
programming assignment testing knowledge of the “is-a” concept given in another OOP language such as
C++.
This was another strong objective outcome. This is another relatively easy concept to teach to the students
and many of them picked up on it easily. One improvement would to be give an assignment in another
OOP language.
3
CSE 213 Undergraduate Course Assessment Fall 2010
This was another strong objective outcome. This is another relatively easy concept to teach to the students
and many of them picked up on it easily. One improvement would to be give an assignment in another
OOP language.
This objective was created before the lecturer was hired. This seems like a very specific objective and
therefore there is very little data to support this. Perhaps this objective could be changed to “Collections
and operations to collections”.
One suggestion would be to have a Java primer at the beginning of the semester (5 weeks) where students
can learn the basics of Java syntax, compilation, and use. This would be small Java exercises which
would help with the course and program outcomes. After the initial period, then the course can get into
Object-oriented design and concepts.
4
CS 221: Computer System Organization
Assessment for Undergraduates in Fall 2010
Course Outcomes
Assessment Methodology:
! The assessment for students in this class consisted of four parts: quizzes (10%),
homework (20%), projects (10%), midterm exam (30%), and final exam (30%).
! Quizzes and homework covered all the aspects of course outcomes.
! Projects involved the 2-7 of course outcomes.
! Midterm exam and final exam were used to test all the aspects of course outcomes.
! The formula in table 1 was used to evaluate a student’s performance in this class.
! The formula in table 2 was used to evaluate the course outcomes.
! Only undergraduate student data was used for this analysis.
1
Table 2. Formula for measuring course outcomes in the final exam
Assessment
for course Formula
outcomes
N
Outcome 100 #
N # $ (full mark for question j in the outcome evalution # w )
evaluation j
j
Performance Metric
Based on the answers to the questions in the final test, it was felt that a 50 percent score
implied that the basic concept had been grasped, a score of 75% or more indicated a
superior performance, a score less than 35 percent implied that the basic concepts had not
been learned with a 35 to 50 percent score implying a marginal state:
Results
The following numeric assessment scores were obtained through the process outlined
above:
2
Scores for Program Outcomes:
This course affects two program outcomes. We will deal with each in turn by
substituting a numeric value for performance (1, 2, 3, and 4 for unsatisfactory, marginal,
satisfactory, and excellent respectively) and computing the average.
Outcome Overall
3 PL/Sys/Arch 3.2
Program outcome 7: Awareness of the legal, ethical, and societal impact of developments
in the field of computer science -- Course outcome 7 relates to this program outcome.
Outcome Overall
7 Ethics 4
According to the assessment in last two years, proposed strategies have been employed to
improve the outcome of this course. The course objective has been updated, so that the
new objective reflects the current content taught in the class. More fundamental
knowledge was covered to fulfill the prerequisites of subsequent courses including,
Operating Systems, Computer Architecture, and Compiler Writing. From the outcomes,
the refined objective clarified the content specifically, and the detailed objective guided
the teaching better. With another strategy, more effort has been focus on previous weak
objective, fundamental knowledge associated with Operating Systems and Computer
Architecture is intensified, and different variations and extensions of architecture were
compared. This strategy successfully enhanced the outcome of previous weak topic.
Some new strategies are proposed below to address new issues in this semester.
Remedial Actions:
! Outcome 4 was Marginal. This can be improved by spending more lecture time
and integrating more homework assignments covering the subject areas in the
outcome. Additionally, some advanced topics about of these subject areas are also
covered by the course of Computer Architecture. The approaches addressing the
problems include: first, this course objective could be adjusted to reduce the
overlap between Computer System Organization and Computer Architecture by
briefly introducing advanced topics of these areas in this course and, second, the
3
course schedule could be adjusted to allow more lecture time for these subject
areas.
! Outcomes 1 could be improved by strengthening the analysis and computational
expertise in the next teaching by affording several more small exercises in
homework and quiz.
4
New Mexico Tech
Department of Computer Science and Engineering
Course Assessment Report
CSE 222: Systems Programming
Spring 2010
Instructor: Jun Zheng
Course Outcomes
Assessment Methodology:
Each course outcome was tied to one or more questions in the homework,
quizzes, project, midterm exams and final exam.
A formula was used to compute a normalized weighted sum from the scores for
those questions.
A table containing one numeric score per student per course outcome was
computed.
The table was aggregated along course outcomes to obtain a numeric score per
outcome averaged over the whole class.
For example, the numeric score for Course Outcome 1 for student S1 was obtained by
(1) taking S1's score on the HW1 and dividing it by the maximum possible points on
HW1, i.e. normalized to [0, 1];
(2) doing a similar computation on S1's scores on the other parts that corresponds to
Course Outcome1;
(3) adding up the values obtained in (1) and (1) after multiplying them by their
percentage in the final score respectively, for example, the percentage for HW1 is
6.25%;
(4) diving the value obtained in (3) by the sum of percentages of all the parts related
to Course Outcome 1;
(5) multiplying the value obtained in (4) with 100;
(6) repeating the above for each student;
(7) averaging these numbers over the whole class to get a numeric assessment score
for Outcome 1 averaged over the whole class.
1
(8) the same process was applied for each outcome.
(Notation: HW1 represents the normalized score on Homework 1; QZ1 represents the
normalized score on Quiz 1; P1 represents the normalized score on Project 1; ME1Q1
represents the normalized score on question 1 of midterm exam 1; ME2Q1 represents the
normalized score on question 1 of midterm exam 2; FEQ1 represents the normalized
score on question 1 of final exam.)
Performance Metric
Based on the answers to the questions, it was felt that a 60 percent score implied that the
basic concept had been grasped, a score of 80% or more indicated a superior
performance, a score less than 40 percent implied that the basic concepts had not been
learned with a 40 to 60 percent score implying a marginal state:
2
Results
The following numeric assessment scores were obtained through the process outlined
above:
We will deal with each in turn by substituting a numeric value for performance (1, 2, 3,
and 4 for unsatisfactory, marginal, satisfactory, and excellent respectively) and
computing the average.
This was the first time I taught this course. The followings are the problems I met this
semester and the proposed solutions:
3
1. I used whiteboard for most of the class content which made the pace of the class
slow. I plan to put all course materials in powerpoints to solve the problem
2. I used 8 weeks to cover shell and shell programming which made the time left not
enough to cover other important topics such as signals. I plan to cut this part to 4
weeks.
3. Since most of class content were system calls for the corresponding topics,
students were easy to lose concentration. I plan to prepare more example
programs to explain the system calls. I’ll also prepare some in-class questions so
that students can actively join the class discussion.
4. Another improvement is to give students more small homework assignments to
reinforce the concepts learned in class.
4
Class Assessment: 2010 CSE324: (Principles of Programming Languages)
Class outcomes:
4. The ability to critique and properly utilize languages from each of the above
paradigms in building desired software solutions or the design of a new language.
The methodology that we deployed to calculate the above percentages for the
assessment is as follows:
1) For each of the above listed class outcomes, identify its contributing modules, such as:
individual exam questions, homework, projects, and quizzes that relate to such
outcome. Each of the contributing modules maps to only one class outcome, i.e., no
“one-to-many” mapping of one module to more than one class outcome.
2) For each outcome, compute the students’ obtained semester average for every
contributing module in (1) above, i.e., exam questions, homework, projects, and
quizzes. The obtained average is divided by the corresponding assigned points to said
component to get a percent score. In the process, modules contributing scores to an
outcome will be weighted by different factors based on their importance, e.g.,
True/False exams questions are higher than MCQ questions, followed by the SA
questions.
3) For each outcome, add all of its obtained scores from different contributing modules,
listed in (2) above, after weighting each by a factor that reflects the importance of
each module’s score, to obtain the outcome’s final semester percentage score. For
example, quizzes and semester projects have higher factors than other components.
Assessment Result:
Modules/Class Outcome co1 co2 co3 co4
SEMESTER % 74% 80% 74% 86%
Performance Metrics:
The scale that has been used to assess each of the class outcomes is as follows:
Outcome 3: The 74% is still acceptable with a slight improvement over last year
(+3%). In addition to the quizzes, homework, projects, reports, and
exams I have continued to challenge the students every class with
some extra point questions in their quizzes and exams about the
different language paradigms. I continue to notice that students stay
more alert in the class and raise many useful discussions (some are
even challenging) and asked many useful follow up questions. I
intend to keep doing that in future classes to further improve the
score.
A common point to all of the above actions to enhance and better achieve the
class outcomes is to keep posting the class lecture notes on the class website. In
addition, continue updating the notes as the class progresses and notify the
students of any update on the class website. Hence students have continued access
to the notes. Students are showing more progress and many appreciated the note-
posting for easy access all the time. I also intend to seek monthly anonymous
student views on the covered subjects, to report on my teaching, the covered
topics, and any other points of their concerns. Those evaluations will be very
helpful to adjust the class teaching, based upon input from other department
faculty who practiced such a process.
1) First program outcome “the ability to design, implement, and test small software
programs, as well as large programming projects” is directly affected by the third
course outcomes.
In class, we analyze and critique languages like Ada with its “exception” handling
mechanism and concurrent tasking facility, relating to critical military applications
where such features are very useful. Moreover, we explore the powerful lisp and
Prolog capabilities, mapping them to AI and “expert systems” implementation,
respectively.
Course Outcomes
Assessment Methodology:
• Each course outcome was tied to one or more questions in the midterm and final
exam, or to one of the lab assignments. A formula was used to compute a
normalized weighted sum from the scores for those questions.
• A table containing one numeric score per student per course outcome was
computed. The table was aggregated along course outcomes to obtain a numeric
score per outcome averaged over the whole class.
• Only undergraduate student data was used for this analysis.
(Notation: MidtermQ3 represents the score on question Q3 on the midterm exam divided
by the maximum possible points on that question. Lab2 includes all parts of the lab
assignment: code reading, design, and implementation.)
1
Performance Metric
Based on the answers to the questions, it was felt that a 50 percent score implied that the
basic concept had been grasped, a score of 75% or more indicated a superior
performance, a score less than 35 percent implied that the basic concepts had not been
learned with a 35 to 50 percent score implying a marginal state:
Results
The following numeric assessment scores were obtained through the process outlined
above:
Program outcomes:
1b. the ability to design, implement, and test large programming projects involves the
course outcome 5.
3. knowledge of the concepts and techniques of operating systems and OS-level
programming involves course outcomes 1, 2, 3, and 4.
6. the capacity to work as part of a team involves the course outcome 5.
2
Program Course Class Average Performance Overall
Outcome Outcome
1b 5 83.5 4 4
Comments:
• In the labs, students chose their group mates based on their interests. This
approach is fine. However, imbalance in the performance existed among groups.
Qualification test may be given to evaluate students’ programming skills and the
results can be used to choose students for different groups.
• It was Venkata Jitendra Tumma’s first time to be the TA of this course. He was
not acquainted with Linux kernel programming. There was not enough time to
train him before labs began. I gave him detailed slides about the background and
assignment requirements of each lab. But still there were some questions asked by
the students that he could not answer. Training of the TA is necessary in the
future.
• The system administrators upgraded the operating systems in the lab once after
the first lab assignment. It made the 2.6.18 kernel not runnable on UML. Students
switched to the latest 2.6.32 kernel. However, the new kernel codes were not well
documented. Students spent much time in reading and understanding the kernel
codes. A stable lab environment is important and helpful.
• Four lab assignments were given. It took about three to four lab sessions to
complete each of them. It would be better to break the big problems into several
smaller ones and have each lab assignment contain two to three milestones for
completion, so that the students may not feel the lab assignments are too difficult
or challenging.
3
5/26/2011 Dongwan Shin
- Formula used:
100*(0.2*MidQ1 + 0.4*MidQ2 +
0.2*MidQ3 + 0.2FinalQ1)
2. Ability to elicit requirements from - Questions 4, 5, 6, 7 of Midterm 2. The class average was 64.2. This item is marginally met.
clients and specify them exam, Questions 2, 3, 4 of Final
Exam, and Quizzes 1, 2
- Formula used:
100*(0.1*MidQ4 + 0.1*MidQ5 +
0.1*MidQ6 + 0.1*MidQ7+0.15*
FinalQ2 + 0.15*FinalQ3 +
0.2*FinalQ4 + 0.05*Quiz1 +
0.05*Quiz2)
3. Ability to perform detailed design - Question 8 of Midterm, Questions 1. The class average was 43. This item is unsatisfactorily met.
through the architectural design, interface 5, 6, 7, 8 of Final Exam, and Quiz
design, object design, and the use of 3, 4
design patterns
- Formula used:
100*(0.1*MidtermQ8 +
0.2*FinalQ5 + 0.2*FinalQ6 +
0.2*FinalQ7 + 0.2*FinalQ8 +
0.05*Quiz3 + 0.05*Quiz4)
4. Ability to perform implementation - Question 9 of Final exam, and 3. The class average was 79.1. This item is satisfactorily met.
from design specification Class project implementation
specification
- Formula used:
5/26/2011 Dongwan Shin
100*(0.5*FinalQ9 +
0.5*ProjectImp)
5. Ability to plan and apply various - Question 10 of Final exam, Class 3. The class average was 67.2. This item is satisfactorily met.
testing techniques project final report
- Formula used:
100*(0.5*FinalQ10 +
0.5*ProjectReport)
6. Practical experience of using UML - Class project requirement 4. The class average was 84.5. This item is excellently met.
and OOP specification, Class project design
specification, and Class project
implementation specification
- Formula used:
100*(0.4*ProjectReq +
0.3*ProjectDes + 0.3*ProjectImp)
7. Ability to work in group to produce a - Class project peer review, Class 4. The class average was 90.4. This item is excellently met.
large-scale software product project presentation
- Formula used:
100*(0.8*ProjectPeer +
0.2*ProjectPresentation)
The course learning outcome 3 (O3), 4 (O4), and 5 (O5) contribute to the program outcome P1b(Large Prog). Hence, the numeric score
for assessment against the program outcomes is as follows:
The course learning outcome 2 (O2) and 6 (O6) contribute to the program outcome P5(Tech Comm). Hence, the numeric score for
assessment against the program outcomes is as follows:
5/26/2011 Dongwan Shin
P5 = (3 + 4)/2 = 3.5
The course learning outcome 7 (O7) contributes to the program outcome P6 (Group). Hence, the numeric score for assessment against the
program outcomes is as follows:
P6 = 4(O7) = 4
Conclusion
- Compared to the last assessment on this course offered in Spring 2009, this year’s assessment result shows that Outcome 3 has become worse
from “marginal” to “unsatisfactory.” This is because students still had problems in understanding object design and some design patterns.
Though we had lab sessions on the topics of object design and design patterns along with the introduction to IBM Software Architecture, it
seems that student did not benefit from the tool much, since it required a steep learning curve. This could be improved by introducing the tool
earlier in the semester and spending more labs using the tool.
- Output 2 and Output 3 could be improved by requiring students to take CSE 213 (Introduction to OOP).
1
Assessment Methodology
• Each course outcome was tied to one or more questions in the midter/final exam, individual class project, or homework
• A formula was used to compute a normalized weighted sum from the scores for those questions, class project evaluation, and homework
• A table containing one numeric score per student per outcome was computed
• The table was aggregated along outcomes to obtain a numeric score per outcome averaged over the whole class
• Only CSE major undergraduate student data was used for this analysis
2
Assessment Results
• Considering the difficulties in exam questions, homework, and final project, the average numeric score per outcome is translated as follows:
• A 80 percent score and above implies that the outcome is excellently met
• A score in between 65 and 80 implies that the outcome is satisfactorily met
• A score in between 50 and 65 implies that the outcome is marginally met
• A score less than 50 percent score implies that the outcome is unsatisfactorily met
• The final score for each course outcome ranges over 1-4, where 1 – unsatisfactory, 2 – marginal, 3 – satisfactory, and 4 – Excellent.
New Mexico Tech
Department of Computer Science and Engineering
Course Assessment Report
CSE 331: Computer Architecture
Fall 2010
Course Outcomes
1. Ability to design, implement, and test small software programs;
2. Knowledge of principles of modern computer system design;
3. Knowledge of principles of instruction set design;
4. Knowledge of basic pipelining techniques;
5. Knowledge of instruction level parallelism;
6. Knowledge of memory hierarchy design: Cache, virtual memory, virtual
machines;
7. Ability to evaluate the effectiveness of different computer architectures for
specific uses;
Assessment Methodology:
• Each course outcome was tied to one or more questions in the homework,
quizzes, midterm exams and final exam.
• A formula was used to compute a normalized weighted sum from the scores for
those questions.
• A table containing one numeric score per student per course outcome was
computed.
• The table was aggregated along course outcomes to obtain a numeric score per
outcome averaged over the whole class.
For example, the numeric score for Course Outcome 1 for student S1 was obtained by
(1) taking S1's score on the HW1 and dividing it by the maximum possible points on
HW1, i.e. normalized to [0, 1];
(2) doing a similar computation on S1's scores on the other parts that corresponds to
Course Outcome1;
(3) adding up the values obtained in (1) and (1) after multiplying them by their
percentage in the final score respectively, for example, the percentage for HW1 is
6.25%;
(4) diving the value obtained in (3) by the sum of percentages of all the parts related
to Course Outcome 1;
(5) multiplying the value obtained in (4) with 100;
(6) repeating the above for each student;
1
(7) averaging these numbers over the whole class to get a numeric assessment score
for Outcome 1 averaged over the whole class.
(8) the same process was applied for each outcome.
Performance Metric
Based on the answers to the questions, it was felt that a 60 percent score implied that the
basic concept had been grasped, a score of 80% or more indicated a superior
performance, a score less than 40 percent implied that the basic concepts had not been
learned with a 40 to 60 percent score implying a marginal state:
2
Results
The following numeric assessment scores were obtained through the process outlined
above:
We will deal with each in turn by substituting a numeric value for performance (1, 2, 3,
and 4 for unsatisfactory, marginal, satisfactory, and excellent respectively) and computing
the average.
3
5 77.83 3
6 66.12 3
7 76.63 3
The following table shows the comparison of three years’ scores for the outcomes.
In Fall 2010, course projects were assigned to students to help students understand certain
course materials (for outcomes 3, 5 and 6). The results are not significant but
encouraging. For example, an improvement on the score for outcome 6 (memory
hierarchy) can be observed which is consistently the weakest one among all outcomes.
Therefore, I plan to improve the quality of the course projects next time so that students
can gain hand-on experience on the course materials.
Another improvement in Fall 2010 was the quality of powerpoint slides which was
successful. There were few complain about the slides. I’ll continue to improve it.
4
A significant failure of Fall 2010 semester was the new textbook adopted. I will move
back the one adopted in Fall 2008 and Fall 2009 but improve the quality of home
assignments since this is the primary reason I switched the textbook.
5
CSE 342: Formal Language and Automata Theory
Assessment of Spring 2010 Class
(Instructor: Andrew Sung)
There are 20 undergraduate and one graduate student in the class. This assessment is based on
data of undergraduate students only.
Course Outcomes
The course covered finite automata, regular languages, context-free languages, and
pushdown automata, roughly corresponding to the major contents of chapters 1-7 of the
original Hopcroft, Motwani, Ullman textbook (first edition, 1979).
Other topics, including Turing machines, recursively enumerable languages, Chomsky
hierarchy of languages, decidability, were briefly discussed at appropriate times.
Selected advanced topics on theory of computation and computational complexity were
occasionally discussed mainly as motivating material.
Assessment Methodology
• The first exam covers finite automata and regular languages, corresponding to Outcome 1.
• The second exam is a take-home exam and covers topics corresponding to Outcome 1 and
Outcome 2.
• The final exam primarily covers context-free languages and pushdown automata,
corresponding to Outcome 2.
• No questions of relevance to Outcome 3 were given in any of the exams.
The average scores of the three exams of the class are interpreted according to the
performance metric established by the CS faculty for evaluating outcomes, as follows
1
Calculations based on the class’s normalized average score of the three exams results in the
assessment that the achievement of course outcomes 1 and 2 have been satisfactory, and
outcome 3 unsatisfactory.
This entire course is devoted to one program outcome (knowledge of the theoretical concepts
of computing). So, substituting a numeric value for the entries in the performance column in
the above table (1, 2, 3, and 4 for unsatisfactory, marginal, satisfactory, and excellent
respectively) and computing the average, giving a score of
(3+3+1)÷3 = 3.3 for that program outcome.
Comments:
The majority of the CSE342 class in spring 2010 completed the new (revised through
consultation of CSE and math faculties) Math221 course, which is the prerequisite for
CSE342 and taught by the math department.
The TA was responsible and effective.
2
CS 344: Design & Analysis of Algorithms
Assessment for Undergraduate CS Majors in
Fall 2010
Author: Subhasish Mazumdar
• Each course outcome was tied to one or more questions in the Midterm and final
exam.
• A formula was used to compute a normalized weighted sum from the scores for
those questions.
• A table containing one numeric score per student per course outcome was com-
puted.
• The table was aggregated along course outcomes to obtain a numeric score per out-
come averaged over the whole class.
• Only data for undergraduate CS majors was used for this analysis.
1
(Notation: Midterm:Q3 represents the score on question Q3 on the Midterm exam divided
by the maximum possible points on that question.)
Example: Suppose we want to compute the numeric score for Course Learning Outcome
2 and the row for that outcome in the above table is of the form
2 100*(0.4*MIDTERM:Q7or8 + 0.4*FINAL:Q3 + 0.2*FINAL:Q4)
1. We get a number for a student S1 by
(a) taking S1 ’s score on the seventh/eighth question on the Midterm and dividing
it by the maximum possible points on that question;
(b) doing a similar computation on S1 ’s score on the third and fourth questions on
the final exam;
(c) adding up the three values obtained in the last two steps after multiplying
them by 40, 40, and 20 respectively;
2. We repeat the above for each undergraduate student; and
3. The numbers are averaged over all the students in the whole class.
Performance Metric
The grading was done to ensure that a 40 percent score implied that the basic con-
cept had been grasped, a score of 60 percent or more indicated a superior performance, a
score less than 30 percent implied that the basic concepts had not been learned, with a 30
to 40 percent score implying a marginal state:
Class average Performance threshold
< 30% Unsatisfactory
30 to < 40% Marginal
40 to < 60% Satisfactory
60 to 100% Excellent
Results
The following numeric assessment scores were obtained through the process out-
lined above:
2
Average over all outcomes = 2.2
This entire course is devoted to one program outcome (knowledge of the theoretical con-
cepts of computing). So, we simply substitute a numeric value for the entries in the perfor-
mance column in the above table (1, 2, 3, and 4 for unsatisfactory, marginal, satisfactory,
and excellent respectively) and compute the average over all outcomes as shown above.
The numeric contribution of this course towards the program outcome knowledge of
the theoretical concepts of computing is the average over all outcomes as shown in the line
below the above table.
Remarks
The performance scores have decreased compared to last year. Over the last three
years, the numeric performance scores are as follows.
Outcome Number 2008 2009 2010
1 3 2 1
2 3 3 2
3 3 3 3
4 3 2 2
5 2 3 3
Clearly, there is an overall reduction in performance this year. First, unlike other
years, there were much fewer students with a strong background in slightly rigorous
logical reasoning.
Second, a sizeable number of students did not do well in the midterm but decided
to keep going without making significant adjustments.
Third, at the end of the semester, there were a large number of plagiarism cases that
indicated that quite a few students were relying on others. Such reliance leads to poor
examination scores.
• A surprising observation is the downward trend in Outcome 1 over the last three
semesters. Since the related material is basically an application of material covered
in CSE 122, I recommend more strenuous application of complexity analysis and
recurrence equations in that course.
• Outcome 4, which is about abstract logical concepts, was marginal last year as well.
This is a long-standing problem that this course shares with CSE 342 Formal Lan-
guage and Automata. The only way to correct this problem is improve the prepara-
tion in Discrete Mathematics as well as basic concepts of logic and rigorous proofs.
I recommend that an appropriate Mathematics course be made compulsory for all
CS sophomores.
• The degradation in Outcome 2 is basically a symptom of an overall poor showing
this year as discussed above and is connected with Outcome 1.
3
• Outcomes 3 and 5 remain at a satisfactory level. This is good news because it shows
that the attempt to making graph algorithms intuitive is succeeding and that the
additional time and energy spent in covering NP completeness, the most difficult
topic in the course, is paying off.
Remedial Actions
• Next time I teach this course, I shall spend time reviewing basic ideas of proofs,
material not covered in the Discrete Mathematics course; as well as simple complexity
analysis, material that is covered in CSE 122.
• As before, students refrain from asking questions owing to the fear of losing prestige
before their peers. In 2009, when I repeatedly asked for anonymous offline questions
and answered them it was greatly appreciated. This semester, I reduced the number
of times I did so; next semester, I shall attempt to increase that number.
• The plagiarism cases are disturbing. It raises the question of the effectiveness of
homework problems. One possibility is to add an one-hour tutorial session to this
course.
4
Assessment for CS 351 Undergraduates in Spring 2010
Course Outcomes
Assessment Methodology:
This is broken into 2 parts, obtaining course outcomes, and mapping the course outcomes
to the department's program outcomes. The steps in assessing the course outcomes are as
follows:
1. Each course outcome was tied to one or more questions in the midterm and final
exam.
2. Formulas were used to compute a normalized class score for each question.
3. A table of weights per question per course outcome was computed.
4. The table was aggregated along outcomes to obtain a column of question
significance weights for each course outcome.
5. The normalized class score for each midterm and final exam question was
combined with the weights for each course outcome, generating a weighted
contribution from each question to each course outcome.
6. The weighted contributions from each test question to each course outcome were
summed for each course outcome to get an aggregated assessment measure of the
achievement of the course outcome.
7. The one graduate student in the class was excluded in this analysis.
For the midterm and final exams the scores for each student were known and some
questions were worth more than one point, so each student's score on each question was
used to generate the contributions to each outcome.
For both the midterm and the final the importance of each question to each course
outcome was determined and stored as a number in [0, 10] where 0 is “no significance”.
Each question was allowed to have a nonzero significance to only one course outcome.
These “significance” numbers were then converted to normalized weights for each
outcome by adding them up for all questions for the given course outcome, then dividing
1
each stored number by the total for that course outcome. The sum of each course
outcome's weights is then 1.000.
For the midterm exam the normalized numeric score of each question was computed by
dividing the number correct (out of 33 students) on each question by the number of
students, giving a normalized class score in the range [0.0, 1.0] for each question.
For the final exam the numeric normalized score of each question was computed by
averaging the students' grades on each question and dividing them by the maximum
score possible on that question, giving a normalized class score in the range [0.0, 1.0]
for each question.
For the numeric contribution of each question to each course outcome the normalized
class score for that question was multiplied by the course outcome weight for that
question/course outcome combination.
The score for each course outcome was then computed by adding up the contributions of
each question to that course outcome.
Some of the test questions could have been applied to multiple course outcomes, so a
single course outcome was chosen for each question from among those the question
supported. Generally the choice was by which course outcome seemed most relevant,
but in some cases it was because that course outcome had few other questions that
applied. The first column is the question number from the midterm or final, the second
is the course outcome to which it applies, the third is the relevance measure (0 to 10) of
the question to the course outcome, the fourth is the weight computed from the
relevance's for all questions for the single course outcome, the fifth is the actual
composite grade of the class on that question, normalized to 0.0 to 1.0, and the last is the
contribution of that grade to the course outcome to which the question applies. The result
is the following table:
2
Question Outcome Relevance Weight Grade Contribution
Midterm - 1 1 4 4/14 3.90/6.00 0.1857
Midterm - 2 2 3 3/6 4.50/10.00 0.2250
Midterm - 3 3 8 8/25 0.80/3.00 0.3200
Midterm - 4 4 2 2/15 1.90/2.00 0.1267
Midterm - 5 4 2 2/15 1.20/2.00 0.0800
Midterm - 6 4 6 6/15 3.50/4.00 0.3500
Midterm - 7 2 3 3/6 1.00/2.00 0.2500
Midterm - 8 5 8 8/28 3.00/4.00 0.2143
Final-1 5 10 10/28 6.60/7.00 0.3367
Final-2 3 7 7/25 3.00/3.00 0.2800
Final-3 5 10 10/28 4.70/5.00 0.3357
Final-4 1 10 10/14 4.50/5.00 0.6429
Final-5 4 5 5/15 5.70/6.00 0.3167
Final-6 3 10 10/25 5.80/7.00 0.3314
Tallying these results by outcome gives the final scores for each course outcome.
The following numeric assessment scores for the course outcomes were obtained:
Based on the answers to the questions, it was felt that a score of less than 50 percent
implied that the tested concepts had not been learned, a 50 to 80 percent score implying
marginal capability, an 80 to 95 percent score implied satisfactory capability, and 95 or
above meant there was an excellent grasp of the subject matter. Thus, all the outcomes
were acceptable or excellent. Outcomes 6 and 7 were never measured via test questions
on the midterm or final.
Throughout the course quizzes and assignments were used to evaluate grasp of the basics
and the course material was adjusted to ensure additional exposure to advanced concepts
as the students were having few difficulties with the basics. The tests focused on subject
matter beyond the basics, and as such the above scores reflect their grasp of the advanced
3
concepts. The approach was successful, but the assessment, being based on the testing of
primarily the advanced concepts, especially in area 2, was not indicative of the student’s
levels of achievement.
The course outcomes have been related to the stated program outcomes via the mapping
given in the following table. A program “relevance weight” was generated for each
related outcome pair. These weights and the mappings have been listed in the following
table:
Discussion:
The lower levels seem to be indicative of the redirection of the test questions to less
emphasis on the basics and more emphasis on the advanced knowledge. This can be
4
remedied by targeting the basics more fully in the tests in the future.
5
CSE/IT353: (Data and Computer Communication)
Class Assessment (FALL2010)
Class outcomes:
1. The basic concepts, definitions, and mechanisms, at different hardware and
software levels, which constitute the underlying operations of data and computer
communication systems.
2. Design and applications of the local, metropolitan, and wide area networks (LAN,
MAN, and WAN, respectively); including the utilization of the most advanced
wired and wireless technologies and protocols.
The methodology that we deployed to calculate the percentages for the assessment is
as follows:
1) For each of the above listed class outcomes, identify the corresponding students obtained
grades in exams, home-works, projects, and quizzes.
2) For exams only, divide the exam sections (T-F, MCQ, SA) of questions into categories
with each question relating to one of the class’s outcomes, with minimal (or none overlap
between outcomes. Compute the average percentage for each category over different exam
sections, and weight each average by a scale that reflects its contribution importance to the
outcome evaluation (assessment). We also weight True/False question more than MCQ
because of the latter degree of difficulty, and some complains about MCQ questions
ambiguity.
3) For each obtained percentage, in 1 above, that pertains to certain outcome, weight with a
corresponding factor (depending on how many different grades, or partial grades, are
contributing to one outcome). For example, if an exam is the only contributor, we take the
exam percentage as the final assessment percentage. Exams are also factored with a degree
of difficulty since it is an open for the highest obtained student grade to set the rest of the
class normalized scores. Another case, if all different contributing subjects (quizzes,
HWs, and Projects) are there, then we reweight them differently (every semester) based on
the judgment of the instructor, with respect to the amount of students efforts and the
degree of difficulty per each subject.
Assessment Result: (The percentages represent the students weighted average scores over
exams, quizzes, and home-works related to each of the above class outcomes)
The scale that has been used to assess each of the class outcomes is as follows:
Conclusion
Possible further improvements:
As in last year of assessments, the most troublesome is the first outcome about the
basic definitions of subject, yet score moved from 69% last year to 77 % this year.
The remaining outcomes remain comfortably satisfactory, Yet, the internetworking
subject remained 78% and needs more attention next year. I will revisit and revise the
T/F and MCQ questions based on some mid-semester students feedback/survey, to
make them more precise/clear, also I will continue to show a sample exam to students
and get their feedback, which was very helpful for most of them last year.
This course affects the third and the fifth program outcomes. We will deal with each
by substituting a numeric value for performance (1, 2, 3, and 4 for unsatisfactory,
marginal, satisfactory, and excellent, respectively) and computing the average.
To strengthen the student’s knowledge of the physical layer of the data and computer
communication subject, some basic architectural hardware and operating system
software components are to be covered in the class. Hence, there is a direct mapping
between the first class’s outcome and the third program’s outcome, specifically to
its system and architecture components.
Clearly the second and third class outcomes contribute directly into the fourth
program’s outcome, exposing the students to one or more computer science
application areas such as the design of LAN, MAN, and WAN wired/wireless/optical
links’ varieties, both the underlying topologies and protocols.
1
(Notation: Midterm:Q3 represents the score on question Q3 on the Midterm exam divided
by the maximum possible points on that question. The end-semester database design and
implementation project appears as the last question on the finals.)
Example: Suppose we want to compute the numeric score for Course Learning Outcome
2 and the row for that outcome in the above table is of the form
2 100*(0.4*MIDTERM:Q7or8 + 0.4*FINAL:Q3 + 0.2*FINAL:Q4)
(a) taking S1 ’s score on the seventh/eighth question on the Midterm and dividing
it by the maximum possible points on that question;
(b) doing a similar computation on S1 ’s score on the third and fourth questions on
the final exam;
(c) adding up the three values obtained in the last two steps after multiplying
them by 40, 40, and 20 respectively;
3. The numbers are averaged over all the students in the whole class.
Performance Metric
The grading was done to ensure that a 70 percent score implied that the basic con-
cept had been grasped, a score of 80 percent or more indicated a superior performance, a
score less than 50 percent implied that the basic concepts had not been learned, with a 50
to 70 percent score implying a marginal state:
Results
The following numeric assessment scores were obtained through the process out-
lined above:
2
Course Class Performance
Outcome Average Score
1 93.4 4
2 72.9 3
3 84.7 4
4 93.8 4
5 79.2 3
6 80.8 4
Average over all outcomes = 3.7
This course is optional for the B.S. in Computer Science program. Hence, it is not used for
program outcome computation.
Remarks
Let us compare the numeric scores with those of the last two years. Clearly, there is
a marked improvement in 2010.
3
• As in the last years, it is gratifying that the first and last course learning outcomes
have excellent scores on average. This means that the overall goal of the course is
met: students can take problem requirements, design a database, and implement
queries and programs on it, i.e., come up with a database solution.
• It is also gratifying to observe that Course Learning Outcome 4 has moved from
marginal to excellent. As indicated last year, my hypothesis was that there was
not enough turn-around time for the homework on Physical Organization; conse-
quently, this year, I tested that hypothesis with a more focused homework on this
topic. It appears to have worked. Greater testing is necessary.
• Course Learning Outcome 5 has also moved from unsatisfactory to satisfactory. This
outcome is based on an introduction to concurrency and recovery, topics covered in
the graduate database course. Like In past years, it was informally observed that
students who did answer that question, performed well. That led me to suspect
that a majority of students chose to not spend much time preparing for this topic
knowing that there will be just one question on it in the finals and it would carry a
relatively small number of points. I shared my suspicion with the class. That may
have led to a change in the students’ approach to preparation.
• As in the last two years, it was observed that students had difficulty in grasping
the theory of the relational model because logic was involved. It is hoped that a
revamped Discrete Mathematics course will mitigate this problem.
Remedial Actions
• Next year, I shall continue to encourage students not to neglect the material under-
lying Outcome 5. Even though it is really an introduction to concepts they will see
in the graduate database course, I have found that students enjoy the material.
• I shall attempt to ensure sufficient turn-around time for the Physical Organization
homework.
4
CS 382: Legal, Ethical and Social Issues in Information Technology
Assessment for Computer Science Undergraduates in Fall 2010
Course Outcomes
Assessment Methodology:
The course assignments are not currently separated by course outcome. All assignments
fully integrate all course outcomes.
Performance Metric
Based on the answers to the questions, it was felt that a 61 percent score implied that the
basic concepts had been grasped, a score of 78 percent to 89 percent indicated a
satisfactory performance, and a score of 90 percent to 100 percent indicated an excellent
understanding of both the concepts and rationale underlying those concepts. A score of
less than 60 percent implied that the basic concepts had not been learned.
1
Results
The following numeric assessment scores were obtained through the process outlined
above:
As a point of interest, the same scores were computed after removing “0”s which
represented assignments not turned in. This yields the following table:
Therefore, excluding the “0”grades for work which was not submitted by the students
resulted in an overall performance of excellent.
As a result of the number of assignments not turned in and failure to attend classes, in a
fall class of 13 students, 2 students, or 15.0 percent of the class, will be required to repeat
the course. This is a significant reduction from the averages of 30% of students retaking
the class in prior semesters. See actions taken as a result of previous assessments below.
Program outcome: the awareness of the ethical and societal impact of developments in
the field of computer science involves all course outcomes.
Thus, the overall impact on program outcome is excellent. Note that with the calculation
including assignments not submitted, which includes two students who will have to
2
retake the course to complete their degree, the overall impact on the program outcome is
Satisfactory.
Remedial Actions:
• Evaluate developing new assignments for Fall 2011to assess Course Outcomes
1, 2, and 3 separately. The new assignments will need to have a primary focus
on each discrete course outcome, but will not exclude the other interrelated
outcomes.
• The students appear to lack an overall understanding of the significance of the
issues addressed in this course. Other department courses could expand the
components addressing the value of learning the legal and ethical issues in
computer science to prepare the students for this course.
• Alumni surveys could include questions regarding the professional value of the
legal and ethical issues addressed in this course for use in discussing the
purpose of class each semester.
3
5/26/2011 Dongwan Shin
Class Assessment Report: CSE 389 Internet and Web Programming in Fall 2010
- Formula used:
10*(0.5*MidQ1 + 0.5*FinalQ1)
4. Hand-on experience on various design - Question 9 and 10 in the final 3. The class average was 75.6, and this item is
1
Assessment Methodology
• Each course outcome was tied to one or more questions in the midterm exam, comprehensive final exam, individual class project, or homework
• A formula was used to compute a normalized weighted sum from the scores for those questions, class project evaluation, and homework
• A table containing one numeric score per student per outcome was computed
• The table was aggregated along outcomes to obtain a numeric score per outcome averaged over the whole class
• Only CSE major undergraduate student data was used for this analysis
2
Assessment Results
• Considering the difficulties in exam questions, homework, and final project, the average numeric score per outcome is translated as follows:
• A 80 percent score and above implies that the outcome is excellently met
• A score in between 65 and 80 implies that the outcome is satisfactorily met
• A score in between 50 and 65 implies that the outcome is marginally met
• A score less than 50 percent score implies that the outcome is unsatisfactorily met
• The final score for each course outcome ranges over 1-4, where 1 – unsatisfactory, 2 – marginal, 3 – satisfactory, and 4 – Excellent.
5/26/2011 Dongwan Shin
- Formula used:
10*(0.5* FinalQ9 +
0.5*FinalQ10)
5. Ability to develop state-of-the-art web- - Class project 4. The class average was 93.5, and this item is excellently
based applications met.
- Formula used:
(1*Project)
Conclusion
- Compared to the last assessment on this course, this year’s assessment result shows that Outcome 3 has been improved from “unsatisfactory”
to “satisfactory.” More class sessions on basic concepts on TCP/IP-based network protocols and architecture seems to have helped improve
this outcome.
- Outcome 2 could be improved by having more real-life examples and homework on how and where XML is used, since some students seem to
have had difficulty in understanding and using XML for solving given problems.
Outcomes: CS423 - Compiler Writing
Course Outcomes:
Students should be able to
Assessment Methodology:
• Each course outcome was tied to one or more questions on an exam and/or portions of a project.
• A formula was used to compute a normalized weighted sum from the scores for those questions.
• The numeric score per outcome was averaged over the whole class.
• Data for all students who completed the course was used for this analysis.
The formulas used were:
Course Formula
Outcome
1 100% *Average(Quizzes, Average(FQ1, FQ3, FQ5), Average(MQ1-MQ7))
2 100%*Average(FQ2)
3 75% * Average(FQ6,FQ7,FQ8,FQ9,FQ10) + 25% * Average(L9,L10)
4 100%
*Average(Average(MQ8,MQ9,FQ4,LFQasm,LFQc++),Average(L1,L4,L6,L7,L11))
5 75% * Average(D1, D2, D3,D4,D5) + 25%*Average(L2, L5, L8)
6 100% * Average(I1,I2, I3)
7 100% *Average(D6,D7,PN)
8 50% * (I1eval + I2eval + I3eval + Deval) + 50% * Instructor_Evaluation
(Notation: MQ3 represents the score on question Q3 on the midterm exam divided by the maximum
possible points on that question; FQ indicates a final exam question; LFQasm indicates the assembly
programming lab final exam; LFQc++ indicates the C++ programming lab final exam; D indicates a
design project component; L indicates a lab; I indicates an implementation component), PN indicated the
project notebook, and I# or D followed by “eval” means the evaluations of students performance as a
group member. The overlap in use of measures for outcomes is addressed in the Actions Section
following Program Outcomes. Note that outcome 4 relates to what is implemented in the large programs
in outcome 6, which explains why those formulas are the same.
Performance Metric
The performance metric to analyze outcomes for this course is as follows:
Excellent corresponds to greater than 90 percent.
Satisfactory corresponds to 70 to 90 percent.
Marginal corresponds to 60 to 70 percent.
1
Unsatisfactory corresponds to less than 60 percent.
Results:
The outlined process results in numeric assessment scores:
Thus, all outcomes were satisfactory. The performance measures translate as follows: Excellent is 4,
Satisfactory is 3, Marginal is 2, and Unsatisfactory is 1.
Note that in 2011, the results for class average includes two students who did not successfully complete
the course. These results also include zeros for one student on the design project and final exam. This
student had a family emergency and received an incomplete in the course. If these were removed from the
class, the results would improve and some would undoubtedly be excellent.
Actions:
The following is being considered:
• This year the writing and presentation was graded separately. This year the separately graded
design discussion and presentation were used for technical communication evaluation.
Program Outcomes:
Three course outcomes (CO) for this course are also program outcomes (PO). These use the same
equations as shown in course outcomes above.
Thus, the three course outcomes that provide a measure of program outcomes are all satisfactory.
Actions:
The following is being considered:
• There was a decline in student performance in a number of categories. It was discovered late in
the semester that the teaching assistant was not providing the quality or level of assistance to
students that has been provided in the past and is expected for this course. This was not reported
to the instructor until the last project was submitted, so intervention was not possible.
• For next year: The TA from last year will not be used again for this course and the new TA will
be more closely supervised.
2
423: Based on course evaluations:
• The most significant criticism by students was that the teaching assistant was not
helpful/knowledgeable and the instructor was not available all of the time. Although there were
other comments, e.g., about problems with the infrastructure, these were actually created by the
TA delivering an incorrect (broken) version of the infrastructure. A better TA is being recruited to
help address this problem. Reducing teaching load next fall will help address instructor
availability.
3
5/26/2011 Dongwan Shin
Class Assessment Report: CSE 441 Cryptography and Applications in Fall 2010
- Formula used:
10*(0.5*FinalQ1 + 0.5*FinalQ2)
2. Knowledge of mathematical concepts - Questions 3, 5, and 6 in the final 3. The class average was 72.4. This item is satisfactorily
behind modern cryptography exam met.
- Formula used:
10*(0.3*FinalQ3 + 0.4*FinalQ5
+ 0.3*FinalQ6)
3. Knowledge of cryptographic protocols, - Questions 7 and 8 in the final 2. The class average was 64.5, and this item is marginally
techniques, and applications exam met.
- Formula used:
1
Assessment Methodology
• Each course outcome was tied to one or more questions in the comprehensive final exam, individual class project, or homework
• A formula was used to compute a normalized weighted sum from the scores for those questions, class project evaluation, and homework
• A table containing one numeric score per student per outcome was computed
• The table was aggregated along outcomes to obtain a numeric score per outcome averaged over the whole class
• Only CSE major undergraduate student data was used for this analysis
2
Assessment Results
• Considering the difficulties in exam questions, homework, and final project, the average numeric score per outcome is translated as follows:
• A 80 percent score and above implies that the outcome is excellently met
• A score in between 65 and 80 implies that the outcome is satisfactorily met
• A score in between 50 and 65 implies that the outcome is marginally met
• A score less than 50 percent score implies that the outcome is unsatisfactorily met
• The final score for each course outcome ranges over 1-4, where 1 – unsatisfactory, 2 – marginal, 3 – satisfactory, and 4 – Excellent.
5/26/2011 Dongwan Shin
10*(0.5*FinalQ7 + 0.5*FinalQ8)
4. Ability to evaluate cryptographic protocols, - Questions 2, 7, 9, and 10 in the 3. The class average was 68.00, and this item is
techniques, and applications final exam satisfactorily met.
- Formula used:
10*(0.3*FinalQ2 + 0.3*FinalQ7
+ 0.2*FinalQ9 + 0.2*FinalQ10)
5. Ability to design, implement, and test - 1st and 2nd homework 4. The class average was 89.00, and this item is
security applications based on applied excellently met.
cryptography - Formula used:
1*(0.5*HW2 + 0.5*HW4)
6. Ability to work in group to develop - Class project 4. The class average was 89.56, and this item is
cryptographic applications in order to solve excellently met. The class project was group-based, 3
security problems - Formula used: students in a group.
1*( 1*Project)
7. Technical communication skills in written - Class project (including 4. The class average was 89.56, and this item is
and oral form presentation and final report) excellently met. The assessment of this item is based on
two project deliverables: technical reports and in-class
- Formula used: presentation.
1*( 1*Project)
Conclusion
- Compared to the last assessment on this course offered in Fall 2008, this year’s assessment result shows that Outcome 2 has been improved
from “marginally met” to “satisfactorily met.” The previous effort to cover relevant topics in Math 221 seems to have worked for this.
- Outcome 3 could be improved by introducing small hand-on implementation projects covering different cryptographic protocols such as
digital signature.
Course Outcomes: CS476 – Visualization:
Students should be able to
Assessment Methodology:
• Each course outcome was tied to one or more questions on an exam and/or portions of a project.
• A formula was used to compute a normalized weighted sum from the scores for those questions.
• The numeric score per outcome was averaged over the whole class.
• Data for all graded undergraduate students who completed the course was used for this analysis.
Performance Metric
The performance metric to analyze outcomes for this course is as follows:
Excellent corresponds to greater than 90 percent.
Satisfactory corresponds to 70 to 90 percent.
Marginal corresponds to 60 to 70 percent.
Unsatisfactory corresponds to less than 60 percent.
Results:
The outlined process results in numeric assessment scores (Note this does not include results for auditors
or a student who has not completed an incomplete for the course.):
Actions:
Given that there is only one student who enrolled in and completed the undergraduate course for a grade,
there is not enough data for reasonable assessment or to use in feedback for course improvement.
CSE 113: Introduction to Programming
Course Assessment for Undergraduates in Spring 2011
Outcome Metrics
Midterm Section 2: Lexical Scoping
Final Section 0: Basic Concepts
1
Midterm Section 3: Binary Numbers
Midterm Section 4: Boolean Logic
Project 0
2
Final Section 4: Program Design
Midterm Section 5: Short Answer
3
Final Section 3: Short Answer
Project 0
4
Lab Final
Labs1
5
Lab Final
Each of the metrics is of approximately equal importance to each corresponding learning objective, so the
numeric scores were calculated as the normalized average of the scores. Normalization of scores involved
dividing each score by the maximum number of possible points. Some lab exercises and projects included
extra credit opportunities. For this assessment, grades were limited to a maximum of 100% to make
average scores representative of the class as a whole. Finally, a score was computed for each outcome by
averaging each student’s individual score for the outcome.
1The grade for labs is computed by taking the average of all ten lab assignments throughout the semester.
CSE 113 Undergraduate Course Assessment Spring 2011
Only the performance of Computer Science majors was evaluated. There were 7 Computer Science
majors enrolled in the course. Out of the seven, 4 were repeating the course.
Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to
assess overall performance. An average of less than 50% signified a lack of comprehension. An average
between 50% and 65% showed a marginal understanding. A passing grade between a D and a B
demonstrated satisfactory ability, but not perfection. A grade better than an 85% (B) showed excellent
comprehension and effort.
Percentage Performance
0 – 54 Unsatisfactory
55 – 64 Marginal
65 – 85 Satisfactory
85 – 100 Excellent
Number of Students
Outcome Unsatisfactory Marginal Satisfactory Excellent Score Performance
1 1 0 3 3 78.76 Satisfactory
2 1 1 3 2 70.07 Satisfactory
3 1 0 5 1 74.83 Satisfactory
4 3 2 0 2 57.21 Marginal
5 2 2 1 2 66.15 Satisfactory
Of the seven students, one stopped showing up close to the end of the semester. Another student stopped
turning in labs towards the end of the semester. Since most of students were repeating the class,
enthusiasm was lackluster among the set.
The performance measurement for objective 1 shows that Computer Science (CS) majors gained a
Satisfactory comprehension of basic concepts. These concepts were taught in class to emphasize the fact
that one must learn certain aspects of how computers work in order to learn how the C language works.
The first five weeks of the semester were dedicated to these basic concepts.
2
CSE 113 Undergraduate Course Assessment Spring 2011
One student did not take one of the exams used for the calculation of this metric. This is the unsatisfactory
outcome. This was the highest scoring outcome.
Students were given many design problems and were told to develop solutions that were solvable in C.
Of the seven students in the data set, one did not attempt the metrics used in scoring this Outcome. That is
where the Unsatisfactory rating comes from. Some students, despite emphasizing the importance, did not
take the project (which was used in scoring this outcome) seriously.
The performance measurement for Outcome 3 was Satisfactory. Of the seven students in the data set, one
did not attempt the metrics used in scoring this Outcome. Since most of the students were repeating the
class, these students may have helped to contribute to the higher Outcome due to the fact that they may
finally get the basic elements in programming.
The result for outcome 4 was Marginal. This was the lowest scoring outcome. This data is more skewed
than the rest because towards the end of the semester which this data was taken from, results show a lack
of motivation. There were quite a few students who did not turn in portions of the project. The schedule
provided ample time for the completion of the assignment, however students are usually busy with other
classes as well. During the development times for the projects, both the instructor and class TAs help
extended office hours.
As with the rest of the outcomes, one student completely stopped coming to class towards the end of the
semester and another student did not turn in many of the labs that were calculated in this course outcome.
Many students did poorly on the lab final. This could have been due to the fact that this was the first time
students had to accomplish a small programming assignment without any help from the instructor or TA.
The result for outcome 5 was Satisfactory. Throughout the semester, students had ample time to get
familiar with the programming environment and tools. The TA and instructor were at all the lab sessions
to troubleshoot any problems. This score is a bit misleading in the fact that some students stopped turning
in labs at the end of the semester. This was possibly due to their busy schedules or lack of interest during
the busy time.
3
CSE 113 Undergraduate Course Assessment Spring 2011
During the class lectures, the students were exposed to many topics in Imperative design and C
Programming. This includes top-down design and many of the C language concepts. These topics were
reinforced with homework assignments or exam questions.
In response to the previous course assessment (Fall 2010), 4 new small homework assignments were
developed to target Outcome 3. The outcome score went from 67.09 to 74.83. Though this does show
some improvement, these results need much more data. Variance may come from the fact that 4 out of the
7 in this assessment's set (Spring 2011) were repeats from last semester and there are drastically less
numbers in the data set. A comparison with Fall 2011 is suggested.
The Fall 2010 assessment suggested to improve Outcome 4 by introducing more practice with basic
programming elements. One more lab was created this semester to assist the students in being able to
implement linked lists. This lab was entitled “pointers to structures”. While the outcome for this semester
was only marginal, a comparison with Fall 2011 is suggested due to the fact that 2/7 of the set did not
accomplished the required assignments in order to have an adequate assessment.
Instructor believes that labs should incorporate more testing assignments rather than just one lab final at
the end. This would allow students to realize what they are lacking instead of just asking for help as a
normal lab assignment. Perhaps a quiz at the beginning of each lab as is done in many of the chemistry or
physics lab courses.
Another point to contemplate is in programming project design. More lectures towards these
methodologies can be developed in order to solidify ground with these concepts. In addition, a larger
homework or mini-project can be given during the first half of the semester focusing on imperative design
may be given because not many of the advanced programming concepts have to be known in order for
students to learn this.
4
Class Assessment: Spr2011 CSE122-- Algorithms and Data Structures
Class outcomes:
At the end of the course, a student should be able to:
1. Understand fundamental data structures and their benefits;
4. Design and implement simple software applications using appropriate algorithms and
data structures.
The methodology that we deployed to calculate the above percentages for the
assessment is as follows:
1) For each of the above listed class outcomes, identify its contributing modules, such as:
individual exam questions, homework, projects, and quizzes that relate to such
outcome. Each of the contributing modules maps to only one class outcome, i.e., all
are orthogonal with respect to the class outcomes. All class outcomes are covered
throughout the instruction modules used in the assessment process.
2) For each outcome, compute the students’ obtained semester average for every
contributing module in (1) above, i.e., exam questions, homework, projects, and
quizzes. The obtained average is divided by the corresponding assigned points to said
component to get a percent score. In the process, modules contributing scores to an
outcome will be weighted by different factors based on their importance.
3) For each outcome, add all of its obtained scores from different contributing modules,
listed in (2) above, after weighting each by a factor that reflects the importance of
each module’s score, to obtain the outcome’s final semester percentage score. For
example, quizzes and semester projects have higher factors than other components.
Assessment Result:
Modules/Class Outcome co1 co2 co3 co4
SEMESTER % 86% 86% 79% 91%
Performance Metrics:
The scale that has been used to assess each of the class outcomes is as follows:
Outcome 1: I think by going slow over basic data types and abstraction at the
begging tremendously in concreting the students’ understanding of
such foundation knowledge. Examples also aided in the process and
being humble and down to the level of the students letting them
know that it is OK to be confused but it is very smart if you do not
hide and keep asking questions. Breaking their fear of the subject is
very important.
Outcome 3: It was the hardest subject; lots of actual coding examples and clear
definitions of different asymptotic run time complexity functions
(function of program input), distinguishing between them. Moreover,
the medium input analysis where the choice of pointer versus array
implementation (time vs. space) based on the target application,
without bias.
A common point to all of the above actions to enhance and better achieve the
class outcomes is to keep posting the class lecture notes on the class website. In
addition, continue updating the notes as the class progresses and notify the
students of any update on the class website. Hence students have continued access
to the notes. Students are showing more progress and many appreciated the note-
posting for easy access all the time. I will continue to seek mid-semester
anonymous student views on the covered subjects, to report on my teaching, the
covered topics, and any other points of their concerns. Those evaluations proved
to be very helpful to adjust the class teaching, based upon input from other
department faculty who practiced such a process. It was very helpful, at the
beginning of every class topic to explain the three major questions: WHY?
WHAT? HOW?
Program outcome 1a: “the ability to design, implement, and test small software programs”
involves course outcomes 1, 2, 3 and 4.
Course Outcomes
Assessment Methodology:
Each course outcome was tied to one or more questions in the homework,
quizzes, project, midterm exams and final exam.
A formula was used to compute a normalized weighted sum from the scores for
those questions.
A table containing one numeric score per student per course outcome was
computed.
The table was aggregated along course outcomes to obtain a numeric score per
outcome averaged over the whole class.
For example, the numeric score for Course Outcome 1 for student S1 was obtained by
(1) taking S1's score on the HW1 and dividing it by the maximum possible points on
HW1, i.e. normalized to [0, 1];
(2) doing a similar computation on S1's scores on the other parts that corresponds to
Course Outcome1;
(3) adding up the values obtained in (1) and (1) after multiplying them by their
percentage in the final score respectively, for example, the percentage for HW1 is
6.25%;
(4) diving the value obtained in (3) by the sum of percentages of all the parts related
to Course Outcome 1;
(5) multiplying the value obtained in (4) with 100;
(6) repeating the above for each student;
1
(7) averaging these numbers over the whole class to get a numeric assessment score
for Outcome 1 averaged over the whole class.
(8) the same process was applied for each outcome.
Course Formula
Outcome
1 100*(HW1*5% + P1*5% + QZ1*1.875%+ QZ2*1.875% +
MEQ1*3.6% + MEQ2*1.2% + MEQ3*2% + MEQ4*2%)/(5% + 5% +
1.875% + 1.875 + 3.6% + 1.2% + 2% + 2%)
2 100*(HW2*5% + QZ3*1.875% + MEQ5*2%)/(5% + 1.875% + + 2%)
3 100*(P2*5% + QZ4*1.875% + QZ5*1.875% + QZ6*1.875% +
MEQ1*1.6% + MEQ2*2% + MEQ3*2% + MEQ6*2% +
FEQ3*3%)/(5% + 1.875% + 1.875% + 1.875% + 1.6% + 2% + 2% +
2% + 2.7%)
4 100*( P3*5% + P4*5% +QZ7*1.875% + QZ8*1.875% + QZ9*1.875%
+ FEQ1*1.35% + FEQ2*2.25% + FEQ3*3.15% + FEQ4*3% +
FEQ5*3% + FEQ6*3%)/( 5% + 5% + 1.875% + 1.875% + 1.875% +
1.35% + 2.25% + 3.15% + 3% + 3% + 3%)
5 100*(FEQ1*0.45%+FEQ2*0.45%+FEQ3*0.9%)/(0.45%+0.45%+0.9%)
6 100*(P4*5.8% + FEQ1*2.25% + FEQ2*2.25% + FEQ3*4.5% +
FEQ7*3% )/(5.8% + 2.25% + 2.25% + 4.5% + 3%)
7 100*(P2*5% + P3*5% + P4*5% + P5*5% + HW2*5% + MEQ5*2% +
MEQ6*2% + FEQ7*3%)/(5% + 5% + 5% + 5% + 2% + 2% + 3%)
(Notation: HW1 represents the normalized score on Homework 1; QZ1 represents the
normalized score on Quiz 1; P1 represents the normalized score on Project 1; MEQ1
represents the normalized score on question 1 of midterm exam; FEQ1 represents the
normalized score on question 1 of final exam.)
Performance Metric
Based on the answers to the questions, it was felt that a 60 percent score implied that the
basic concept had been grasped, a score of 80 percent or more indicated a superior
performance, a score less than 40 percent implied that the basic concepts had not been
learned with a 40 to 60 percent score implying a marginal state:
2
Results
The following numeric assessment scores were obtained through the process outlined
above:
We will deal with each in turn by substituting a numeric value for performance (1, 2, 3,
and 4 for unsatisfactory, marginal, satisfactory, and excellent respectively) and
computing the average.
3
This was the second time I taught this course. The course content was modified compared
with that of Spring 2010. Signals were introduced in Spring 2011 and the content of shell
and shell programming was reduced as planned in the report of Spring 2010. I plan to
further revise the content in the future.
In Spring 2011, all the course materials were presented in powerpoints as proposed in the
report of Spring 2010. The feedback from students was positive. I’ll continue to improve
the quality of the slides.
I also prepared more in-class questions for students. The interaction with students in the
class was positive. I’ll keep the strategy for future teaching.
An existing problem is how to teach the system calls to make the learning more
interesting. I plan to explore problem based learning strategy in the future. Driven by
challenging problems, students can work actively on how to solve real problems with the
system calls.
4
Class Assessment: 2011 CSE324-- Principles of Programming Languages
Class outcomes:
4. The ability to critique and properly utilize languages from each of the above
paradigms in building desired software solutions or the design of a new language.
The methodology that we deployed to calculate the above percentages for the
assessment is as follows:
1) For each of the above listed class outcomes, identify its contributing modules, such as:
individual exam questions, homework, projects, and quizzes that relate to such
outcome. Each of the contributing modules maps to only one class outcome, i.e., all
are orthogonal with respect to the class outcomes. All class outcomes are covered
throughout the instruction modules used in the assessment process.
2) For each outcome, compute the students’ obtained semester average for every
contributing module in (1) above, i.e., exam questions, homework, projects, and
quizzes. The obtained average is divided by the corresponding assigned points to said
component to get a percent score. In the process, modules contributing scores to an
outcome will be weighted by different factors based on their importance, e.g.,
True/False exams questions are higher than MCQ questions, followed by the SA
questions.
3) For each outcome, add all of its obtained scores from different contributing modules,
listed in (2) above, after weighting each by a factor that reflects the importance of
each module’s score, to obtain the outcome’s final semester percentage score. For
example, quizzes and semester projects have higher factors than other components.
Assessment Result:
Modules/Class Outcome co1 co2 co3 co4
SEMESTER % 0.85% 0.86% 0.93% 0.87%
Performance Metrics:
The scale that has been used to assess each of the class outcomes is as follows:
Outcome 3: It has the largest score improvement from 74% to 93% (+19%) In
addition to the quizzes, homework, projects, reports, and exams I
have continued to challenge the students every class with some extra
point questions in their quizzes and exams about the different
language paradigms. I continue to notice that students stay more alert
in the class and raise many useful discussions (some are even
challenging) and asked many useful follow up questions. I intend to
keep doing that in future classes to further maintain such score.
1) First program outcome “the ability to design, implement, and test small software
programs, as well as large programming projects” is directly affected by the third
course outcomes.
In class, we analyze and critique languages like Ada with its “exception” handling
mechanism and concurrent tasking facility, relating to critical military applications
where such features are very useful. Moreover, we explore the powerful lisp and
Prolog capabilities, mapping them to AI and “expert systems” implementation,
respectively.
Course
Outcomes
At the end of the course, a student should be able to:
1. Understand the functions, structures, and history of operating systems;
2. Master process management concepts including scheduling, synchronization, and
deadlocks;
3. Grasp concepts of memory management including virtual memory;
4. Master issues related to storage systems, file system interface and
implementation, and disk management;
5. Become acquainted with Linux kernel programming.
Assessment
Methodology:
• Each course outcome was tied to one or more questions in the midterm and final exam,
or to one of the lab assignments. A formula was used to compute a normalized
weighted sum of the scores for those questions.
• A table containing one numeric score per student per course outcome was computed.
The table was aggregated along course outcomes to obtain a numeric score per outcome
averaged over the whole class.
(Notation: MTT3 represents the average score on question 3 on the take-home midterm exam.
MTI3 represents the average score on question Q3 on the in-class midterm. Lab2 includes all
parts of the lab assignment: code reading, design, and implementation.)
Performance
Metric
Based on the answers to the questions, it was felt that a 50 percent score implied that the
basic concept had been grasped, a score of 75% or more indicated a superior performance, a
score less than 35 percent implied that the basic concepts had not been learned with a 35 to
50 percent score implying a marginal state:
Class Average Performance
< 35% Unsatisfactory
35 – 49% Marginal
50 – 74% Satisfactory
75 – 100% Excellent
Results
The following numeric assessment scores were obtained through the process outlined above:
Outcome Class Average Performance
1 81.8
Excellent
2 83.5 Excellent
3 87.8 Excellent
4 89.6 Excellent
5 85.3 Excellent
Scores for extra-credit problems were treated as if they were just another question on the
particular topic area and did not artificially inflate the numbers in the above table.
Program outcomes:
1b. The ability to design, implement, and test large programming projects involves the course
outcome 5.
3. Knowledge of the concepts and techniques of operating systems and OS-level
programming involves course outcomes 1, 2, 3, and 4.
6. The capacity to work as part of a team involves the course outcome 5.
Comments:
• For all lab assignments, students chose their group mates based on their interests. This
approach is fine. However, imbalance in the performance existed among groups. For the
most part, groups had a variety of specialties, which helped implementation. One group
split and the team members had to find other groups to participate with.
• It was Jitendra Tummula’s second time as the TA of this course. He was still not
acquainted with Linux kernel programming from the previous instance of this course.
There was not enough time to train him before labs began. He had detailed slides about
the background and assignment requirements of each lab. But there were still many
questions asked by the students that he could not answer. His preparation for the labs was
insufficient to run the labs as they existed. The lab assignments were rewritten during the
semester with more of a focus on implementation of various principles instead of being
LINUX modification. About half of the new lab assignments were developed and used
during this course. The rest of the lab assignments are in the process of being written by a
new TA and the instructor.
• The system administrators upgraded the operating systems in the lab once after the first
lab assignment. It was difficult for the students to use the systems in the lab until they
became stable. A stable lab environment is important and helpful.
• Six lab assignments were given. It took between two to four lab sessions to complete
each of them. There were milestones for nearly every lab and in-class presentations
associated with steps along the way to completion. Peer and instructor feedback was
given for all lab assignments at each milestone, to ensure students didn’t get too far down
the implementation road with major design issues.
5/26/2011 Dongwan Shin
- Formula used:
100*(0.2*MidQ1 + 0.2*MidQ2 +
0.2*MidQ3 + 0.2FinalQ1 +
0.2HW1)
2. Ability to elicit requirements from - Questions 4, 5, 6, 7 of Midterm 3. The class average was 72.6. This item is satisfactorily met.
clients and specify them exam, Questions 2, 3, 4 of Final
Exam, and Quizzes 1, 2
- Formula used:
100*(0.1*MidQ4 + 0.1*MidQ5 +
0.1*MidQ6 + 0.1*MidQ7+0.15*
FinalQ2 + 0.15*FinalQ3 +
0.2*FinalQ4 + 0.05*Quiz1 +
0.05*Quiz2)
3. Ability to perform detailed design - Question 8 of Midterm, Questions 2. The class average was 54.9. This item is marginally met.
through the architectural design, interface 5, 6, 7, 8 of Final Exam, and Quiz
design, object design, and the use of 3, 4
design patterns
- Formula used:
100*(0.1*MidtermQ8 +
0.2*FinalQ5 + 0.2*FinalQ6 +
0.2*FinalQ7 + 0.2*FinalQ8 +
0.05*Quiz3 + 0.05*Quiz4)
4. Ability to perform implementation - Question 9 of Final exam, and 3. The class average was 73.9. This item is satisfactorily met.
from design specification Class project implementation
specification
5/26/2011 Dongwan Shin
- Formula used:
100*(0.5*FinalQ9 +
0.5*ProjectImp)
5. Ability to plan and apply various - Question 10 of Final exam, Class 3. The class average was 72.2. This item is satisfactorily met.
testing techniques project final report
- Formula used:
100*(0.5*FinalQ10 +
0.5*ProjectReport)
6. Practical experience of using UML - Class project requirement 4. The class average was 87.8. This item is excellently met.
and OOP specification, Class project design
specification, and Class project
implementation specification
- Formula used:
100*(0.4*ProjectReq +
0.3*ProjectDes + 0.3*ProjectImp)
7. Ability to work in group to produce a - Class project peer review, Class 4. The class average was 93.0. This item is excellently met.
large-scale software product project presentation
- Formula used:
100*(0.8*ProjectPeer +
0.2*ProjectPresentation)
The course learning outcome 3 (O3), 4 (O4), and 5 (O5) contribute to the program outcome P1b(Large Prog). Hence, the numeric score
for assessment against the program outcomes is as follows:
The course learning outcome 2 (O2) and 6 (O6) contribute to the program outcome P5(Tech Comm). Hence, the numeric score for
assessment against the program outcomes is as follows:
P5 = (3 + 4)/2 = 3.5
The course learning outcome 7 (O7) contributes to the program outcome P6 (Group). Hence, the numeric score for assessment against the
program outcomes is as follows:
P6 = 4(O7) = 4
Conclusion
- Compared to the last assessment on this course offered in Spring 2010, this year’s assessment result shows that Outcomes 2 and 3 have
improved from “unsatisfactory” to “marginal” and “marginal” to “satisfactory,” respectively. In order to address the steep learning curve of
IBM Software Architect, I have not only introduced the tool earlier and spending more labs with it, but I have also required students to use the
tool for their homework and final project. This seems to have worked to improve the outcomes.
- CSE 213 is a prerequisite for this course, which introduces object-oriented programming to students before they take this course. This could
improve the outcome 3.
1
Assessment Methodology
• Each course outcome was tied to one or more questions in the midterm/final exam, individual class project, or homework
• A formula was used to compute a normalized weighted sum from the scores for those questions, class project evaluation, and homework
• A table containing one numeric score per student per outcome was computed
• The table was aggregated along outcomes to obtain a numeric score per outcome averaged over the whole class
• Only CSE major undergraduate student data was used for this analysis
2
Assessment Results
• Considering the difficulties in exam questions, homework, and final project, the average numeric score per outcome is translated as follows:
• A 80 percent score and above implies that the outcome is excellently met
• A score in between 65 and 80 implies that the outcome is satisfactorily met
• A score in between 50 and 65 implies that the outcome is marginally met
• A score less than 50 percent score implies that the outcome is unsatisfactorily met
• The final score for each course outcome ranges over 1-4, where 1 – unsatisfactory, 2 – marginal, 3 – satisfactory, and 4 – Excellent.
CSE 342: Formal Language and Automata Theory
Assessment of Spring 2011 Class
(Andrew Sung, Instructor)
There are 17 undergraduate and one graduate student in the class. This assessment is based on
data of undergraduate students only.
Course Outcomes
1. have understood the basic theorems on finite automata, regular languages, and regular
expressions, and be able to solve basic problems about them
2. have understood the basic theorems on context-free languages and pushdown automata,
and be able to solve basic problems about them
3. have been exposed to some other classes of languages and automata, e.g. context-
sensitive and recursively enumerable languages, and Turing machines
The course covered finite automata, regular languages, context-free languages, and
pushdown automata, roughly corresponding to the major contents of chapters 1-7 of the
original Hopcroft, Motwani, Ullman textbook (first edition, 1979).
Other topics, including Turing machines, recursively enumerable languages, Chomsky
hierarchy of languages, decidability, were very briefly discussed at introductory level (i.e.
no theorems, formal proofs, etc.), due to time limitations.
Some advanced topics on theory of computation and computational complexity were
occasionally discussed by the instructor at appropriate times as motivating material.
Assessment Methodology
• The first exam covers finite automata and regular languages, corresponding to Outcome 1.
• The second exam is a take-home exam and covers topics corresponding to Outcome 1 and
Outcome 2.
• The final exam primarily covers context-free languages and pushdown automata,
corresponding to Outcome 2.
• No questions of relevance to Outcome 3 were given in any of the exams.
The average scores of the three exams of the class are interpreted according to the
performance metric established by the CS faculty for evaluating outcomes, as follows
1
75 – 100% Excellent
Calculations based on the class’s normalized average score of the three exams results in the
assessment that the achievement of course outcomes 1 and 2 have been satisfactory, and
outcome 3 unsatisfactory.
This entire course is devoted to one program outcome (knowledge of the theoretical concepts
of computing). So, substituting a numeric value for the entries in the performance column in
the above table (1, 2, 3, and 4 for unsatisfactory, marginal, satisfactory, and excellent
respectively) and computing the average, giving a score of
(3+3+1)÷3 = 3.3 for that program outcome.
Comments:
For next year’s class, if this instructor is to teach the class again, he plans to spend more
time on topics pertain to Outcome 3 by reducing the amount of coverage on topics pertain
to Outcome 2.
Several students had complained about the TA’s grading.
Class attendance was the best in a number of years.
2
CSE 389: Information Protection
Course Assessment for Undergraduates in Spring 2011
Outcome Metrics
Midterm Q4
Midterm Q5
1
Final Q11
Final Q13
Project 1
2
Project 2
Midterm Q10
3 Midterm Q12
Final Q4
Final Q2
4 Final Q12
Final Q14
Midterm Q7
Midterm Q8
5
Final Q1
Final Q10
Each of the metrics is of approximately equal importance to each corresponding learning objective, so the
numeric scores were calculated as the normalized average of the scores. Normalization of scores involved
dividing each score by the maximum number of possible points. Some lab exercises and projects included
extra credit opportunities. For this assessment, grades were limited to a maximum of 100% to make
average scores representative of the class as a whole. Finally, a score was computed for each outcome by
averaging each student’s individual score for the outcome.
Only the performance of Computer Science majors was evaluated. There were 18 Computer Science
majors enrolled in the course.
CSE 113 Undergraduate Course Assessment Spring 2011
Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to
assess overall performance. An average of less than 50% signified a lack of comprehension. An average
between 50% and 65% showed a marginal understanding. A passing grade between a D and a B
demonstrated satisfactory ability, but not perfection. A grade better than an 85% (B) showed excellent
comprehension and effort.
Percentage Performance
0 – 54 Unsatisfactory
55 – 64 Marginal
65 – 85 Satisfactory
85 – 100 Excellent
Number of Students
Outcome Unsatisfactory Marginal Satisfactory Excellent Score Performance
1 2 0 4 12 86.64 Excellent
2 1 0 4 13 87.2 Excellent
3 2 1 6 9 83.12 Satisfactory
4 1 1 6 10 81.57 Satisfactory
5 2 3 6 7 74.97 Satisfactory
Of the 18 students, one stopped showing up in the middle of the semester. As the table shows, this
student's performance was not indicative of the rest of the class.
The performance measurement for objective 1 was Excellent. This was the second highest scoring
outcome. Outcome was accomplished through daily discussions of current event topics as well as lecture
material. A total of 4-5 weeks was spent on the material used to score this outcome.
Of the 18 students, one student did not take the Final exam, which was used in scoring this metric.
2
CSE 113 Undergraduate Course Assessment Spring 2011
Risk assessment was a major portion of the course. Students had two main projects where they had to
draft a contract with a client, perform a vulnerability assessment of the client's assets, and write up a
report on findings as well as remediation recommendations.
This was done in groups, but individuals' grades were determined based on peer evaluations in addition to
the group's grade.
Of the 18 students, one student did not participate in the group projects, which was used in the scoring of
this metric. This outcome was still the highest scoring Outcome despite this fact.
Firewalls and access control was covered jointly due to the firewall's use of access control lists. Using
firewalls to teach access control is a good practical way of showing that different types of packets require
different types of access from no access to no restrictions.
Of the 18 students, one student did not take the Final exam, which was used in scoring this metric.
The result for outcome 5 was Satisfactory. This was the lowest scoring outcome. A total of 2-3 weeks was
spent in class to cover this Outcome. The topics covered by this Outcome were taught at the end of the
semester.
Of the 18 students, one student did not take the Final exam, which was used in scoring this metric.
Networking knowledge was not as strong as the instructor would have liked. A networking primer was
taught at the beginning of class to catch students up to the OSI model and other networking basics.
Course can be improved if students had more mastery of this material.
3
CSE 113 Undergraduate Course Assessment Spring 2011
Vulnerability assessments were key to this course. There were two given throughout the semester. One
network setup was constructed by the instructor and every test by the students were in a controlled
environment. Since this was the first attempt at many students to break into a system, many made
mistakes that would have been unacceptable in a real vulnerability test. This emphasizes the fact that this
course should do two projects, one in a controlled environment and the second could possibly be a real
world test. This will help Outcome 2.
4
Course Outcomes: CS451 – Parallel Processing:
Students should be able to
No formal assessment has yet been done for this course. Future assessment will follow the
outline below
Assessment Methodology:
• Each course outcome was tied to one or more questions on an exam and/or portions of a project.
• A formula was used to compute a normalized weighted sum from the scores for those questions.
• The numeric score per outcome was averaged over the whole class.
• Data for all students who completed the course was used for this analysis.
(Notation: MQ3 represents the score on question 3 on the midterm exam divided by the maximum
possible points on that question; H indicates a homework assignment; E indicates an exploratory project;
F3 indicates Final exam question 3; and Pc1 indicates component 1 of the major semester project.
Evaluations are graded student evaluations of other student projects.) “average” just indicates to average
the scores for listed components with equal weighting.
Performance Metric
The performance metric to analyze outcomes for this course is as follows:
Excellent corresponds to greater than 90 percent.
Satisfactory corresponds to 70 to 90 percent.
Marginal corresponds to 60 to 70 percent.
Unsatisfactory corresponds to less than 60 percent.
Results:
The outlined process results in numeric assessment scores:
Actions:
The previous assessment of the course specifically recommended improvement on OpenMP and MPI
programming. Both of these have gone from class average of low to mid 70s to mid to high 80s. All
performance is acceptable and does not need specific improvement.
The one thing that could still use some improvement is in further separation of criteria for assessment.
Class Assessment: Spr2011 CSE452-- Introduction to Sensor Networks
Class outcomes:
At the end of the course, a student should be able to:
1. Understand the basic concepts, keywords/terminologies, constraints, and challenges
of the sensor networks technology, covering different types of hardware and
software levels of individual sensors' motes; and the sensor networks significant
importance in vast number of civil and military applications.
2. Understand the special sensor networks MAC and network protocols, with the key
factors that distinguishes them from traditional networks protocols.
3. Have hands-on experience on the design, modeling, and analysis of some real live
sensor networks' applications, to be deployed in their actual terrains, with the
addition of some neural networks models that adds "smartness" to the designed
sensor network models.
The methodology that we deployed to calculate the above percentages for the
assessment is as follows:
1) For each of the above listed class outcomes, identify its contributing modules, such as:
individual lab experiment, homework, and projects that relate to such outcome. Each
of the contributing modules maps to only one class outcome, i.e., all are orthogonal
with respect to the class outcomes. All class outcomes are covered throughout the
instruction modules used in the assessment process.
2) For each outcome, compute the students’ obtained semester average for every
contributing module in (1) above, i.e., labs, homeworks, and projects,. The obtained
average is divided by the corresponding assigned points to said component to get a
percent score. In the process, modules contributing scores to an outcome will be
weighted by different factors based on their importance.
3) For each outcome, add all of its obtained scores from different contributing modules,
listed in (2) above, after weighting each by a factor that reflects the importance of
each module’s score, to obtain the outcome’s final semester percentage score. For
example, final semester field projects have higher factors than other components.
Assessment Result:
Modules/Class Outcome co1 co2 co3
SEMESTER % 94% 90% 91%
Performance Metrics:
The scale that has been used to assess each of the class outcomes is as follows:
Outcome 1: Students done very well in understanding the existing sensors in the
lab (MICA-Z&IRIS) and their capabilities and functions. They were
also able to install all required software drivers and the mail Tiny-OS
underlying operating system.
Outcome 2: In all of my lectures I have full attendance and attention of the class
students. The students responses to my question on the sensor
networks fundamentals reflected their understandings of the different
Sensor networks' special MAC and Networks protocols (e.g., S-MAC
and IEEE 802.15.4-ZigBee, routless routing protocols of MASNET).
Outcome 3: The students really enjoyed that part of the class when they went to
their first field trip to setup an experimental "smart asynchronous
event detector sensor network unit" in its actual deployment terrain,
at New Mexico Tech. After collecting all sensed data for about half a
day, the students compiled and analyzed such sensed filed data, and
with the help of some neural modeling software they were able to
evaluate their system. Though the system performance was not that
great, yet the students were able to justify and have a plan for how to
make it work better in the future!