Outcomes: CSE 101 – Introduction to Computer Science and Information Technology

Course Outcomes:
Students should be able to
1. understand, at the introductory level, computer architecture, operating systems and networks,
automata and models of computation, programming languages and compilers, algorithms,
databases, security and information assurance, artificial intelligence, graphics, and
social/ethical issues of computing.
(Assessment through homework and exams)

Assessment Methodology:
• The course outcome was tied to all of the questions on the exams, the homework, and the professional brief writing assignments.
• The student’s class average was used as the overall metric.
• The numeric score per outcome was averaged over the whole class.
• Data for all students enrolled in the course were used for this analysis.
Performance Metric
The performance metric to analyze outcomes for this course is as follows:
Excellent corresponds to greater than 90 percent.
Satisfactory corresponds to 70 to 90 percent.
Marginal corresponds to 60 to 70 percent.
Unsatisfactory corresponds to less than 60 percent.
Results:
The outlined process results in numeric assessment scores:

Outcome Class Average Performance


1 73.6 Satisfactory

If the class average is considered only for students who completed the course (submitted assignments and
took exams), then the assessment results are:

Outcome Class Average Performance


1 85.6 Satisfactory

Thus, the outcome was satisfactory. The performance measures translate as follows: Excellent is 4,
Satisfactory is 3, Marginal is 2, and Unsatisfactory is 1.
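For reference, a minimal C sketch of this percentage-to-performance translation is shown below. The thresholds are the ones listed above for this course; the handling of the exact boundary values is an assumption, since the stated ranges meet at their endpoints.

#include <stdio.h>

/* Map a class average (percent) to the performance label and its
 * numeric value (4/3/2/1), using the CSE 101 thresholds above.
 * Boundary handling at exactly 90, 70, and 60 is an assumption. */
static int performance_value(double percent, const char **label)
{
    if (percent > 90.0)  { *label = "Excellent";    return 4; }
    if (percent >= 70.0) { *label = "Satisfactory"; return 3; }
    if (percent >= 60.0) { *label = "Marginal";     return 2; }
    *label = "Unsatisfactory";
    return 1;
}

int main(void)
{
    const char *label;
    int value = performance_value(73.6, &label);  /* class average from the table */
    printf("%.1f%% -> %s (%d)\n", 73.6, label, value);
    return 0;
}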
Actions:
The following actions have been taken or are being considered:
• In 2009, both faculty and student evaluations concluded that the course needed more contact time and more homework for students to gain a better understanding of the material and the discipline. Therefore, the course was changed to a two-credit course with two hours of class time, which was put in place for 2010. There was no significant change in assessment results from this change.
• The alignment of the course with the program outcomes will be considered further when the course is redeveloped for fall 2010. The faculty will discuss whether this course should contribute to program outcomes at an introductory level, e.g., for Theory, Technical Communications, and Ethics. This 2009 consideration was postponed until after the results of the course modifications could be evaluated.

• In 2010, students specifically suggested that they might understand the material better if only one faculty member taught the main course content. Therefore, for 2011, it is being considered whether the primary instructor should teach all of the assessed material and have other faculty come in only to present their research areas.
Program Outcomes:
No course outcomes (CO) for this course are currently mapped to program outcomes (PO).

CSE 113: Introduction to Programming
Course Assessment for Undergraduates in Spring 2010

1 Course Learning Outcomes


After successfully completing this course, a student should be able to:
1. Understand basic concepts, e.g., number systems, Boolean logic, and data representation;
2. Grasp the process of computational problem solving from problem description to top-down
program design based on functions and algorithms;
3. Understand concepts underlying the C language, especially parameter passing, lexical scoping,
and modularity;
4. Implement small programs in C displaying good programming style and thorough documentation.
Students should be able to use appropriate data types (strings, multidimensional arrays, pointers,
structures, and linked lists) and control flow structures (selection, iteration and recursion).
5. Use tools including editors, compilers, debuggers, and versioning systems.

2 Assessment Methodology
For the purpose of objective assessment, numeric scores were calculated to measure each student’s
mastery of each learning outcome. The following table shows the metrics used for assessment. The
metrics include midterm and final exam questions as well as grades on the projects and the final lab
assignment. An average of students’ lab assignment grades was also used to measure their ability to use
the software tools introduced. Homework and quiz grades were not used for assessment because they
were part of the initial topic presentation and were thus not a good measurement of topic mastery.

Outcome  Metrics
1        Midterm Section 3: Binary Numbers
         Midterm Section 4: Boolean Logic
         Final Section 0: Basic Concepts
2        Project 0 Design
         Final Section 4: Program Design
3        Midterm Section 2: Lexical Scoping
         Final Section 1: Pointer Arithmetic
         Final Section 2: Activation Records
4        Project 0
         Project 1
         Lab Final
5        Labs1
         Lab Final

Table 1: Metrics for Performance Assessment

Each of the metrics is of approximately equal importance to each corresponding learning objective, so the numeric scores were calculated as the normalized average of the scores. Normalization of scores involved dividing each score by the maximum number of possible points. Some lab exercises and projects included extra credit opportunities. For this assessment, grades were limited to a maximum of 100% to make average scores representative of the class as a whole. Finally, a score was computed for each outcome by averaging each student’s individual score for the outcome.

1 The grade for labs is computed by taking the average of all twelve lab assignments throughout the semester.
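As a concrete illustration of this normalization, the following C sketch (using hypothetical raw scores and point maxima, not actual course data) divides each score by its maximum, caps extra credit at 100%, and averages the results for one outcome.

#include <stdio.h>

/* Normalize a raw score to [0, 1], capping extra credit at 100%. */
static double normalized(double raw, double max_points)
{
    double frac = raw / max_points;
    return (frac > 1.0) ? 1.0 : frac;
}

int main(void)
{
    /* Hypothetical raw scores for one student on one outcome's metrics;
     * the last metric earned extra credit beyond its 100-point maximum. */
    double raw[] = { 18.0, 25.0, 110.0 };
    double max[] = { 20.0, 30.0, 100.0 };
    int n = sizeof raw / sizeof raw[0];
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += normalized(raw[i], max[i]);
    /* The outcome score is the average of the normalized metric scores. */
    printf("outcome score: %.2f%%\n", 100.0 * sum / n);
    return 0;
}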

Only the performance of Computer Science majors was evaluated. There were eleven Computer Science
majors enrolled in the course.

2.1 Performance Measurement

Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to assess overall performance. An average below 55% signified a lack of comprehension, and an average between 55% and 64% showed a marginal understanding. An average between 65% and 85% (roughly a D to a B) demonstrated satisfactory ability, but not perfection. An average above 85% (B) showed excellent comprehension and effort.

Percentage  Performance
0 – 54      Unsatisfactory
55 – 64     Marginal
65 – 85     Satisfactory
85 – 100    Excellent

Table 2: Numerical Ranges for Performance Measurement

3 Results for Course Outcomes


The following table provides a summary of the results for each course learning outcome, calculated as
described above. The table also provides the number of students who achieved each performance level, in
order to give a better picture of the performance distribution of the class.

(The Unsatisfactory through Excellent columns give the number of students at each level.)
Outcome  Unsatisfactory  Marginal  Satisfactory  Excellent  Average Score  Performance
1 3 4 2 2 57.97 Marginal
2 6 2 3 0 47.46 Unsatisfactory
3 3 3 3 2 62.04 Marginal
4 5 0 4 2 47.95 Unsatisfactory
5 4 0 3 4 57.38 Marginal
Table 3: Performance Measurements for Course Learning Outcomes

One student showed up for class only a handful of times before the midterm and submitted only three assignments. The student did not choose to Withdraw from the class, so his grades are included in the above scores. The student’s lack of participation, however, is not at all indicative of the performance of the class as a whole. If the student’s scores were not considered, the above scores would be 5-6 percentage points higher.


3.1 Outcome 1: Basic Concepts

The performance measurement for objective 1 shows that Computer Science (CS) majors gained a marginal comprehension of basic concepts. However, the scores shown here are not representative of the performance of the class as a whole. Students from Electrical Engineering, Physics, and Mathematics achieved much better performance in this area. A likely explanation is that the freshman CS majors did not have a very strong background in mathematics. A continued effort must be made to introduce mathematics and to provide sufficient homework for the students to master mathematical concepts such as number systems and Boolean logic.

3.2 Outcome 2: Top-Down Problem Solving

The analysis here shows that design skills were the weakest area of this class. The conclusion is consistent
with the observations of the instructor. There were four students who did not even turn in the design
documents for Project 0. The lack of submission could be a combination of motivational issues and a lack
of direction. Nevertheless, the scores for the rest of the students were also low for the design documents.
In the current instructor’s opinion, the course curriculum needs to place more emphasis on project design
principles. See the Remedial Action section for a discussion of some suggestions.

3.3 Outcome 3: C Language Concepts

The performance measurement for outcome 3 was Marginal, but it was the highest score of any of the outcomes. If the score of the student who failed to take the midterm and the final is dropped, then the average becomes 68.24, Satisfactory. The evaluation shows that the class successfully understood concepts underlying the C language. Since Fall 2009, CSE 113 has focused on language and program design, as the more theoretical aspects were transferred to CSE 101. The result for objective 3 shows that the emphasis on the C language has been effective in improving student understanding.

3.4 Outcome 4: Small Program Implementation

The result for outcome 4 was Unsatisfactory. More than anything, however, the results show a lack of motivation. There were three students who did not turn in either of the projects or the lab final. The schedule provided ample time for the completion of the assignments. During the development times for the projects, both the instructor and class TAs held extended office hours. None of these students asked for help. Their lack of participation gave them 0% scores for outcome 4. Removing these students from the average, we obtain a new measurement of 65.94, Satisfactory. Removing one other student who did not turn in either Project 1 or the Lab Final, we obtain 74.93, which is an encouraging score. This score much better reflects the performance of the students who made the effort to complete and submit the projects.

3.5 Outcome 5: Tool Usage

The result for outcome 5 was Marginal. The average score is slightly misleading, however, because the majority of students achieved either Satisfactory or Excellent performance. The results are influenced by four students who did not attend the lab final and who did not submit over 40% of the lab assignments. By not submitting their lab assignments, they clearly did not demonstrate any mastery of the tools presented. A much better score is obtained by considering students who turned in at least 60% of the lab assignments. In this case, we obtain a score of 82.93, Satisfactory.

4 Results for Information Technology Students


There was only one Information Technology (IT) student in the Spring 2010 class. The student scored
Unsatisfactory for objectives 1, 4, and 5; and Marginal for objectives 2 and 3. However, little analysis can
be drawn from one student’s performance alone.

5 Results for Program Outcomes


This course pertains to one program outcome, outcome 1.a, which is “the ability to design, implement,
and test small software programs.” This program outcome can be correlated with course outcomes 2, 4,
and 5. By averaging the performance scores for these three outcomes, an overall score of 50.93, or
Unsatisfactory, is obtained. As noted in Sections 3.4 and 3.5, the results are greatly reduced by students
who did not submit projects. If we remove the four students who turned in at most one project and no lab final, we get a score of 72.44, which is Satisfactory.

During the class lectures, the students were exposed to many topics in Computer Science, including programming language paradigms, computer architecture, and operating systems. However, these topics were not reinforced with homework assignments or exam questions because they did not correspond to primary course objectives. Therefore, the contributions of this course to other program objectives cannot be numerically measured and will be best analyzed by the professors of the students’ later courses.

6 Analysis and Remedial Action


Overall, the results this semester are acceptable. In previous years, a reduction in performance has been
observed for the Spring classes as compared to the Fall classes. Objective comparison is difficult because
the curriculum for CSE 113 was first implemented in the Fall of 2009, so there are no previous
assessment reports for spring classes. The most discouraging aspect of this semester was the lack of
participation by a handful of students. An effort was made to discuss difficulties with the students, both
by the instructor and by New Mexico Tech’s Center for Student Success, but the participation of the
students did not improve.

The students who regularly attended lecture and lab this semester and attempted to complete assignments
achieved at least a marginal understanding of the material. Several students achieved Satisfactory to
Excellent performance for a majority of the objectives. Therefore, the basic approach in material
presentation seems sound. In particular, the assignment of small homework problems throughout the first
half of the semester helped students to understand the basics of the C language and was an improvement
over previous semesters.

The greatest weakness that this assessment has revealed is in programming project design. In the
instructor’s opinion, the curriculum for this course needs to have more focus on Software Engineering
strategies, as well as more applied program design exercises. Many students refused to turn in design
documents, and many others clearly spent much less time on project design than implementation. Future
instructors must make design methodologies interesting and useful to the students. Software planning should be introduced early in the semester and reinforced in homework and lab assignments instead of only in the projects. A useful change would be to add an interactive group “software planning” lab during the first half of the semester. In addition, it would be useful to modify lab assignments later in the semester to correlate with the project design activities so that students could focus on project design before beginning implementation.

One more suggestion for future classes would be to incorporate more applied software design into the
discussion of C data types. A strong effort was made in the second half of the semester to map problems
to data structure representations. However, it would be better to discuss practical applications of
structures, unions, and linked lists as they are introduced early in the semester.

CSE 113: Introduction to Programming
Course Assessment for Undergraduates in Fall 2010

1 Course Learning Outcomes


After successfully completing this course, a student should be able to:
1. Understand data representation and typing and the C programming language at an introductory level;
2. Develop and solve problems from description to implementation;
3. Understand the basic elements of imperative programming: variables, flow control, functions, and recursion;
4. Implement and use basic data structures: arrays, strings, and linked lists;
5. Use tools such as editors, compilers, and debuggers in the process of developing small to medium sized computer programs.
2 Assessment Methodology
For the purpose of objective assessment, numeric scores were calculated to measure each student’s
mastery of each learning outcome. The following table shows the metrics used for assessment. The
metrics include midterm and final exam questions as well as grades on the projects and the final lab
assignment. An average of students’ lab assignment grades was also used to measure their ability to use
the software tools introduced. Homework and quiz grades were not used for assessment because they
were part of the initial topic presentation and were thus not a good measurement of topic mastery.

Outcome  Metrics
1        Midterm Section 2: Lexical Scoping
         Final Section 0: Basic Concepts
         Midterm Section 3: Binary Numbers
2        Project 0
         Final Section 4: Program Design
3        Midterm Section 5: Short Answer
         Final Section 3: Short Answer
4        Project 0
         Lab Final
5        Labs1
         Lab Final

Table 1: Metrics for Performance Assessment

Each of the metrics is of approximately equal importance to each corresponding learning objective, so the
numeric scores were calculated as the normalized average of the scores. Normalization of scores involved
dividing each score by the maximum number of possible points. Some lab exercises and projects included
extra credit opportunities. For this assessment, grades were limited to a maximum of 100% to make
average scores representative of the class as a whole. Finally, a score was computed for each outcome by
averaging each student’s individual score for the outcome.

Only the performance of Computer Science majors was evaluated. There were 42 Computer Science
majors enrolled in the course.
1 The grade for labs is computed by taking the average of all ten lab assignments throughout the semester.

2.1 Performance Measurement

Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to assess overall performance. An average below 55% signified a lack of comprehension, and an average between 55% and 64% showed a marginal understanding. An average between 65% and 85% (roughly a D to a B) demonstrated satisfactory ability, but not perfection. An average above 85% (B) showed excellent comprehension and effort.

Percentage  Performance
0 – 54      Unsatisfactory
55 – 64     Marginal
65 – 85     Satisfactory
85 – 100    Excellent

Table 2: Numerical Ranges for Performance Measurement

3 Results for Course Outcomes


The following table provides a summary of the results for each course learning outcome, calculated as
described above. The table also provides the number of students who achieved each performance level, in
order to give a better picture of the performance distribution of the class.

(The Unsatisfactory through Excellent columns give the number of students at each level.)
Outcome  Unsatisfactory  Marginal  Satisfactory  Excellent  Score  Performance
1 2 3 20 16 80.2 Satisfactory
2 6 2 13 20 77.66 Satisfactory
3 9 6 21 5 67.09 Satisfactory
4 10 1 12 18 72.07 Satisfactory
5 8 1 11 21 75.54 Satisfactory

Table 3: Performance Measurements for Course Learning Outcomes

Five students stopped showing up during the semester. Only one student chose to Withdraw from the class, so that student's grades are not included in the above scores. These students’ lack of participation, however, is not at all indicative of the performance of the class as a whole.

3.1 Outcome 1: Data Representation

The performance measurement for objective 1 shows that Computer Science (CS) majors gained a
Satisfactory comprehension of basic concepts. These concepts were taught in class to emphasize the fact
that one must learn certain aspects of how computers work in order to learn how the C language works.
The first five weeks of the semester were dedicated to these basic concepts.

Of the 41 students, one student did not take either of the exams that were used to score this metric.


3.2 Outcome 2: Design to Implementation Problem Solving

Students were given many design problems and were told to develop solutions that were solvable in C.

Despite repeated attempts to emphasize the importance of design, students did not take it seriously. One homework assignment was given (which was not counted toward Outcome 2). Students had the opportunity to redo the assignment for more points, but many chose not to. More students had Unsatisfactory ratings than for Outcome 1 because a significant portion of Project 0 was used to calculate Outcome 2.

There were 20 students who achieved an outcome of Excellent because they took advantage of the extra credit offered, which was programming based. The extra credit tested more advanced concepts by implementing data structures.

3.3 Outcome 3: Basic Elements in Programming

The performance measurement for Outcome 3 was Satisfactory, but it was the weakest score of any of the outcomes. The metrics used for this outcome were open-ended questions that tested students’ knowledge of these basic programming elements. Of the two metrics used in the calculation of the outcome, students overall did better on the final than on the midterm. Below is a table showing the averages of the data set from the two metrics used.

Metric                           Average Score
Midterm Section 5: Short Answer  62.8
Final Section 3: Short Answer    71.2

This suggests that students were gaining a mastery of these basic elements in programming by the end of the course.

3.4 Outcome 4: Small Program Implementation

The result for outcome 4 was Satisfactory. This data is more skewed than the rest because it was taken from the end of the semester, when the results show a lack of motivation. There were quite a few students who did not turn in portions of the project. The schedule provided ample time for the completion of the assignment; however, students are usually busy with other classes as well. During the development times for the projects, both the instructor and class TAs held extended office hours.

Many students did poorly on the lab final. This could be because it was the first time students had to complete a small programming assignment without any help from the instructor or TA.

3.5 Outcome 5: Tool Usage

The result for outcome 5 was Satisfactory. Throughout the semester, students had ample time to get familiar with the programming environment and tools. The TA and instructor were at all the lab sessions to troubleshoot any problems. This score is a bit misleading in that some students stopped turning in labs at the end of the semester, possibly due to their busy schedules or a lack of interest during that busy time.


4 Results for Information Technology Students


There were only three Information Technology (IT) students in the Fall 2010 class. The following are the results for these students. Given the small sample, the outcomes may not be indicative of IT students in general.

(The Unsatisfactory through Excellent columns give the number of students at each level.)
Outcome  Unsatisfactory  Marginal  Satisfactory  Excellent  Score  Performance
1 0 0 0 3 88.33 Excellent
2 0 0 1 2 83.17 Satisfactory
3 1 1 1 0 62.13 Marginal
4 1 0 2 0 70.17 Satisfactory
5 0 2 1 0 64 Marginal

5 Results for Program Outcomes


This course pertains to one program outcome, outcome 1.a, which is “the ability to design, implement,
and test small software programs.” This program outcome can be correlated with course outcomes 2, 4,
and 5. By averaging the performance scores for these three outcomes, an overall score of 75.09, or
Satisfactory, is obtained.

During the class lectures, the students were exposed to many topics in imperative design and C programming, including top-down design and many of the C language concepts. These topics were reinforced with homework assignments or exam questions.

6 Analysis and Remedial Action


There is a drastic difference between the fall and spring semesters in terms of the number of CSE students enrolled in the class. Instead of comparing this semester's outcomes with the previous semester's (Spring 2010), a comparison will be made between this semester's outcomes and the previous year's (Fall 2009) outcomes.

Year       Number in Set  Outcome 1     Outcome 2     Outcome 3     Outcome 4     Outcome 5
Fall 2009  30*            Satisfactory  Marginal      Satisfactory  Satisfactory  Mastery
Fall 2010  41             Satisfactory  Satisfactory  Satisfactory  Satisfactory  Satisfactory

*This number includes IT students, while the Fall 2010 outcomes include only CSE students.

Outcome 2 improved from Marginal to Satisfactory while Outcome 5 was reduced from Mastery to
Satisfactory. Enrollment has gone up from Fall 2009 to Fall 2010.

Outcome 3 (understanding the basic elements of programming) was the lowest scoring of the course outcomes. The instructor feels that not enough practice was given to the students to master this outcome. One suggestion for improvement is to assign more homework targeting this course outcome. Halfway through the course, students were finishing learning about basic concepts (Outcome 1) and were just starting to learn about the basic elements of programming (Outcome 3); hence the lower scores on the midterm than on the final.

Outcome 4 was the second weakest scoring outcome. Students were motivated at the beginning of the semester to implement the small programming assignments. Toward the end of the semester, and especially after linked lists were introduced, students were unmotivated to learn. There is a big gap between teaching students basic programming elements (Outcome 3) and basic data structures (Outcome 4). Especially after the labs where students were introduced to their first data structures, many of them seemed to have given up. This translated into the lack of motivation for Project 0, which was used as part of the metric for Outcome 4.

The instructor believes that the labs should be rewritten to introduce more of the concepts needed to manipulate the basic data structures. Students understand pointers and structs separately, but putting them together poses a challenge. Perhaps a lab could be introduced that covers the synthesis of these two concepts, as in the sketch below. This would build students' confidence in implementing these basic data structures.
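To make the suggested synthesis concrete, such a lab would center on a struct that contains a pointer to its own type. The following minimal C sketch (illustrative only, not actual lab material) combines the two concepts in the simplest data structure of this kind, a singly linked list.

#include <stdio.h>
#include <stdlib.h>

/* A struct containing a pointer to its own type: the synthesis of
 * structs and pointers that the rewritten labs could build up to. */
struct node {
    int value;
    struct node *next;
};

/* Push a new node onto the front of the list; returns the new head. */
static struct node *push(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        exit(EXIT_FAILURE);
    n->value = value;
    n->next = head;
    return n;
}

int main(void)
{
    struct node *head = NULL;
    for (int i = 1; i <= 3; i++)
        head = push(head, i);
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->value);          /* prints: 3 2 1 */
    putchar('\n');
    while (head != NULL) {                /* release the list */
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}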

Another point to contemplate is programming project design. More lectures on these methodologies could be developed in order to solidify these concepts. In addition, a larger homework assignment or mini-project focusing on imperative design could be given during the first half of the semester, since students do not need to know many advanced programming concepts in order to learn it.

Class Assessment: Spring 2010 CSE 122: Algorithms and Data Structures

Assessment Results
Course Outcomes
At the end of the course, a student should be able to:
1. Understand fundamental data structures and their benefits;
2. Understand various sorting and searching algorithms;
3. Analyze the performance of such algorithms and data structures;
4. Design and implement simple software applications using appropriate algorithms and data structures.

Assessment Methodology:
• Each course outcome was tied to one or more questions in the comprehensive final exam, the midterm exam, quizzes, and homework.
• A formula was used to compute a normalized weighted sum from the scores for those exam questions, quizzes, and homework.
• A table containing one numeric score per student per outcome was computed.
• The table was aggregated along outcomes to obtain a numeric score per outcome averaged over the whole class.

For example, the numeric score for Course Outcome 1 for student S1 was obtained by
1. Taking S1's score on the third question on the final exam and dividing it by the maximum
possible points on that question
2. Doing a similar computation on S1's score on the second question on the midterm exam, the
first homework, questions one, two and three on second homework, questions two and three on
the third homework, and quizzes one and two;
3. Adding up the 7 values obtained in (1) and (2) after multiplying them by the respective
coefficients
4. Repeating the above for each undergraduate student;
5. Averaging these numbers over the whole class to get a numeric assessment score for Outcome
1 averaged over the whole class.
6. The same process was applied for each outcome.

Course Outcome  Formula
Outcome 1       0.1*H1(all) + 0.1*H2(1,2,3) + 0.1*H3(2,3) + 0.15*Q1 + 0.15*Q2 + 0.2*M2 + 0.2*F3
Outcome 2       0.5*Report + 0.2*H5(3) + 0.1*Q4 + 0.2*M3
Outcome 3       0.3*H7(2,3) + 0.25*M4 + 0.25*M5 + 0.25*F4
Outcome 4       0.6*ProgAssign(all) + 0.2*M6 + 0.2*F5

Notation: H2(1,2,3) means homework 2, questions one, two, and three; H is for homework, M for the midterm, F for the final, and Q for quizzes. The suffix is the question number. ProgAssign(all) is the sum of the four programming assignments given in class.
Performance Metric
Based on the answers to the questions, it was felt that a score between 70 and 79 percent implied that the basic concepts had been grasped, a score of 80 percent or more indicated a superior performance, a score of less than 55 percent implied that the basic concepts had not been learned, and a score of 56 to 69 percent implied a marginal state:

Class Average  Performance
< 55%          Unsatisfactory
56 – 69%       Marginal
70 – 79%       Satisfactory
80 – 100%      Excellent

Results
The following numeric assessment scores were obtained through the process outlined above:
Outcome Class Average Performance
1 72.48% Satisfactory
2 80.49% Excellent
3 79.93% Excellent
4 71.63% Satisfactory
Thus, outcomes 2 and 3 are excellent, and 1 and 4 are satisfactory.

Scores for Program Outcomes:


This course affects one program outcome. We will deal with it by substituting a numeric value
for performance (1, 2, 3, and 4 for unsatisfactory, marginal, satisfactory, and excellent
respectively) and computing the average.

Outcome  Class Average  Performance  Overall
1        72.48%         3
2        80.49%         4            3.5
3        79.93%         4
4        71.63%         3
Conclusions (possible improvements of class outcomes):

Outcome 1: I made the mistake of not understanding how much the students already knew. Knowing how much they knew, and teaching one step at a time from the basics, would not confuse the students and would also help create interest in the subject. Treating each student as an individual and helping them at their level of understanding also helped improve their performance and that of the whole class.

Outcome 2: The following procedures helped in learning algorithms: going over multiple examples; trying to help students ‘visualize’ how each algorithm works; providing video links for practice; making students write down each step, which made each algorithm more fun and easier to understand (practice made it perfect); and in-class reasoning and questioning. Showing both simple and complex examples was needed. Coordinating among peers was vital.

Outcome 3: This was the most confusing to many students. Working out many examples helped, as did paying individual attention to those students who found performance analysis challenging. Announcing beforehand that an exam question would contain performance analysis made students put in their best effort to understand the material.

Outcome 4: Starting with simple programming assignments and gradually increasing the level was something I found appropriate. Showing some implementation examples and explaining them step by step interested many students, as they understood the material better.
CSE 122: Algorithms and Data Structures
Assessment Report for Fall 2010

Background Information
The CSE 122: Algorithms and Data Structures course has been identified as one of the most
retention-sensitive courses that are required of Computer Science and Information Technology
majors at New Mexico Tech. As part of the department’s efforts to continually improve the
retention of CS and IT students, therefore, it was decided in 2009-2010 that regular faculty
members, instead of TAs or Lecturers who have been teaching the course for several years,
should take responsibility and teach the course on a rotating basis, starting with senior faculty.
This instructor, Andrew Sung, thus took the first turn in the fall of 2010; he had last taught
CS122 (renamed CSE122 recently) more than 10 years ago.
In NMT’s sample curriculum for CS and IT majors, CSE122 is taken during semester 2. As a result, the CSE122 class is always much larger in spring than in fall, since the majority of the lower-division students are usually able to follow the sample curriculum. Due to the recent decline in CS/IT enrollment, a total of only eight students enrolled in the fall 2010 class: three non-majors, (at least) two of whom had failed the course before and were repeating it to fulfill their degree requirements, and five CS/IT majors.
An entry survey of the small class’s background, preferences, and level of knowledge regarding computer science, mathematics, and programming languages was conducted at the beginning of the semester. Perhaps surprisingly, all students reported sufficient familiarity with the C language to make coverage of any part of it unnecessary. As expected, it was clear that the subject students were least familiar with was formal techniques for the analysis of algorithms and data structures. Accordingly, the course outcomes and assessment method were developed as follows.

Course Outcomes
At the end of the course, a student should have learned and understood:
1. the basics of programming methodology;
2. the basic techniques for the design and analysis of algorithms;
3. a variety of exemplary data structures and algorithms.
Students are also expected to have:
4. gained experience in intermediate-level programming in C.

Assessment Method
As no TA was assigned, and as the instructor, in response to the class’s background, had adopted only a tentative syllabus to allow for flexible scheduling of lectures on the selected topics, a simple assessment method based on the class’s performance on the (midterm and final) exams and the four programming assignments was used to measure student learning with respect to the course outcomes. At the end, five students (three majors and two non-majors) received an A or A-; a major who had missed a number of lectures and some assignments due to sickness received a C; a non-major received a D; and another (possibly major) student received an F for having missed most lectures, assignments, and exams for unknown reasons. (Repeated efforts by the instructor to contact the student had failed.)
In the instructor’s assessment, student learning has met the course outcomes to a low-satisfactory degree, based on an overall average of 63%, according to the department’s score-based assessment criteria shown below. This corresponds to a class score of 3 (satisfactory) with respect to the department’s educational program outcome, item 1a (small scale programming), which is the only outcome that the course contributes to.
As far as CS majors only are concerned, the overall average is 67% (satisfactory) with respect to the course outcomes, and a score of 3 (satisfactory) with respect to the relevant educational program outcome.
Class average Performance
< 40% Unsatisfactory
40 – 59% Marginal
60 – 74% Satisfactory
75 – 100% Excellent

Recommendations to the Department and for Future Instructors


• Change the textbook, which was chosen by a previous instructor. The text “Data Structures and Algorithm Analysis in C”, 2nd Edition, by Weiss, is an excellent text for a senior or introductory-graduate course on the subject. For a freshman-level course, it is far too advanced and comprehensive.
• If and when departmental resources permit and the enrollment justifies it, offer different sections for CS/IT majors and non-majors.
• Provide a TA for spring semester classes; even though the spring classes tend to be much smaller, the higher percentage of non-majors requires more TA support.
• Ensure adequate coordination among the faculty to maintain reasonable consistency in course contents and level of difficulty.
CSE 213: Introduction to Object Oriented Programming
Course Assessment for Undergraduates in Fall 2010

1 Course Learning Outcomes


After successfully completing this course, a student should be able to:
1. Justify the philosophy of object-oriented design and the concepts of encapsulation, abstraction, inheritance, and polymorphism;
2. Design, implement, test, and debug simple programs in an object-oriented programming language;
3. Describe how the class mechanism supports encapsulation and information hiding;
4. Design, implement, and test the implementation of “is-a” relationships among objects using a class hierarchy and inheritance;
5. Compare and contrast the notions of overloading and overriding methods in an object-oriented language;
6. Explain the relationship between the static structure of the class and the dynamic structure of the instances of the class; and
7. Describe how iterators access the elements of a container.

2 Assessment Methodology
For the purpose of objective assessment, numeric scores were calculated to measure each student’s
mastery of each learning outcome. The following table shows the metrics used for assessment. The
metrics include midterm and final exam questions as well as grades on the projects and the homework
assignments. Quiz grades were not used for assessment because they were part of the initial topic
presentation and were thus not a good measurement of topic mastery.

Outcome  Metrics
1        Midterm Short Answer Question 5
         Midterm Free Answer Question 1
         Midterm True/False Question 3
         Final Multiple Choice Question 8
         Final Short Answer Question 1
         Final Short Answer Question 2
         Final Short Answer Question 9
2        Project 0
         Project 1
         Final Free Answer Question 1
         Homework 3
3        Midterm Short Answer Question 2
         Midterm Short Answer Question 10
         Final Short Answer Question 10
4        Project 1
         Final Free Answer Question 2
5        Final Multiple Choice Question 4
         Final Multiple Choice Question 10
6        Midterm Multiple Choice Question 3
         Midterm Multiple Choice Question 5
         Midterm Short Answer Question 1
         Final Short Answer Question 6
7        Final Short Answer Question 8

Table 1: Metrics for Performance Assessment

Each of the metrics is of approximately equal importance to each corresponding learning objective, so the
numeric scores were calculated as the normalized average of the scores. Normalization of scores involved
dividing each score by the maximum number of possible points. Some projects included extra credit
opportunities. For this assessment, grades were limited to a maximum of 100% to make average scores
representative of the class as a whole. Finally, a score was computed for each outcome by averaging each
student’s individual score for the outcome.

Only the performance of Computer Science majors was evaluated. There were 12 Computer Science
majors enrolled in the course.

2.1 Performance Measurement

Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to assess overall performance. An average below 55% signified a lack of comprehension, and an average between 55% and 64% showed a marginal understanding. An average between 65% and 85% (roughly a D to a B) demonstrated satisfactory ability, but not perfection. An average above 85% (B) showed excellent comprehension and effort.

Percentage  Performance
0 – 54      Unsatisfactory
55 – 64     Marginal
65 – 85     Satisfactory
85 – 100    Excellent

Table 2: Numerical Ranges for Performance Measurement

3 Results for Course Outcomes


The following table provides a summary of the results for each course learning outcome, calculated as
described above. The table also provides the number of students who achieved each performance level, in
order to give a better picture of the performance distribution of the class.

(The Unsatisfactory through Excellent columns give the number of students at each level.)
Outcome  Unsatisfactory  Marginal  Satisfactory  Excellent  Average Score
1        2               3         5             2          69
2        1               1         5             5          76.8
3        1               1         1             9          85.8
4        3               0         4             5          69.8
5        2               0         3             7          83.3
6        3               0         3             6          73.6
7        3               0         3             6          72.2

Table 3: Performance Measurements for Course Learning Outcomes

One student showed up for class only a handful of times before the midterm and stopped showing up after the midterm test. The student did not choose to Withdraw from the class, so their grades are included in the above scores. The student’s lack of participation, however, is not at all indicative of the performance of the class as a whole.

Among the class outcomes, objective 3 was Excellent while the rest were Satisfactory.

3.1 Outcome 1: Object-oriented Philosophy

The performance measurement for objective 1 shows that Computer Science (CS) majors gained a
satisfactory comprehension of Object-oriented philosophy. Improvements in this area would include more
practice in specific concepts of polymorphism and inheritance.

3.2 Outcome 2: Design and implementation of small programs

This area is strong since the OOP language selected was Java. Many of the basic programming concepts carry over from CSE 113, and many of the students were able to pick up the additional OOP knowledge with ease. Perhaps more challenging small programs could be chosen.

3.3 Outcome 3: Class mechanism supports encapsulation

This was the strongest of the objective outcomes. It was a fairly easy objective to accomplish because the concepts are relatively easy to grasp. This objective could be combined with objective 1 as part of OOP philosophy.

3.4 Outcome 4: Design and implement “is-a” relationship

This was one of the lower scoring outcomes. Since the course philosophy focuses more on object-oriented programming rather than just Java, this course objective could be raised by adding a programming assignment testing knowledge of the “is-a” concept in another OOP language such as C++.

3.5 Outcome 5: Concept of overloading vs overriding

This was another strong objective outcome. This is a relatively easy concept to teach, and many of the students picked up on it easily. One improvement would be to give an assignment in another OOP language.


3.6 Outcome 6: Relationship between class and object

This was another strong objective outcome. This is a relatively easy concept to teach, and many of the students picked up on it easily. One improvement would be to give an assignment in another OOP language.

3.7 Outcome 7: Iterators on a container

This objective was created before the lecturer was hired. It is a very specific objective, and therefore there is very little data to support it. Perhaps this objective could be changed to “Collections and operations on collections”.

4 Results for Program Outcomes


This course pertains to one program outcome, outcome 1.a, which is “the ability to design, implement,
and test small software programs.” This program outcome can be correlated with course outcomes 2 and
4. The numeric assessment score for this program outcome is obtained by substituting a numeric value for
performance (1, 2, 3, and 4 for Unsatisfactory, Marginal, Satisfactory, and Excellent, respectively) and
computing the average performance.

Program Outcome: Writing small programs involves Course Outcomes 2 and 4:

Outcome  Average  Performance  Set Average Performance
2        76.8     3            3
4        69.8     3

5 Analysis and Remedial Action


This was the first time this class was offered at NMT. The philosophy of this introduction to object-oriented programming differs from that of most universities in that the class is for students in their second year. Most of the reference material and course books available are either for beginning programming or for senior-level Computer Science majors. Finding a middle ground to teach more theory while strengthening program objective 1a still needs work. Course material needs to be developed further for this level due to the lack of reference material available.

One suggestion would be to have a Java primer at the beginning of the semester (five weeks) where students can learn the basics of Java syntax, compilation, and use. This would consist of small Java exercises that would help with the course and program outcomes. After the initial period, the course can get into object-oriented design and concepts.

CS 221: Computer System Organization
Assessment for Undergraduates in Fall 2010

Course Outcomes

At the end of the course, a student should be able to

1. understand computer abstraction and technology; understand the arithmetic for computers;
2. grasp the principles of designing instruction sets;
3. analyze single-cycle, multi-cycle, and pipelining datapaths and controls; understand the finite-state machine for the controls in the multi-cycle datapath; and detect and resolve hazards in the pipelining datapath;
4. understand and analyze the memory hierarchy, cache organization, and virtual memory;
5. program in Intel 80x86 assembly language variants in the process of developing and debugging small computer programs;
6. program in MIPS assembly language variants in the process of developing and debugging small computer programs;
7. understand the ethics and legal aspects of computer programming and use;
8. assess and analyze I/O systems and storage systems.

Assessment Methodology:

• The assessment for students in this class consisted of five parts: quizzes (10%), homework (20%), projects (10%), the midterm exam (30%), and the final exam (30%).
• Quizzes and homework covered all aspects of the course outcomes.
• Projects involved course outcomes 2-7.
• The midterm exam and final exam were used to test all aspects of the course outcomes.
• The formula in Table 1 was used to evaluate a student’s performance in this class.
• The formula in Table 2 was used to evaluate the course outcomes.
• Only undergraduate student data was used for this analysis.

Table 1. Formula for evaluating a student’s performance

Final score = 100 * (0.1 * student’s quiz score / full quiz score
            + 0.2 * student’s homework score / full homework score
            + 0.1 * student’s project score / full project score
            + 0.3 * student’s midterm-exam score / full midterm-exam score
            + 0.3 * student’s final-exam score / full final-exam score)

Table 2. Formula for measuring course outcomes in the final exam

$$\text{Outcome evaluation} = 100 \times \frac{\sum_{i=1}^{N}\sum_{j}\left(\text{student } i\text{'s score for question } j \text{ in the outcome evaluation}\right) w_j}{N \cdot \sum_{j}\left(\text{full mark for question } j \text{ in the outcome evaluation}\right) w_j}$$

where $w_j$ is a weighting factor with $\sum_j w_j = 1$, and $N$ is the number of students.

Performance Metric

Based on the answers to the questions on the final test, it was felt that a 50 percent score implied that the basic concepts had been grasped, a score of 75% or more indicated a superior performance, a score of less than 35 percent implied that the basic concepts had not been learned, and a 35 to 50 percent score implied a marginal state:

Class Average  Performance     Performance Value
< 35%          Unsatisfactory  1
35 – 49%       Marginal        2
50 – 74%       Satisfactory    3
75 – 100%      Excellent       4

Results

The following numeric assessment scores were obtained through the process outlined
above:

Outcome  Class Median  Class Average  Performance   Performance Value
1        62.1          64.0           Satisfactory  3
2        83.3          82.6           Excellent     4
3        87.5          80.5           Excellent     4
4        47.8          49.8           Marginal      2
5        85.9          86.4           Excellent     4
6        80.0          77.1           Excellent     4
7        90.0          88.8           Excellent     4
8        66.7          71.6           Satisfactory  3

Thus, outcomes 2, 3, 5, 6, and 7 were excellent; outcomes 1 and 8 were satisfactory; and outcome 4 was marginal.

Scores for Program Outcomes:
This course affects two program outcomes. We will deal with each in turn by
substituting a numeric value for performance (1, 2, 3, and 4 for unsatisfactory, marginal,
satisfactory, and excellent respectively) and computing the average.

Program outcome 3: Knowledge of the fundamental principles of programming


languages, systems, and machine architecture -- Course outcomes 1, 2, 3, 4 and 8 relate to
this program outcome.

Outcome Overall
3 PL/Sys/Arch 3.2

Program outcome 7: Awareness of the legal, ethical, and societal impact of developments
in the field of computer science -- Course outcome 7 relates to this program outcome.

Outcome Overall
7 Ethics 4

According to the assessments of the last two years, the proposed strategies have been employed to improve the outcomes of this course. The course objective has been updated so that the new objective reflects the current content taught in the class. More fundamental knowledge was covered to fulfill the prerequisites of subsequent courses, including Operating Systems, Computer Architecture, and Compiler Writing. Judging from the outcomes, the refined objective clarified the content, and the detailed objective guided the teaching better. As another strategy, more effort was focused on the previously weak objective: fundamental knowledge associated with Operating Systems and Computer Architecture was intensified, and different variations and extensions of architectures were compared. This strategy successfully enhanced the outcome of the previously weak topic. Some new strategies are proposed below to address new issues from this semester.

Remedial Actions:

The following remedial actions are being contemplated.

• Outcome 4 was Marginal. This can be improved by spending more lecture time on, and integrating more homework assignments covering, the subject areas in the outcome. Additionally, some advanced topics in these subject areas are also covered by the Computer Architecture course. The approaches to addressing this problem include: first, this course objective could be adjusted to reduce the overlap between Computer System Organization and Computer Architecture by only briefly introducing advanced topics in these areas in this course; and second, the course schedule could be adjusted to allow more lecture time for these subject areas.
• Outcome 1 could be improved by strengthening analysis and computational skills the next time the course is taught, by adding several more small exercises to the homework and quizzes.

New Mexico Tech
Department of Computer Science and Engineering
Course Assessment Report
CSE 222: Systems Programming
Spring 2010
Instructor: Jun Zheng

Course Outcomes

1. Have knowledge of the fundamentals of the UNIX/Linux operating system;
2. Have knowledge of the basic principles of the UNIX/Linux shell and shell programming;
3. Have knowledge of the basic principles of UNIX/Linux file systems;
4. Have knowledge of the basic principles of UNIX/Linux processes and inter-process communication (IPC);
5. Have knowledge of the basic principles of UNIX/Linux socket-based network programming;
6. Be good system programmers who know how to develop, design, and implement application programs that access UNIX/Linux system functions through system calls and library routines.

Assessment Methodology:

• Each course outcome was tied to one or more questions in the homework, quizzes, project, midterm exams, and final exam.
• A formula was used to compute a normalized weighted sum from the scores for those questions.
• A table containing one numeric score per student per course outcome was computed.
• The table was aggregated along course outcomes to obtain a numeric score per outcome averaged over the whole class.

For example, the numeric score for Course Outcome 1 for student S1 was obtained by
(1) taking S1's score on HW1 and dividing it by the maximum possible points on HW1, i.e., normalizing it to [0, 1];
(2) doing a similar computation on S1's scores on the other parts that correspond to Course Outcome 1;
(3) adding up the values obtained in (1) and (2) after multiplying each by its percentage of the final score (for example, the percentage for HW1 is 6.25%);
(4) dividing the value obtained in (3) by the sum of the percentages of all the parts related to Course Outcome 1;
(5) multiplying the value obtained in (4) by 100;
(6) repeating the above for each student;
(7) averaging these numbers over the whole class to get a numeric assessment score for Outcome 1 averaged over the whole class.
(8) The same process was applied for each outcome.

The formulas used for each Course Outcome were:

Course Outcome  Formula
1 100*(HW1*5.8% + P1*5.8% + QZ1*1%+ ME1Q1*3% +
ME1Q2*1.5% + ME1Q3*1.5%)/(5.8% + 5.8% + 1% + 3% +
1.5% + 1.5%)
2 100*(HW2*5.8% + QZ2*1% + QZ3*1% + ME1Q1*3% +
ME1Q2*1.5% + ME1Q4*1.5% + ME1Q5*1.5% +
ME1Q6*1.5%)/(5.8% + 1% + 1% + 3% + 1.5%+ 1.5% + 1.5%
+ 1.5%)
3 100*(P2*5.8% + QZ4*1% + ME2Q1*3% + ME2Q2*3% +
ME2Q3*3% + ME2Q4*1.5% + ME2Q5*1.5% + ME2Q6*1.5%
+ ME2Q2*1.5%)/(5.8% + 1% + 3% + 3% + 3% + 1.5% + 1.5%
+ 1.5% + 1.5%)
4 100*( P3*5.8% + QZ5*1% + QZ6*1% + FEQ1*2.25% +
FEQ2*2.25% + FEQ3*4.5% + FEQ4*3% + FEQ5*3% +
FEQ6*3%)/( 5.8% + 1% + 1% + 2.25% + 2.25% + 4.5% + 3%
+ 3% + 3%)
5 100*(P4*5.8% + FEQ1*2.25% + FEQ2*2.25% + FEQ3*4.5%
+ FEQ7*3% )/(5.8% + 2.25% + 2.25% + 4.5% + 3%)
6 100*(P2*5.83% + P3*5.83% + P4*5.83% + ME2Q6*1.5% +
ME2Q7*1.5% + FEQ5*3% + FEQ6*3% + FEQ7*3% )/(5.83%
+ 5.83% + 5.83% + 1.5% + 1.5% + 3% + 3% + 3%)

(Notation: HW1 represents the normalized score on Homework 1; QZ1 represents the
normalized score on Quiz 1; P1 represents the normalized score on Project 1; ME1Q1
represents the normalized score on question 1 of midterm exam 1; ME2Q1 represents the
normalized score on question 1 of midterm exam 2; FEQ1 represents the normalized
score on question 1 of final exam.)
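Computationally, each course outcome score above is a weighted average normalized by the sum of the contributing weights. The C sketch below illustrates this for Course Outcome 1, using hypothetical normalized part scores; the weights are the percentages from the formula table.

#include <stdio.h>

int main(void)
{
    /* Hypothetical normalized scores in [0, 1] for the parts tied to
     * Course Outcome 1: HW1, P1, QZ1, ME1Q1, ME1Q2, ME1Q3. */
    double scores[]  = { 0.80, 0.90, 0.70, 0.60, 0.75, 0.85 };
    /* Each part's percentage of the final score, from the formula above. */
    double weights[] = { 5.8, 5.8, 1.0, 3.0, 1.5, 1.5 };
    int n = sizeof scores / sizeof scores[0];
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) {
        num += scores[i] * weights[i];  /* weighted sum of part scores */
        den += weights[i];              /* sum of the weights          */
    }
    printf("Course Outcome 1 score: %.2f\n", 100.0 * num / den);
    return 0;
}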

Performance Metric
Based on the answers to the questions, it was felt that a 60 percent score implied that the basic concepts had been grasped, a score of 80% or more indicated a superior performance, a score of less than 40 percent implied that the basic concepts had not been learned, and a 40 to 60 percent score implied a marginal state:

Class Average  Performance
< 40%          Unsatisfactory
40 – 59%       Marginal
60 – 79%       Satisfactory
80 – 100%      Excellent

Results

The following numeric assessment scores were obtained through the process outlined
above:

Outcome Class Average Performance


1 87.14 Excellent
2 74.31 Satisfactory
3 72.68 Satisfactory
4 78.39 Satisfactory
5 80.91 Excellent
6 84.21 Excellent

Thus, Outcomes 1, 5 and 6 were excellent, and Outcomes 2 to 4 were satisfactory.

Relationship of the Course to Program Outcomes:


This course affects two program outcomes:
1. The ability to design, implement, and test small software programs;
2. Knowledge of the theoretical concepts of computing and of the fundamental principles
of programming languages, systems, and machine architectures;

We will deal with each in turn by substituting a numeric value for performance (1, 2, 3,
and 4 for unsatisfactory, marginal, satisfactory, and excellent respectively) and
computing the average.

Program outcome 1 involves course outcome 6.

Course Outcome  Class Average  Performance  Overall
6               84.21          4            4

Program outcome 2 involves course outcomes 1 to 5.

Course Outcome  Class Average  Performance  Overall
1               87.14          4            3.4
2               74.31          3
3               72.68          3
4               78.39          3
5               80.91          4

Comments and Remedial Actions:

This was the first time I taught this course. The following are the problems I encountered this semester and the proposed solutions:

1. I used the whiteboard for most of the class content, which made the pace of the class slow. I plan to put all course materials into PowerPoint slides to solve this problem.
2. I used 8 weeks to cover the shell and shell programming, which did not leave enough time to cover other important topics such as signals. I plan to cut this part down to 4 weeks.
3. Since most of the class content consisted of the system calls for the corresponding topics, students easily lost concentration. I plan to prepare more example programs to explain the system calls. I will also prepare some in-class questions so that students can actively join the class discussion.
4. Another improvement is to give students more small homework assignments to reinforce the concepts learned in class.

Class Assessment: 2010 CSE 324: Principles of Programming Languages

Class outcomes:

1. Clear understanding of the major design concepts of a programming language (e.g., syntax, semantics, typing system, recursion, abstraction, polymorphic & generic features, etc.).

2. Clear understanding of the trade-offs between important language design goals: security, efficiency, power, robustness, and complexity.

3. Clear understanding of the major linguistic differences between the major language paradigms: imperative, functional, object-oriented, and logic.

4. The ability to critique and properly utilize languages from each of the above paradigms in building desired software solutions or in the design of a new language.

The methodology that we deployed to calculate the above percentages for the
assessment is as follows:

1) For each of the class outcomes listed above, identify its contributing modules:
   individual exam questions, homework, projects, and quizzes that relate to that
   outcome. Each contributing module maps to only one class outcome, i.e., there is
   no "one-to-many" mapping of one module to more than one class outcome.
2) For each outcome, compute the students' semester average for every contributing
   module in (1) above, i.e., exam questions, homework, projects, and quizzes. Each
   obtained average is divided by the points assigned to that component to get a
   percentage score. In the process, modules contributing scores to an outcome are
   weighted by different factors based on their importance; e.g., True/False exam
   questions are weighted higher than multiple-choice (MCQ) questions, followed by
   short-answer (SA) questions.
3) For each outcome, add all of its scores from the different contributing modules
   listed in (2) above, after weighting each by a factor that reflects the
   importance of each module's score, to obtain the outcome's final semester
   percentage score. For example, quizzes and semester projects have higher factors
   than other components (a short sketch of this scheme follows).
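A minimal Python sketch of the weighting scheme in steps (2) and (3); the module scores, maximum points, and importance factors below are all invented for illustration:

    # Normalized weighted average over an outcome's contributing modules.
    def outcome_percentage(modules):
        """modules: list of (class_avg_points, max_points, importance_factor)."""
        total_weight = sum(w for _, _, w in modules)
        weighted = sum((avg / max_pts) * w for avg, max_pts, w in modules)
        return 100 * weighted / total_weight

    # e.g. a quiz (factor 3), a semester project (factor 3), and an exam
    # question (factor 1):
    print(round(outcome_percentage([(8.1, 10, 3), (42, 50, 3), (6.5, 10, 1)]), 1))
    # -> 80.0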
Assessment Result:
Modules/Class Outcome    co1    co2    co3    co4
Semester %               74%    80%    74%    86%

Performance Metrics:

The scale that has been used to assess each of the class outcomes is as follows:

Class average      Performance
< 40%              Unsatisfactory (1)
40 – 54%           Marginal (2)
55 – 79%           Satisfactory (3)
80 – 100%          Excellent (4)

Outcome    Class Average    Performance
1          74%              Satisfactory (3)
2          80%              Excellent (4)
3          74%              Satisfactory (3)
4          86%              Excellent (4)
Conclusion

Possible further improvements of the Class outcomes:

Outcome 1: A noticeable decline from last year (-6%). I need to increase the
number of practical, real-life examples to ease the dryness of the subject. I
also need to communicate further with the instructor of the prerequisite class
for CSE 324 about focusing on basic definitions such as “proof by induction”
and “abstract data types,” which are essential for understanding the basis of
the recursion concept and the object-oriented family of languages, respectively.

Outcome 2: Another noticeable improvement (+5%), due to the continuation of
last year's plan of giving more examples from well-known languages such as C,
C++, Java, and FORTRAN about the trade-offs of different language design
factors. Moreover, progress might be due to clearly stating the disadvantages
of the modern languages covered and justifying their use in some applications.
It might also be due to my covering representatives of some “clean” paradigms
before stepping into semi-paradigm languages (e.g., Smalltalk vs. C++/Java). I
also continued to refine my definitions of key terms in language design based
on my further reading of the literature, and students are showing better
understanding. I plan to keep the same approach for further improvement.

Outcome 3: The 74% is still acceptable, with a slight improvement over last
year (+3%). In addition to the quizzes, homework, projects, reports, and exams,
I have continued to challenge the students every class with extra-point
questions in their quizzes and exams about the different language paradigms. I
continue to notice that students stay more alert in class, raise many useful
discussions (some even challenging), and ask many useful follow-up questions. I
intend to keep doing this in future classes to further improve the score.

Outcome 4: There is a slight improvement (+2%). As in the last year, I have
received many positive comments about the semester projects. I continued to ask
the students to report on some prominent modern programming languages (not
covered in class) from the literature, and the resulting semester project
reports continued to be very promising. In addition to some required languages,
students also dissected programming languages of their own interest. I think we
will keep this approach, since it is really working, especially after refining
my assignment problem statement to be clearer based on student feedback.

A common point among all of the above actions to enhance and better achieve the
class outcomes is to keep posting the class lecture notes on the class website,
to continue updating the notes as the class progresses, and to notify the
students of any update, so that students have continuous access to the notes.
Students are showing more progress, and many appreciated the note posting for
easy access at all times. I also intend to seek monthly anonymous student
feedback on the covered subjects, my teaching, and any other points of concern.
Based on input from other department faculty who have practiced such a process,
those evaluations should be very helpful for adjusting the class teaching.

Scores for Program Outcomes:


This course affects the first, third, and fourth program outcomes. We will deal with
each in turn by substituting a numeric value for performance (1, 2, 3, and 4 for
unsatisfactory, marginal, satisfactory, and excellent respectively) and computing the
average.

1) The first program outcome, “the ability to design, implement, and test small
software programs, as well as large programming projects,” is directly affected
by the third course outcome.

To strengthen the students' knowledge of different programming language
concepts, they design, implement, and test a set of small programs from
different language paradigms/categories. Hence, there is a direct mapping
between the third class outcome and the first program outcome.

Class Outcome    Class Average    Performance    Overall
3                74%              3              3.0
2) The third program outcome, “knowledge of fundamental principles of
programming languages, systems, and machine architecture,” is directly affected
by the first and second course outcomes, specifically its programming language
component.

Class Outcome    Class Average    Performance    Overall
1                74%              3              3.5
2                80%              4

3) The fourth program outcome, “exposure to one or more computer science
application areas,” is directly affected by the fourth course outcome.

In class, we analyze and critique languages like Ada, with its “exception”
handling mechanism and concurrent tasking facility, relating these features to
critical military applications where they are very useful. Moreover, we explore
the powerful capabilities of Lisp and Prolog, mapping them to AI and “expert
systems” implementations, respectively.

Class Outcome    Class Average    Performance    Overall
4                86%              4              4.0
CSE 325: Principles of Operating Systems

Assessment for Undergraduates in Spring 2010


August 10, 2010

Course Outcomes

At the end of the course, a student should be able to

1. understand the functions, structures, and history of operating systems;
2. master various process management concepts including scheduling,
   synchronization, and deadlocks;
3. grasp concepts of memory management including virtual memory;
4. master issues related to storage systems, file system interface and implementation,
   and disk management;
5. get acquainted with Linux kernel programming.

Assessment Methodology:

• Each course outcome was tied to one or more questions in the midterm and final
exam, or to one of the lab assignments. A formula was used to compute a
normalized weighted sum from the scores for those questions.
• A table containing one numeric score per student per course outcome was
computed. The table was aggregated along course outcomes to obtain a numeric
score per outcome averaged over the whole class.
• Only undergraduate student data was used for this analysis.

The formulas used were:

Course Outcome Formula


1 100*(0.5*MidtermQ1 + 0.5* MidtermQ2)
2 100*(0.1*MidtermQ3 + 0.15*MidtermQ4 + 0.15* MidtermQ5 +
0.15* MidtermQ6 + 0.15*MidtermQ7 + 0.15*Lab2 + 0.15*
Lab3)
3 100*(0.1*FinalQ1 + 0.1*FinalQ3 + 0.2*FinalQ4 + 0.2*FinalQ8
+ 0.1*FinalQ9 + 0.3*Lab4)
4 100*(0.2*FinalQ2 + 0.2*FinalQ5 + 0.3*FinalQ6 + 0.3*FinalQ7)
5 100*(0.25*Lab1 + 0.25*Lab2 + 0.25*Lab3 + 0.25*Lab4)

(Notation: MidtermQ3 represents the score on question Q3 on the midterm exam divided
by the maximum possible points on that question. Lab2 includes all parts of the lab
assignment: code reading, design, and implementation.)
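For illustration, the formula evaluation can be sketched in Python as follows (the student scores below are hypothetical, and only two of the five formulas are shown):

    # Each normalized score is a raw score divided by the question's maximum.
    FORMULAS = {  # course outcome -> [(component, weight)]
        1: [("MidtermQ1", 0.5), ("MidtermQ2", 0.5)],
        5: [("Lab1", 0.25), ("Lab2", 0.25), ("Lab3", 0.25), ("Lab4", 0.25)],
        # ... the remaining outcomes follow the formula table above
    }

    def outcome_score(normalized, outcome):
        """100 times the weighted sum of a student's normalized scores."""
        return 100 * sum(w * normalized[c] for c, w in FORMULAS[outcome])

    student = {"MidtermQ1": 0.9, "MidtermQ2": 0.8,
               "Lab1": 0.85, "Lab2": 0.80, "Lab3": 0.90, "Lab4": 0.75}
    print(round(outcome_score(student, 1), 1))  # -> 85.0 for this made-up student

Averaging this value over all students gives the class score per outcome.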

Performance Metric

Based on the answers to the questions, it was felt that a 50 percent score implied that the
basic concept had been grasped, a score of 75% or more indicated a superior
performance, a score less than 35 percent implied that the basic concepts had not been
learned with a 35 to 50 percent score implying a marginal state:

Class average      Performance
< 35%              Unsatisfactory
35 – 49%           Marginal
50 – 74%           Satisfactory
75 – 100%          Excellent

Results

The following numeric assessment scores were obtained through the process outlined
above:

Outcome    Class Average    Performance
1          92.6             Excellent
2          74.0             Satisfactory
3          72.8             Satisfactory
4          80.6             Excellent
5          83.5             Excellent

Scores for Program Outcomes:


This course affects two program outcomes. We will deal with each in turn by
substituting a numeric value for performance (1, 2, 3, and 4 for unsatisfactory, marginal,
satisfactory, and excellent respectively) and computing the average.

Program outcomes:
1b. The ability to design, implement, and test large programming projects
    involves course outcome 5.
3.  Knowledge of the concepts and techniques of operating systems and OS-level
    programming involves course outcomes 1, 2, 3, and 4.
6.  The capacity to work as part of a team involves course outcome 5.

Program Outcome    Course Outcome    Class Average    Performance    Overall
1b                 5                 83.5             4              4

Program Outcome    Course Outcome    Class Average    Performance    Overall
3                  1                 92.6             4              3.5
                   2                 74.0             3
                   3                 72.8             3
                   4                 80.6             4

Program Outcome    Course Outcome    Class Average    Performance    Overall
6                  5                 83.5             4              4

Comments:

• In the labs, students chose their group mates based on their interests. This
  approach is fine; however, there was an imbalance in performance among the
  groups. A qualification test may be given to evaluate students' programming
  skills, and the results can be used to assign students to different groups.
• This was Venkata Jitendra Tumma's first time serving as the TA for this
  course, and he was not acquainted with Linux kernel programming. There was
  not enough time to train him before the labs began. I gave him detailed
  slides about the background and assignment requirements of each lab, but
  there were still some student questions that he could not answer. TA training
  is necessary in the future.
• The system administrators upgraded the operating systems in the lab once,
  after the first lab assignment, which made the 2.6.18 kernel unable to run on
  UML. Students switched to the latest 2.6.32 kernel. However, the new kernel
  code was not well documented, and students spent much time reading and
  understanding it. A stable lab environment is important and helpful.
• Four lab assignments were given, and it took about three to four lab sessions
  to complete each of them. It would be better to break the big problems into
  several smaller ones and have each lab assignment contain two to three
  milestones for completion, so that students do not feel the lab assignments
  are too difficult or challenging.

5/26/2011 Dongwan Shin

Class Assessment Report: CSE 326 Software Engineering in Spring 2010

Course learning outcomes, assessment methods (see footnote 1), and assessment results (see footnote 2):

1. Knowledge of different software development life cycle models
- Assessment methods: Questions 1, 2, 3 of the midterm exam, and Question 1 of the final exam
- Formula used: 100*(0.2*MidQ1 + 0.4*MidQ2 + 0.2*MidQ3 + 0.2*FinalQ1)
- Assessment result: 2. The class average was 57.7; this item is marginally met.

2. Ability to elicit requirements from clients and specify them
- Assessment methods: Questions 4, 5, 6, 7 of the midterm exam, Questions 2, 3, 4 of the final exam, and Quizzes 1, 2
- Formula used: 100*(0.1*MidQ4 + 0.1*MidQ5 + 0.1*MidQ6 + 0.1*MidQ7 + 0.15*FinalQ2 + 0.15*FinalQ3 + 0.2*FinalQ4 + 0.05*Quiz1 + 0.05*Quiz2)
- Assessment result: 2. The class average was 64.2; this item is marginally met.

3. Ability to perform detailed design through architectural design, interface design, object design, and the use of design patterns
- Assessment methods: Question 8 of the midterm exam, Questions 5, 6, 7, 8 of the final exam, and Quizzes 3, 4
- Formula used: 100*(0.1*MidtermQ8 + 0.2*FinalQ5 + 0.2*FinalQ6 + 0.2*FinalQ7 + 0.2*FinalQ8 + 0.05*Quiz3 + 0.05*Quiz4)
- Assessment result: 1. The class average was 43; this item is unsatisfactorily met.

4. Ability to perform implementation from a design specification
- Assessment methods: Question 9 of the final exam, and the class project implementation specification
- Formula used: 100*(0.5*FinalQ9 + 0.5*ProjectImp)
- Assessment result: 3. The class average was 79.1; this item is satisfactorily met.

5. Ability to plan and apply various testing techniques
- Assessment methods: Question 10 of the final exam, and the class project final report
- Formula used: 100*(0.5*FinalQ10 + 0.5*ProjectReport)
- Assessment result: 3. The class average was 67.2; this item is satisfactorily met.

6. Practical experience of using UML and OOP
- Assessment methods: Class project requirement specification, class project design specification, and class project implementation specification
- Formula used: 100*(0.4*ProjectReq + 0.3*ProjectDes + 0.3*ProjectImp)
- Assessment result: 4. The class average was 84.5; this item is excellently met.

7. Ability to work in a group to produce a large-scale software product
- Assessment methods: Class project peer review, and class project presentation
- Formula used: 100*(0.8*ProjectPeer + 0.2*ProjectPresentation)
- Assessment result: 4. The class average was 90.4; this item is excellently met.

• Assessment in regard to program-level outcomes:

• Course learning outcomes 3 (O3), 4 (O4), and 5 (O5) contribute to program
outcome P1b (Large Prog). Hence, the numeric score for assessment against this
program outcome is as follows:

P1b = (1 + 3 + 3)/3 = 2.3

• Course learning outcomes 2 (O2) and 6 (O6) contribute to program outcome P5
(Tech Comm). Hence, the numeric score for assessment against this program
outcome is as follows:

P5 = (3 + 4)/2 = 3.5

• Course learning outcome 7 (O7) contributes to program outcome P6 (Group).
Hence, the numeric score for assessment against this program outcome is as
follows:

P6 = 4 (O7) = 4

• Conclusion

- Compared to the last assessment of this course, offered in Spring 2009, this year's
assessment result shows that Outcome 3 has worsened from “marginal” to “unsatisfactory.”
This is because students still had problems understanding object design and some design
patterns. Though we had lab sessions on the topics of object design and design patterns,
along with an introduction to IBM Software Architecture, it seems that students did not
benefit much from the tool, since it has a steep learning curve. This could be improved
by introducing the tool earlier in the semester and spending more labs using it.
- Outcomes 2 and 3 could be improved by requiring students to take CSE 213
(Introduction to OOP).

Footnote 1: Assessment Methodology
• Each course outcome was tied to one or more questions in the midterm/final exam, the individual class project, or homework.
• A formula was used to compute a normalized weighted sum from the scores for those questions, the class project evaluation, and the homework.
• A table containing one numeric score per student per outcome was computed.
• The table was aggregated along outcomes to obtain a numeric score per outcome averaged over the whole class.
• Only CSE major undergraduate student data was used for this analysis.

Footnote 2: Assessment Results
• Considering the difficulty of the exam questions, homework, and final project, the average numeric score per outcome is translated as follows:
• An 80 percent score and above implies that the outcome is excellently met.
• A score between 65 and 80 implies that the outcome is satisfactorily met.
• A score between 50 and 65 implies that the outcome is marginally met.
• A score less than 50 percent implies that the outcome is unsatisfactorily met.
• The final score for each course outcome ranges over 1-4, where 1 is unsatisfactory, 2 is marginal, 3 is satisfactory, and 4 is excellent.
New Mexico Tech
Department of Computer Science and Engineering
Course Assessment Report
CSE 331: Computer Architecture
Fall 2010

Instructor: Jun Zheng

Course Outcomes
1. Ability to design, implement, and test small software programs;
2. Knowledge of principles of modern computer system design;
3. Knowledge of principles of instruction set design;
4. Knowledge of basic pipelining techniques;
5. Knowledge of instruction level parallelism;
6. Knowledge of memory hierarchy design: Cache, virtual memory, virtual
machines;
7. Ability to evaluate the effectiveness of different computer architectures for
specific uses;

Assessment Methodology:

• Each course outcome was tied to one or more questions in the homework,
quizzes, midterm exams and final exam.
• A formula was used to compute a normalized weighted sum from the scores for
those questions.
• A table containing one numeric score per student per course outcome was
computed.
• The table was aggregated along course outcomes to obtain a numeric score per
outcome averaged over the whole class.

For example, the numeric score for Course Outcome 1 for student S1 was obtained by
(1) taking S1's score on HW1 and dividing it by the maximum possible points on
    HW1, i.e., normalizing it to [0, 1];
(2) doing a similar computation on S1's scores on the other parts that
    correspond to Course Outcome 1;
(3) adding up the values obtained in (1) and (2) after multiplying each by its
    percentage in the final score (for example, the percentage for HW1 is 6.25%);
(4) dividing the value obtained in (3) by the sum of the percentages of all the
    parts related to Course Outcome 1;
(5) multiplying the value obtained in (4) by 100;
(6) repeating the above for each student;
(7) averaging these numbers over the whole class to get a numeric assessment
    score for Outcome 1 averaged over the whole class.
The same process was applied for each outcome. A compact sketch of these steps
appears below.
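This sketch (ours; all scores are hypothetical) uses the component weights of Course Outcome 1 from the formula table below, where each weight is the component's percentage of the final course grade:

    # Compact sketch of steps (1)-(7); normalized and weights are dicts
    # keyed by component name.
    def outcome_score(normalized, weights):
        """Weighted sum of normalized scores, divided by the total weight."""
        total = sum(weights.values())
        return 100 * sum(normalized[c] * weights[c] for c in weights) / total

    weights = {"HW3": 0.025, "HW4": 0.025, "ME2Q5": 0.06,
               "FEQ5": 0.045, "FEQ6": 0.045}
    s1 = {"HW3": 0.90, "HW4": 0.80, "ME2Q5": 0.70, "FEQ5": 0.75, "FEQ6": 0.80}
    print(round(outcome_score(s1, weights), 1))  # -> 77.1 for this made-up student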

The formulas used for each Course Outcome were:


Course Outcome Formula
1 100*(HW3*2.5% + HW4*2.5% + ME2Q5*6% + FEQ5*4.5% +
FEQ6*4.5%)/(2.5% + 2.5% + 6% + 4.5% + 4.5% )
2 100*(HW1*2.5% + QZ1*2% + ME1Q1*3% + ME1Q2*2% +
ME1Q3*3% + FEQ2*3% )/(2.5% + 2% + 3% + 2% + 3% + 3%)
3 100*(HW2*2.5% + P1*2.5% + ME1Q2*2% + ME1Q3*3%)/
(2.5% + 2.5% + 2% + 3% )
4 100*( HW2*2.5% + QZ2*2% + ME1Q4*3% + ME1Q5*3% +
FEQ2*1.5% )/( 2.5% + 2% + 3% + 3% + 1.5%)
5 100*( HW4*2.5% + QZ5*2% + P3*2.5% + P4*2.5% +
ME2Q4*6% + ME2Q5*6% + FEQ3*4.5% + FEQ6*4.5%)/(2.5%
+ 2% + 2.5% + 2.5% + 6% + 6% + 4.5% + 4.5%)
6 100*(HW3*3% + QZ3*2% + QZ4*2% + QZ6*2% + P2*2.5% +
ME1Q6*4% + ME1Q7*2% + ME2Q2*6% + ME2Q3*6% +
FEQ4*4.5% + FEQ5*4.5% + FEQ7*3%)/(3% + 2% + 2% + 2%
+ 2.5% + 4% + 2% + 6% + 6% + 4.5% + 4.5% + 3%)
7 100*(HW1*2.5% + QZ1*2% + QZ2*2% + ME1Q1*3% +
ME1Q2*2% + ME1Q3*3% + FEQ2*3% + FEQ7*3%)/(2.5% +
2% + 2% + 3% + 2% + 4% + 3% + 3%)
(Notation: HW1 represents the normalized score on Homework 1; QZ1 represents the
normalized score on Quiz 1; P1 represents the normalized score on Project 1; ME1Q1
represents the normalized score on question 1 of midterm exam 1; ME2Q1 represents the
normalized score on question 1 of midterm exam 2; FEQ1 represents the normalized
score on question 1 of final exam.)

Performance Metric

Based on the answers to the questions, it was felt that a 60 percent score implied that the
basic concept had been grasped, a score of 80% or more indicated a superior
performance, a score less than 40 percent implied that the basic concepts had not been
learned with a 40 to 60 percent score implying a marginal state:

Class average      Performance
< 40%              Unsatisfactory
40 – 59%           Marginal
60 – 79%           Satisfactory
80 – 100%          Excellent

Results

The following numeric assessment scores were obtained through the process outlined
above:

Outcome    Class Average    Performance
1          75.82            Satisfactory
2          86.12            Excellent
3          79.51            Satisfactory
4          70.23            Satisfactory
5          77.83            Satisfactory
6          66.12            Satisfactory
7          76.63            Satisfactory

Thus, Outcome 2 was excellent, and Outcomes 1 and 3 to 7 were satisfactory.

Relationship of the Course to Program Outcomes:


This course affects three program outcomes:
1a. The ability to design, implement, and test small software programs;
3. Knowledge of the theoretical concepts of computing and of the fundamental principles
of programming languages, systems, and machine architectures;
5. Technical communication skills in written and oral form;

We will deal with each in turn by substituting a numeric value for performance (1, 2, 3,
and 4 for unsatisfactory, marginal, satisfactory, and excellent respectively) and computing
the average.

Program outcome 1a involves course outcome 1.

Course Outcome    Class Average    Performance    Overall
1                 75.82            3              3

Program outcome 3 involves course outcomes 2, 3, 4, 5, 6, and 7.

Course Outcome    Class Average    Performance    Overall
2                 86.12            4              3.17
3                 79.51            3
4                 70.23            3
5                 77.83            3
6                 66.12            3
7                 76.63            3

Program outcome 5 involves course outcome 7.

Course Outcome    Class Average    Performance    Overall
7                 76.63            3              3

Comments and Remedial Actions:

The following table shows the comparison of three years’ scores for the outcomes.

Outcome    Fall 2008    Fall 2009    Fall 2010
1          70.6         76.09        75.82
2          70.7         85.43        86.12
3          70.4         81.26        79.51
4          66.3         70.72        70.23
5          77.6         76.11        77.83
6          57.0         63.68        66.12
7          69.3         79.07        76.63

There was a significant improvement in the scores of most outcomes in Fall 2009
compared with Fall 2008, which is the result of giving more homework assignments
and more in-class examples. Those strategies were kept in Fall 2010, and the
scores were maintained well.

In Fall 2010, course projects were assigned to help students understand certain
course materials (for outcomes 3, 5, and 6). The results are not significant but
are encouraging. For example, an improvement can be observed in the score for
outcome 6 (memory hierarchy), which is consistently the weakest among all
outcomes. Therefore, I plan to improve the quality of the course projects next
time so that students can gain hands-on experience with the course materials.

Another improvement in Fall 2010 was the quality of the PowerPoint slides, which
was successful; there were few complaints about the slides. I will continue to
improve them.

Although I tried to encourage students to participate in in-class discussion,
the feedback I got was not very good, so I will look for new ways to make the
class more interactive.

A significant failure of the Fall 2010 semester was the newly adopted textbook.
I will move back to the one adopted in Fall 2008 and Fall 2009, but will improve
the quality of the homework assignments, since the homework was the primary
reason I switched textbooks.

CSE 342: Formal Language and Automata Theory
Assessment of Spring 2010 Class
(Instructor: Andrew Sung)

There were 20 undergraduate students and one graduate student in the class. This
assessment is based on data from the undergraduate students only.

Course Outcomes

At the end of the course, a student should


1. have understood the basic theorems on finite automata, regular languages, and
   regular expressions, and be able to solve basic problems about them;
2. have understood the basic theorems on context-free languages and pushdown
   automata, and be able to solve basic problems about them;
3. have been exposed to some other classes of languages and automata, e.g.,
   context-sensitive and recursively enumerable languages, and Turing machines.

Spring 2010 Course Description

• The course covered finite automata, regular languages, context-free languages,
  and pushdown automata, roughly corresponding to the major contents of chapters
  1-7 of the original Hopcroft, Motwani, Ullman textbook (first edition, 1979).
• Other topics, including Turing machines, recursively enumerable languages, the
  Chomsky hierarchy of languages, and decidability, were briefly discussed at
  appropriate times.
• Selected advanced topics on the theory of computation and computational
  complexity were occasionally discussed, mainly as motivating material.

Assessment Methodology

• The first exam covers finite automata and regular languages, corresponding to Outcome 1.
• The second exam is a take-home exam and covers topics corresponding to Outcome 1 and
Outcome 2.
• The final exam primarily covers context-free languages and pushdown automata,
corresponding to Outcome 2.
• No questions of relevance to Outcome 3 were given in any of the exams.

The average scores of the three exams of the class are interpreted according to the
performance metric established by the CS faculty for evaluating outcomes, as follows

Class average      Performance
< 40%              Unsatisfactory
40 – 59%           Marginal
60 – 74%           Satisfactory
75 – 100%          Excellent

Calculations based on the class's normalized average scores on the three exams
result in the assessment that the achievement of course outcomes 1 and 2 was
satisfactory, and that of outcome 3 unsatisfactory.

Implications for Program Outcomes

This entire course is devoted to one program outcome (knowledge of the theoretical
concepts of computing). So, substituting a numeric value for the entries in the
performance column in the above table (1, 2, 3, and 4 for unsatisfactory, marginal,
satisfactory, and excellent respectively) and computing the average gives a score of
(3 + 3 + 1) ÷ 3 ≈ 2.3 for that program outcome.

Comments:

• The majority of the Spring 2010 CSE 342 class had completed the new Math 221
  course (revised through consultation between the CSE and Math faculties), which
  is the prerequisite for CSE 342 and is taught by the Math department.
• The TA was responsible and effective.

CS 344: Design & Analysis of Algorithms
Assessment for Undergraduate CS Majors in
Fall 2010
Author: Subhasish Mazumdar

Course Learning Outcomes


At the end of the course, a student should be able to
1. use complexity as a metric for resources consumed by algorithms;
2. understand paradigms of algorithm design and techniques for analysis;
3. appreciate advanced data structures, combinatorial and graph algorithms;
4. prove the correctness of algorithms; and
5. grasp the theory of NP-completeness.


Course Assessment Methodology

• Each course outcome was tied to one or more questions in the Midterm and final
exam.

• A formula was used to compute a normalized weighted sum from the scores for
those questions.

• A table containing one numeric score per student per course outcome was com-
puted.

• The table was aggregated along course outcomes to obtain a numeric score per out-
come averaged over the whole class.

• Only data for undergraduate CS majors was used for this analysis.

The formulas used were


Course Formula
Learning linking outcome to
Outcome # graded items
1 100*( 1*MIDTERM:Q3 )
2 100*( 0.4*MIDTERM:Q6 + 0.4*FINAL:Q3 + 0.2*FINAL:Q4 )
3 100*( 0.4*MIDTERM:Q4 + 0.6*FINAL:Q5 )
4 100*( 1*MIDTERM:Q5 )
5 100*( 1*FINAL:Q6 )

(Notation: Midterm:Q3 represents the score on question Q3 on the Midterm exam divided
by the maximum possible points on that question.)
Example: Suppose we want to compute the numeric score for Course Learning Outcome
2 and the row for that outcome in the above table is of the form
2 100*(0.4*MIDTERM:Q7or8 + 0.4*FINAL:Q3 + 0.2*FINAL:Q4)
1. We get a number for a student S1 by
(a) taking S1 ’s score on the seventh/eighth question on the Midterm and dividing
it by the maximum possible points on that question;
(b) doing a similar computation on S1 ’s score on the third and fourth questions on
the final exam;
(c) adding up the three values obtained in the last two steps after multiplying
them by 40, 40, and 20 respectively;
2. We repeat the above for each undergraduate student; and
3. The numbers are averaged over all the students in the whole class.
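For concreteness, these three steps for one hypothetical student look as follows (all raw scores and maximum points below are invented):

    # Normalized scores: raw score divided by maximum points per question.
    s1 = {
        "MIDTERM:Q7or8": 14 / 20,
        "FINAL:Q3":      9 / 15,
        "FINAL:Q4":      6 / 10,
    }
    score = 100 * (0.4 * s1["MIDTERM:Q7or8"]
                   + 0.4 * s1["FINAL:Q3"]
                   + 0.2 * s1["FINAL:Q4"])
    print(round(score, 1))  # -> 64.0; the class value averages this over students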

Performance Metric

The grading was done to ensure that a 40 percent score implied that the basic con-
cept had been grasped, a score of 60 percent or more indicated a superior performance, a
score less than 30 percent implied that the basic concepts had not been learned, with a 30
to 40 percent score implying a marginal state:
Class average Performance threshold
< 30% Unsatisfactory
30 to < 40% Marginal
40 to < 60% Satisfactory
60 to 100% Excellent

Results

The following numeric assessment scores were obtained through the process out-
lined above:

Course Outcome    Class Average    Performance Score
1                 25.2             1
2                 38.7             2
3                 53.1             3
4                 34.8             2
5                 40.0             3

Average over all outcomes = 2.2
This entire course is devoted to one program outcome (knowledge of the theoretical con-
cepts of computing). So, we simply substitute a numeric value for the entries in the perfor-
mance column in the above table (1, 2, 3, and 4 for unsatisfactory, marginal, satisfactory,
and excellent respectively) and compute the average over all outcomes as shown above.

The numeric contribution of this course towards the program outcome knowledge of
the theoretical concepts of computing is the average over all outcomes as shown in the line
below the above table.

Remarks
The performance scores have decreased compared to last year. Over the last three
years, the numeric performance scores are as follows.
Outcome Number    2008    2009    2010
1                 3       2       1
2                 3       3       2
3                 3       3       3
4                 3       2       2
5                 2       3       3

Clearly, there is an overall reduction in performance this year. First, unlike
other years, there were far fewer students with a strong background in even
slightly rigorous logical reasoning. Second, a sizeable number of students did
not do well on the midterm but decided to keep going without making significant
adjustments. Third, at the end of the semester, there were a large number of
plagiarism cases, indicating that quite a few students were relying on others;
such reliance leads to poor examination scores.
• A surprising observation is the downward trend in Outcome 1 over the last three
  years. Since the related material is basically an application of material covered
  in CSE 122, I recommend more strenuous application of complexity analysis and
  recurrence equations in that course.
• Outcome 4, which concerns abstract logical concepts, was marginal last year as
  well. This is a long-standing problem that this course shares with CSE 342 Formal
  Language and Automata Theory. The only way to correct this problem is to improve
  the preparation in Discrete Mathematics as well as in the basic concepts of logic
  and rigorous proofs. I recommend that an appropriate Mathematics course be made
  compulsory for all CS sophomores.
• The degradation in Outcome 2 is basically a symptom of the overall poor showing
  this year, as discussed above, and is connected with Outcome 1.

• Outcomes 3 and 5 remain at a satisfactory level. This is good news because it
  shows that the attempt to make graph algorithms intuitive is succeeding and that
  the additional time and energy spent covering NP-completeness, the most difficult
  topic in the course, is paying off.

Remedial Actions

• Next time I teach this course, I shall spend time reviewing basic ideas of proofs,
  material not covered in the Discrete Mathematics course, as well as simple
  complexity analysis, material that is covered in CSE 122.

• The approach towards graph algorithms and NP-completeness is working and should
  be unchanged.

• I recommend that the department seriously consider teaching a theoretical
  concepts course that covers discrete mathematics, logic, and the notion of
  mathematical proofs.

• As before, students refrain from asking questions owing to the fear of losing
  prestige before their peers. In 2009, when I repeatedly asked for anonymous
  offline questions and answered them, it was greatly appreciated. This semester,
  I reduced the number of times I did so; next semester, I shall attempt to
  increase that number.

• The plagiarism cases are disturbing. They raise the question of the effectiveness
  of homework problems. One possibility is to add a one-hour tutorial session to
  this course.

Assessment for CS 351 Undergraduates in Spring 2010

Course Outcomes

At the end of the course, a student should have

1. Basic understanding of simulation concepts;
2. Basic understanding of optimization algorithms;
3. Basic understanding of systems modeling;
4. Basic skills for developing simulations and models;
5. Basic understanding of advanced complex systems models;
6. Awareness of rudimentary software engineering needs; and
7. Basic understanding of the impact of proper ethics on the profession and the
   individual.

Assessment Methodology:

This is broken into two parts: obtaining the course outcome scores, and mapping the
course outcomes to the department's program outcomes. The steps in assessing the
course outcomes are as follows:

1. Each course outcome was tied to one or more questions in the midterm and final
exam.
2. Formulas were used to compute a normalized class score for each question.
3. A table of weights per question per course outcome was computed.
4. The table was aggregated along outcomes to obtain a column of question
significance weights for each course outcome.
5. The normalized class score for each midterm and final exam question was
combined with the weights for each course outcome, generating a weighted
contribution from each question to each course outcome.
6. The weighted contributions from each test question to each course outcome were
summed for each course outcome to get an aggregated assessment measure of the
achievement of the course outcome.
7. The one graduate student in the class was excluded from this analysis.

For the midterm and final exams the scores for each student were known and some
questions were worth more than one point, so each student's score on each question was
used to generate the contributions to each outcome.

For both the midterm and the final the importance of each question to each course
outcome was determined and stored as a number in [0, 10] where 0 is “no significance”.
Each question was allowed to have a nonzero significance to only one course outcome.
These “significance” numbers were then converted to normalized weights for each
outcome by adding them up for all questions for the given course outcome, then dividing
each stored number by the total for that course outcome. The sum of each course
outcome's weights is then 1.000.

outcome weight = outcome significance / outcomes significance total

For the midterm exam the normalized numeric score of each question was computed by
dividing the number correct (out of 33 students) on each question by the number of
students, giving a normalized class score in the range [0.0, 1.0] for each question.

normalized question score = number students correct / total number of students

For the final exam the numeric normalized score of each question was computed by
averaging the students' grades on each question and dividing them by the maximum
score possible on that question, giving a normalized class score in the range [0.0, 1.0]
for each question.

normalized question score = average of all scores / max possible

For the numeric contribution of each question to each course outcome the normalized
class score for that question was multiplied by the course outcome weight for that
question/course outcome combination.

outcome contribution from question = outcome weight * normalized question score

The score for each course outcome was then computed by adding up the contributions of
each question to that course outcome.

outcome score = sum of outcome contributions from questions
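Putting these pieces together, a minimal Python sketch for one course outcome (the significance values and normalized class scores below are invented for illustration):

    # (significance in [0, 10], normalized class score in [0.0, 1.0])
    questions = [(4, 0.65), (10, 0.90), (6, 0.75)]

    total_significance = sum(sig for sig, _ in questions)
    outcome_score = sum((sig / total_significance) * score
                        for sig, score in questions)
    print(round(outcome_score, 3))  # -> 0.805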

Test Questions To Course Outcomes

Some of the test questions could have been applied to multiple course outcomes, so a
single course outcome was chosen for each question from among those the question
supported. Generally the choice was by which course outcome seemed most relevant,
but in some cases it was because that course outcome had few other questions that
applied. The first column is the question number from the midterm or final, the second
is the course outcome to which it applies, the third is the relevance measure (0 to 10) of
the question to the course outcome, the fourth is the weight computed from the
relevance values of all questions for that course outcome, the fifth is the actual
composite grade of the class on that question, normalized to 0.0 to 1.0, and the
last is the contribution of that grade to the course outcome to which the question
applies. The result is the following table:

Question Outcome Relevance Weight Grade Contribution
Midterm - 1 1 4 4/14 3.90/6.00 0.1857
Midterm - 2 2 3 3/6 4.50/10.00 0.2250
Midterm - 3 3 8 8/25 0.80/3.00 0.3200
Midterm - 4 4 2 2/15 1.90/2.00 0.1267
Midterm - 5 4 2 2/15 1.20/2.00 0.0800
Midterm - 6 4 6 6/15 3.50/4.00 0.3500
Midterm - 7 2 3 3/6 1.00/2.00 0.2500
Midterm - 8 5 8 8/28 3.00/4.00 0.2143
Final-1 5 10 10/28 6.60/7.00 0.3367
Final-2 3 7 7/25 3.00/3.00 0.2800
Final-3 5 10 10/28 4.70/5.00 0.3357
Final-4 1 10 10/14 4.50/5.00 0.6429
Final-5 4 5 5/15 5.70/6.00 0.3167
Final-6 3 10 10/25 5.80/7.00 0.3314

Tallying these results by outcome gives the final scores for each course outcome.

Course Outcome Results

The following numeric assessment scores for the course outcomes were obtained:

Outcome    Class Average    Category
1          0.83             Satisfactory
2          0.48             Unsatisfactory
3          0.93             Satisfactory
4          0.87             Satisfactory
5          0.89             Satisfactory
6          -                -
7          -                -

Based on the answers to the questions, it was felt that a score of less than 50
percent implied that the tested concepts had not been learned, a 50 to 80 percent
score implied marginal capability, an 80 to 95 percent score implied satisfactory
capability, and a score of 95 percent or above meant there was an excellent grasp
of the subject matter. Thus, all of the measured outcomes except Outcome 2 were
satisfactory. Outcomes 6 and 7 were never measured via test questions on the
midterm or final.

Throughout the course, quizzes and assignments were used to evaluate grasp of the
basics, and the course material was adjusted to ensure additional exposure to
advanced concepts, as the students were having few difficulties with the basics.
The tests focused on subject matter beyond the basics, and as such the above scores
reflect the students' grasp of the advanced concepts. The approach was successful,
but the assessment, being based on testing primarily the advanced concepts,
especially in area 2, was not indicative of the students' levels of achievement.

Contributions of Course Outcomes to Program Outcomes

The course outcomes have been related to the stated program outcomes via the
mapping given below. A program "relevance weight" was generated for each related
outcome pair; the weights and mappings are listed in the following table:

Course Outcome    Applicable Program Outcome    Weight
1                 1a                            2
2                 2                             2
3                 3                             2
4                 4                             3
5                 2                             2
6                 N/A                           -
7                 N/A                           -

The result for each program outcome is as follows:

Program outcome 1a: Uses only course outcome 1 – Satisfactory
Program outcome 1b: Not applicable.
Program outcome 2: Uses course outcomes 2 and 5 – Satisfactory
Program outcome 3: Uses only course outcome 3 – Satisfactory
Program outcome 4: Uses only course outcome 4 – Satisfactory
Program outcome 5: Not applicable.
Program outcome 6: Not applicable.
Program outcome 7: Not applicable.

Discussion:

The lower levels seem to be indicative of the redirection of the test questions
toward less emphasis on the basics and more emphasis on advanced knowledge. This
can be remedied by targeting the basics more fully in the tests in the future.

CSE/IT353: (Data and Computer Communication)
Class Assessment (FALL2010)

Class outcomes:
1. The basic concepts, definitions, and mechanisms, at different hardware and
software levels, which constitute the underlying operations of data and computer
communication systems.

2. Design and applications of the local, metropolitan, and wide area networks (LAN,
MAN, and WAN, respectively); including the utilization of the most advanced
wired and wireless technologies and protocols.

3. The internetworking technology and applications: relays and protocols.

The methodology that we deployed to calculate the percentages for the assessment is
as follows:

1) For each of the class outcomes listed above, identify the corresponding student
grades obtained in exams, homework, projects, and quizzes.

2) For exams only, divide the questions in the exam sections (T/F, MCQ, SA) into
categories, with each question relating to one of the class outcomes and with
minimal (or no) overlap between outcomes. Compute the average percentage for each
category over the different exam sections, and weight each average by a scale that
reflects the importance of its contribution to the outcome evaluation (assessment).
We also weight True/False questions more than MCQ questions because of the latter's
degree of difficulty, and because of some complaints about the ambiguity of MCQ
questions.

3) For each percentage obtained in (1) above that pertains to a certain outcome,
weight it with a corresponding factor (depending on how many different grades, or
partial grades, contribute to that outcome). For example, if an exam is the only
contributor, we take the exam percentage as the final assessment percentage. Exams
are also factored by a degree of difficulty, since the highest obtained student
grade sets the rest of the class's normalized scores. In another case, if all of
the different contributing subjects (quizzes, homework, and projects) are present,
then we reweight them differently (every semester) based on the judgment of the
instructor, with respect to the amount of student effort and the degree of
difficulty of each subject.

Assessment Result: (The percentages represent the students' weighted average scores
over the exams, quizzes, and homework related to each of the above class outcomes.)

Total score for outcome #1 is 77 pts out of 100 pts (77%)
Total score for outcome #2 is 72 pts out of 100 pts (72%)
Total score for outcome #3 is 78 pts out of 100 pts (78%)
Performance Metrics:

The scale that has been used to assess each of the class outcomes is as follows:

Class average      Performance
< 40%              Unsatisfactory (1)
40 – 54%           Marginal (2)
55 – 79%           Satisfactory (3)
80 – 100%          Excellent (4)

Outcome    Class Average    Performance
1          77%              Satisfactory
2          72%              Satisfactory
3          78%              Satisfactory

Conclusion
Possible further improvements:

Outcome 1: More in-class examples, in addition to the homework exercises and
quizzes, on the basic signaling and data encoding topics.

Outcome 2: More assignments on LAN/MAN/WAN protocol design, including programming
assignments, paper homework, and reports. Depending on the speed at which topics
are covered in class, I am still planning to possibly assign an additional, third
programming project in the area of the LLC data-link sublayer (error/flow control)
over TCP/IP socket connections.

Outcome 3: More on the most advanced internetworking protocols, especially in the
wireless networking (wireless TCP for WSNs) and fiber optics domains (examples are
packet over ATM, SONET, and UDWDM).

As in last year's assessment, the most troublesome outcome is the first one, on the
basic definitions of the subject; yet its score moved from 69% last year to 77%
this year. The remaining outcomes remain comfortably satisfactory. Still, the
internetworking subject remained at 78% and needs more attention next year. I will
revisit and revise the T/F and MCQ questions, based on mid-semester student
feedback/surveys, to make them more precise and clear. I will also continue to show
a sample exam to students and get their feedback, which was very helpful for most
of them last year.

Scores for Program Outcomes:

This course affects the third and the fourth program outcomes. We will deal with
each by substituting a numeric value for performance (1, 2, 3, and 4 for
unsatisfactory, marginal, satisfactory, and excellent, respectively) and computing
the average.

1) The third program outcome, “knowledge of fundamental principles of programming
languages, systems, and machine architectures,” involves the first course outcome.

To strengthen the students' knowledge of the physical layer of the data and
computer communication subject, some basic architectural hardware and operating
system software components are covered in the class. Hence, there is a direct
mapping between the first class outcome and the third program outcome, specifically
its systems and architecture components.

Class Outcome    Class Average    Performance    Overall
1                77%              3              3

2) The fourth program outcome, “exposure to one or more computer science
application areas,” involves the second and third class outcomes.

Clearly, the second and third class outcomes contribute directly to the fourth
program outcome, exposing the students to one or more computer science application
areas, such as the design of LAN, MAN, and WAN wired/wireless/optical link
varieties, covering both the underlying topologies and the protocols.

Class Outcome    Class Average    Performance    Overall
2                72%              3              3
3                78%              3
CS 373: Introduction to Database Systems
Assessment for Undergraduate CS Majors in
Spring 2010
Author: Subhasish Mazumdar
Course Learning Outcomes
At the end of the course, a student should be able to
1. build conceptual models using Entity-Relationship (ER) diagrams;
2. understand the theory and use of the relational model;
3. convert from an ER schema to a relational schema;
4. appreciate the impact of physical data organization;
5. grasp the introductory concepts behind database concurrency control, recovery, in-
tegrity, security, and distributed databases;
6. design and implement a database on the Oracle DBMS using SQL and PL/SQL
starting with the description of a small real-world problem.

Course Assessment Methodology


• Each course outcome was tied to either one or more questions in the Midterm and
final exam or an end-semester database design-and-implementation project.
• A formula was used to compute a normalized weighted sum from the scores for
those questions.
• A table containing one numeric score per student per course outcome was com-
puted.
• The table was aggregated along course outcomes to obtain a numeric score per out-
come averaged over the whole class.
• Only data for undergraduate CS majors was used for this analysis.
The formulas used were
Course Formula
Learning linking outcome to
Outcome # graded items
1 100*( 0.9*MIDTERM:Q2 + 0.1*FINAL:Q4 )
2 100*( 0.5*MIDTERM:Q4 + 0.1*FINAL:Q2 + 0.4*FINAL:Q3 )
3 100*( 1.0*MIDTERM:Q3 )
4 100*( 1.0*FINAL:Q4 )
5 100*( 1.0*FINAL:Q6 )
6 100*( 0.2*FINAL:Q5 + 0.8*FINAL:Q7 )

(Notation: Midterm:Q3 represents the score on question Q3 on the Midterm exam divided
by the maximum possible points on that question. The end-semester database design and
implementation project appears as the last question on the finals.)

Example: Suppose we want to compute the numeric score for Course Learning Outcome
2 and the row for that outcome in the above table is of the form
2 100*(0.4*MIDTERM:Q7or8 + 0.4*FINAL:Q3 + 0.2*FINAL:Q4)

1. We get a number for a student S1 by

(a) taking S1 ’s score on the seventh/eighth question on the Midterm and dividing
it by the maximum possible points on that question;
(b) doing a similar computation on S1 ’s score on the third and fourth questions on
the final exam;
(c) adding up the three values obtained in the last two steps after multiplying
them by 40, 40, and 20 respectively;

2. We repeat the above for each undergraduate student; and

3. The numbers are averaged over all the students in the whole class.

Performance Metric

The grading was done to ensure that a 70 percent score implied that the basic con-
cept had been grasped, a score of 80 percent or more indicated a superior performance, a
score less than 50 percent implied that the basic concepts had not been learned, with a 50
to 70 percent score implying a marginal state:

Class average      Performance threshold
< 50%              Unsatisfactory
50 to < 70%        Marginal
70 to < 80%        Satisfactory
80 to 100%         Excellent

Results

The following numeric assessment scores were obtained through the process out-
lined above:

Course Outcome    Class Average    Performance Score
1                 93.4             4
2                 72.9             3
3                 84.7             4
4                 93.8             4
5                 79.2             3
6                 80.8             4
Average over all outcomes = 3.7

This course is optional for the B.S. in Computer Science program. Hence, it is not used for
program outcome computation.

Remarks
Let us compare the numeric scores with those of the last two years. Clearly, there is
a marked improvement in 2010.

Course Outcome Number    2008    2009    2010
1                        4       4       4
2                        3       3       3
3                        4       3       4
4                        1       2       4
5                        NG      1       3
6                        4       4       4

• As in previous years, it is gratifying that the first and last course learning outcomes
have excellent scores on average. This means that the overall goal of the course is
met: students can take problem requirements, design a database, and implement
queries and programs on it, i.e., come up with a database solution.

• It is also gratifying to observe that Course Learning Outcome 4 has moved from
marginal to excellent. As indicated last year, my hypothesis was that there was
not enough turn-around time for the homework on Physical Organization; conse-
quently, this year, I tested that hypothesis with a more focused homework on this
topic. It appears to have worked. Greater testing is necessary.

• Course Learning Outcome 5 has also moved from unsatisfactory to satisfactory. This
  outcome is based on an introduction to concurrency and recovery, topics covered in
  the graduate database course. As in past years, it was informally observed that
  students who did answer that question performed well. That led me to suspect
  that a majority of students chose not to spend much time preparing for this topic,
  knowing that there would be just one question on it in the finals and that it would
  carry a relatively small number of points. I shared my suspicion with the class.
  That may have led to a change in the students' approach to preparation.

• As in the last two years, it was observed that students had difficulty in grasping
the theory of the relational model because logic was involved. It is hoped that a
revamped Discrete Mathematics course will mitigate this problem.

Remedial Actions

• To move Outcome 2 to Excellent, I recommend that the department seriously
  consider teaching a theoretical concepts course that covers discrete mathematics,
  logic, and the notion of mathematical proofs.

• Next year, I shall continue to encourage students not to neglect the material
  underlying Outcome 5. Even though it is really an introduction to concepts they
  will see in the graduate database course, I have found that students enjoy the
  material.

• I shall attempt to ensure sufficient turn-around time for the Physical
  Organization homework.

CS 382: Legal, Ethical and Social Issues in Information Technology
Assessment for Computer Science Undergraduates in Fall 2010

Course Outcomes

At the end of the course, a student should be able to

1. Apply theories of existing law in the areas of computer and information
   technology;
2. Evaluate alternative courses of action in situations where there is not presently
   applicable law;
3. Recognize ethical and societal issues in technology.

Assessment Methodology:

The course assignments are not currently separated by course outcome. All assignments
fully integrate all course outcomes.

The numeric scores were calculated by:

1. Taking the mean of student scores on assignment 1;
2. Similarly taking the mean of student scores for each remaining assignment;
3. Averaging the means from all of the assignments into a single percentage score.

Performance Metric

Based on the answers to the questions, it was felt that a score of 61 percent or
more implied that the basic concepts had been grasped, a score of 78 to 89 percent
indicated a satisfactory performance, and a score of 90 to 100 percent indicated an
excellent understanding of both the concepts and the rationale underlying those
concepts. A score of less than 60 percent implied that the basic concepts had not
been learned.

Class average      Performance
< 60%              Unsatisfactory
61 – 77%           Marginal
78 – 89%           Satisfactory
90 – 100%          Excellent

Results

The following numeric assessment scores were obtained through the process outlined
above:

Outcome    Class Average    Performance
1          82.25            Satisfactory
2          82.25            Satisfactory
3          82.25            Satisfactory

Thus, overall outcomes were satisfactory.

As a point of interest, the same scores were computed after removing “0”s which
represented assignments not turned in. This yields the following table:

Outcome    Class Average    Performance
1          95.25            Excellent
2          95.25            Excellent
3          95.25            Excellent

Therefore, excluding the “0” grades for work that was not submitted by the students
resulted in an overall performance of Excellent.
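The two computations can be sketched as follows (the assignment scores below are invented; zeros mark work not turned in):

    scores = [95, 98, 0, 92, 0, 96]  # hypothetical assignment scores

    mean_all = sum(scores) / len(scores)              # zeros included
    submitted = [s for s in scores if s > 0]
    mean_submitted = sum(submitted) / len(submitted)  # submitted work only

    print(mean_all, mean_submitted)  # -> 63.5 95.25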

As a result of the number of assignments not turned in and failures to attend
class, 2 students in a fall class of 13, or about 15 percent of the class, will be
required to repeat the course. This is a significant reduction from the average of
30% of students retaking the class in prior semesters. See the actions taken as a
result of previous assessments, below.

Scores for Program Outcomes


This course affects one program outcome. We will deal with this outcome by
substituting a numeric value for performance (1, 2, 3, and 4 for unsatisfactory, marginal,
satisfactory, and excellent respectively). We will use the scores based on assignments
submitted, rather than the lower scores which included assignments not submitted.

Program outcome: the awareness of the ethical and societal impact of developments in
the field of computer science involves all course outcomes.

Outcome    Class Average    Performance    Overall
1          95.25            4              4
2          95.25            4
3          95.25            4

Thus, the overall impact on the program outcome is excellent. Note that with the
calculation including assignments not submitted, which includes the two students
who will have to retake the course to complete their degree, the overall impact on
the program outcome is Satisfactory.

Remedial Actions:

The following remedial actions are being contemplated.

• Evaluate developing new assignments for Fall 2011 to assess Course Outcomes
  1, 2, and 3 separately. The new assignments will need to have a primary focus
  on each discrete course outcome, but will not exclude the other interrelated
  outcomes.
• The students appear to lack an overall understanding of the significance of the
  issues addressed in this course. Other department courses could expand the
  components addressing the value of learning the legal and ethical issues in
  computer science to prepare the students for this course.
• Alumni surveys could include questions regarding the professional value of the
  legal and ethical issues addressed in this course, for use in discussing the
  purpose of the class each semester.

The following remedial actions have been taken as a result of assessments.

• The department now requires Software Engineering (CS 326) as a pre- or
  co-requisite for CS 382. Students exposed to courses with larger code projects
  have an expanded ability to understand the complexity of legal and ethical issues
  related to software quality and the computer science profession generally.
• Department Chair Dr. Lorie Liebrock is now attending a class session each
  semester and emphasizing to the students the importance of understanding law
  and ethics regarding technology.

5/26/2011 Dongwan Shin

Class Assessment Report: CSE 389 Internet and Web Programming in Fall 2010

Learning Outcomes, Assessment Methods (footnote 1), and Assessment Results (footnote 2):

1. Knowledge of web and internet protocols
   - Assessment method: Question 1 in the midterm exam
   - Formula used: 10*(1*MidtermQ1)
   - Result: 3. The class average was 77.8; based on footnote 2, this item is satisfactorily met.

2. Knowledge of the semantics and syntax of HTML, JavaScript, Servlets/JSP, and XML-based web services
   - Assessment method: Questions 1 through 8 in the final exam, and homework 1 through 5
   - Formula used: 100*(0.5*FinalQ1-8/80 + 0.5*HW1-5/550)
   - Result: 3. The class average was 65.7, and this item is satisfactorily met.

3. Knowledge of internet architecture and application models based on it
   - Assessment method: Question 1 in the midterm and question 1 in the final exam
   - Formula used: 10*(0.5*MidQ1 + 0.5*FinalQ1)
   - Result: 3. The class average was 71.5, and this item is satisfactorily met.

4. Hands-on experience with various design techniques/patterns
   - Assessment method: Questions 9 and 10 in the final exam
   - Formula used: 10*(0.5*FinalQ9 + 0.5*FinalQ10)
   - Result: 3. The class average was 75.6, and this item is satisfactorily met.

5. Ability to develop state-of-the-art web-based applications
   - Assessment method: Class project
   - Formula used: (1*Project)
   - Result: 4. The class average was 93.5, and this item is excellently met.

Footnote 1 – Assessment Methodology
• Each course outcome was tied to one or more questions in the midterm exam, comprehensive final exam, individual class project, or homework.
• A formula was used to compute a normalized weighted sum from the scores for those questions, the class project evaluation, and homework.
• A table containing one numeric score per student per outcome was computed.
• The table was aggregated along outcomes to obtain a numeric score per outcome averaged over the whole class.
• Only CSE major undergraduate student data was used for this analysis.

Footnote 2 – Assessment Results
• Considering the difficulty of the exam questions, homework, and final project, the average numeric score per outcome is translated as follows:
• An 80 percent score and above implies that the outcome is excellently met.
• A score between 65 and 80 implies that the outcome is satisfactorily met.
• A score between 50 and 65 implies that the outcome is marginally met.
• A score less than 50 percent implies that the outcome is unsatisfactorily met.
• The final score for each course outcome ranges over 1-4, where 1 is unsatisfactory, 2 is marginal, 3 is satisfactory, and 4 is excellent.
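To make footnote 1 concrete, here is a minimal Python sketch of the normalized weighted sum for Outcome 1. The student scores below are hypothetical, not actual class data (the real class average was 77.8):

# Outcome 1 uses a single component, midterm question 1, with formula 10*(1*MidtermQ1).
# Each raw score is normalized by its maximum points, weighted, and scaled to 0-100.
students = {
    "S1": {"MidtermQ1": 8.0},   # hypothetical raw points (max 10)
    "S2": {"MidtermQ1": 7.5},
    "S3": {"MidtermQ1": 7.9},
}
MAX_POINTS = {"MidtermQ1": 10.0}
WEIGHTS = {"MidtermQ1": 1.0}    # the weights for an outcome sum to 1

def outcome_score(raw):
    """Normalized weighted sum, scaled to a 0-100 percentage."""
    return 100.0 * sum(WEIGHTS[q] * raw[q] / MAX_POINTS[q] for q in WEIGHTS)

per_student = {s: outcome_score(raw) for s, raw in students.items()}
class_average = sum(per_student.values()) / len(per_student)
print(round(class_average, 1))  # 78.0 for these sample scores

The same pattern extends to multi-component outcomes by adding entries to WEIGHTS and MAX_POINTS.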

Conclusion

- Compared to the last assessment of this course, this year's result shows that Outcome 3 improved from "unsatisfactory" to "satisfactory." More class sessions on basic concepts of TCP/IP-based network protocols and architecture seem to have helped improve this outcome.
- Outcome 2 could be improved by having more real-life examples and homework on how and where XML is used, since some students seem to have had difficulty understanding and using XML to solve the given problems.
Outcomes: CS423 - Compiler Writing

Course Outcomes:
Students should be able to

1. Essential concepts: understand the essential concepts, mechanisms, and algorithms used in compilers;
2. Algorithmic tradeoffs: determine tradeoffs between algorithms for compilers for different settings;
3. Decision triple: analyze a language, architecture, and application triple to determine which optimizations are important and what intermediate representations are needed to support them;
4. Algorithm implementation: implement lexical analysis, parsing, semantic analysis, code generation, and basic optimizations;
5. Software engineering: demonstrate ability to apply software engineering process;
6. Large programs: demonstrate ability to program substantial software systems;
7. Technical communication: demonstrate ability to present technical information in written and spoken form;
8. Group projects: demonstrate ability to manage substantial group projects.

(Assessment through homework, exams, individual projects, and team projects)

Assessment Methodology:
• Each course outcome was tied to one or more questions on an exam and/or portions of a project.
• A formula was used to compute a normalized weighted sum from the scores for those questions.
• The numeric score per outcome was averaged over the whole class.
• Data for all students who completed the course was used for this analysis.
The formulas used were:
Course Outcome   Formula
1   100% * Average(Quizzes, Average(FQ1, FQ3, FQ5), Average(MQ1-MQ7))
2   100% * Average(FQ2)
3   75% * Average(FQ6, FQ7, FQ8, FQ9, FQ10) + 25% * Average(L9, L10)
4   100% * Average(Average(MQ8, MQ9, FQ4, LFQasm, LFQc++), Average(L1, L4, L6, L7, L11))
5   75% * Average(D1, D2, D3, D4, D5) + 25% * Average(L2, L5, L8)
6   100% * Average(I1, I2, I3)
7   100% * Average(D6, D7, PN)
8   50% * (I1eval + I2eval + I3eval + Deval) + 50% * Instructor_Evaluation
(Notation: MQ3 represents the score on question 3 of the midterm exam divided by the maximum possible points on that question; FQ indicates a final exam question; LFQasm indicates the assembly programming lab final exam; LFQc++ indicates the C++ programming lab final exam; D indicates a design project component; L indicates a lab; I indicates an implementation component; PN indicates the project notebook; and I# or D followed by "eval" means the evaluations of a student's performance as a group member.) The overlap in the use of measures across outcomes is addressed in the Actions section following Program Outcomes. Note that outcome 4 relates to what is implemented in the large programs of outcome 6, which explains why those formulas are the same.
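As an illustration, here is a minimal Python sketch of evaluating the Outcome 3 formula above, assuming each input has already been normalized to [0, 1] as the notation describes. The sample values are hypothetical, not actual class data:

def average(*xs):
    return sum(xs) / len(xs)

# Hypothetical normalized scores for one student.
FQ6, FQ7, FQ8, FQ9, FQ10 = 0.9, 0.8, 0.85, 0.7, 0.95  # final exam questions
L9, L10 = 0.88, 0.92                                   # labs

outcome3 = 0.75 * average(FQ6, FQ7, FQ8, FQ9, FQ10) + 0.25 * average(L9, L10)
print(f"Outcome 3 score: {100 * outcome3:.1f}%")  # 85.5% for these values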
Performance Metric
The performance metric to analyze outcomes for this course is as follows:
Excellent corresponds to greater than 90 percent.
Satisfactory corresponds to 70 to 90 percent.
Marginal corresponds to 60 to 70 percent.
Unsatisfactory corresponds to less than 60 percent.

Results:
The outlined process results in numeric assessment scores:

Outcome   2010 Class Average   2011 Class Average   Performance
1         83.4                 84.7                 Satisfactory
2         80.0                 87.3                 Satisfactory
3         84.1                 81.2                 Satisfactory
4         84.9                 80.1                 Satisfactory
5         82.3                 86.1                 Satisfactory
6         87.3                 84.8                 Satisfactory
7         88.8                 89.1                 Satisfactory
8         88.8                 81.9                 Satisfactory

Thus, all outcomes were satisfactory. The performance measures translate as follows: Excellent is 4,
Satisfactory is 3, Marginal is 2, and Unsatisfactory is 1.

Note that in 2011, the class averages include two students who did not successfully complete the course. They also include zeros for one student on the design project and final exam; this student had a family emergency and received an incomplete in the course. If these scores were removed, the results would improve and some outcomes would likely be excellent.
Actions:
The following is being considered:
• This year the writing and presentations were graded separately, and the separately graded design discussion and presentation were used for the technical communication evaluation.
Program Outcomes:
Three course outcomes (CO) for this course are also program outcomes (PO). These use the same
equations as shown in course outcomes above.

CO   PO   Description               Class Average   Performance    Score
6    1b   Large Programs            84.8            Satisfactory   3
7    5    Technical Communication   89.1            Satisfactory   3
8    6    Group Projects            81.9            Satisfactory   3

Thus, the three course outcomes that provide a measure of program outcomes are all satisfactory.
Actions:
The following is being considered:
• There was a decline in student performance in a number of categories. It was discovered late in
the semester that the teaching assistant was not providing the quality or level of assistance to
students that has been provided in the past and is expected for this course. This was not reported
to the instructor until the last project was submitted, so intervention was not possible.
• For next year: The TA from last year will not be used again for this course and the new TA will
be more closely supervised.

CS423: Based on course evaluations:
• The most significant criticism by students was that the teaching assistant was not
helpful/knowledgeable and the instructor was not available all of the time. Although there were
other comments, e.g., about problems with the infrastructure, these were actually created by the
TA delivering an incorrect (broken) version of the infrastructure. A better TA is being recruited to
help address this problem. Reducing teaching load next fall will help address instructor
availability.

5/26/2011 Dongwan Shin

Class Assessment Report: CSE 441 Cryptography and Applications in Fall 2010

Course Learning Outcomes, Assessment Methods (footnote 1), and Assessment Results (footnote 2):

1. Knowledge of the security goals of cryptography
   - Assessment method: Questions 1 and 2 in the final exam
   - Formula used: 10*(0.5*FinalQ1 + 0.5*FinalQ2)
   - Result: 4. The class average was 84.4; based on the footnoted assessment results, this item is excellently met.

2. Knowledge of mathematical concepts behind modern cryptography
   - Assessment method: Questions 3, 5, and 6 in the final exam
   - Formula used: 10*(0.3*FinalQ3 + 0.4*FinalQ5 + 0.3*FinalQ6)
   - Result: 3. The class average was 72.4; this item is satisfactorily met.

3. Knowledge of cryptographic protocols, techniques, and applications
   - Assessment method: Questions 7 and 8 in the final exam
   - Formula used: 10*(0.5*FinalQ7 + 0.5*FinalQ8)
   - Result: 2. The class average was 64.5; this item is marginally met.

4. Ability to evaluate cryptographic protocols, techniques, and applications
   - Assessment method: Questions 2, 7, 9, and 10 in the final exam
   - Formula used: 10*(0.3*FinalQ2 + 0.3*FinalQ7 + 0.2*FinalQ9 + 0.2*FinalQ10)
   - Result: 3. The class average was 68.00; this item is satisfactorily met.

5. Ability to design, implement, and test security applications based on applied cryptography
   - Assessment method: 1st and 2nd homework
   - Formula used: 1*(0.5*HW2 + 0.5*HW4)
   - Result: 4. The class average was 89.00; this item is excellently met.

6. Ability to work in a group to develop cryptographic applications in order to solve security problems
   - Assessment method: Class project (group-based, 3 students per group)
   - Formula used: 1*(1*Project)
   - Result: 4. The class average was 89.56; this item is excellently met.

7. Technical communication skills in written and oral form
   - Assessment method: Class project (including presentation and final report); the assessment of this item is based on two project deliverables: technical reports and the in-class presentation
   - Formula used: 1*(1*Project)
   - Result: 4. The class average was 89.56; this item is excellently met.

Footnote 1 – Assessment Methodology
• Each course outcome was tied to one or more questions in the comprehensive final exam, individual class project, or homework.
• A formula was used to compute a normalized weighted sum from the scores for those questions, the class project evaluation, and homework.
• A table containing one numeric score per student per outcome was computed.
• The table was aggregated along outcomes to obtain a numeric score per outcome averaged over the whole class.
• Only CSE major undergraduate student data was used for this analysis.

Footnote 2 – Assessment Results
• Considering the difficulty of the exam questions, homework, and final project, the average numeric score per outcome is translated as follows:
• An 80 percent score and above implies that the outcome is excellently met.
• A score between 65 and 80 implies that the outcome is satisfactorily met.
• A score between 50 and 65 implies that the outcome is marginally met.
• A score less than 50 percent implies that the outcome is unsatisfactorily met.
• The final score for each course outcome ranges over 1-4, where 1 is unsatisfactory, 2 is marginal, 3 is satisfactory, and 4 is excellent.

Conclusion

- Compared to the last assessment of this course, offered in Fall 2008, this year's result shows that Outcome 2 improved from "marginally met" to "satisfactorily met." The previous effort to cover relevant topics in Math 221 seems to have worked here.
- Outcome 3 could be improved by introducing small hands-on implementation projects covering different cryptographic protocols, such as digital signatures.
Course Outcomes: CS476 – Visualization:
Students should be able to

1. understand the design issues for creating effective visualizations;
2. understand information and scientific visualization techniques;
3. read, evaluate, and present technical papers;
4. apply effective visualization techniques to an actual visualization problem and associated dataset.

(Assessment through homework, exams, and substantial projects)

Assessment Methodology:
• Each course outcome was tied to one or more questions on an exam and/or portions of a project.
• A formula was used to compute a normalized weighted sum from the scores for those questions.
• The numeric score per outcome was averaged over the whole class.
• Data for all graded undergraduate students who completed the course was used for this analysis.

The formulas used were:


Course Outcome   Formula
1                Midterm exam
2                Exploratory projects
3                Topical papers
4                Semester project

(Notation: where a plural occurs, e.g., Topical papers, the average of those assignments is used.)

Performance Metric
The performance metric to analyze outcomes for this course is as follows:
Excellent corresponds to greater than 90 percent.
Satisfactory corresponds to 70 to 90 percent.
Marginal corresponds to 60 to 70 percent.
Unsatisfactory corresponds to less than 60 percent.

Results:
The outlined process results in numeric assessment scores (Note this does not include results for auditors
or a student who has not completed an incomplete for the course.):

Outcome   Class Average   Performance
1         70              Satisfactory
2         79.5            Satisfactory
3         79.5            Satisfactory
4         78.0            Satisfactory

Thus, all outcomes were satisfactory.

Actions:
Given that there is only one student who enrolled in and completed the undergraduate course for a grade,
there is not enough data for reasonable assessment or to use in feedback for course improvement.
CSE 113: Introduction to Programming
Course Assessment for Undergraduates in Spring 2011

1 Course Learning Outcomes


After successfully completing this course, a student should be able to:
1. Understand data representation and typing and the C programming language at an introductory level;
2. Develop and solve problems from description to implementation;
3. Understand the basic elements of imperative programming: variables, flow control, functions, and recursion;
4. Implement and use basic data structures: arrays, strings, and linked lists;
5. Use tools such as editors, compilers, and debuggers in the process of developing small to medium sized computer programs.
2 Assessment Methodology
For the purpose of objective assessment, numeric scores were calculated to measure each student’s
mastery of each learning outcome. The following table shows the metrics used for assessment. The
metrics include midterm and final exam questions as well as grades on the projects and the final lab
assignment. An average of students’ lab assignment grades was also used to measure their ability to use
the software tools introduced. Homework and quiz grades were not used for assessment because they
were part of the initial topic presentation and were thus not a good measurement of topic mastery.

Outcome   Metrics
1         Midterm Section 2: Lexical Scoping
          Final Section 0: Basic Concepts
          Midterm Section 3: Binary Numbers
          Midterm Section 4: Boolean Logic
2         Project 0
          Final Section 4: Program Design
3         Midterm Section 5: Short Answer
          Final Section 3: Short Answer
4         Project 0
          Lab Final
5         Labs1
          Lab Final
Table 1: Metrics for Performance Assessment

Each of the metrics is of approximately equal importance to each corresponding learning objective, so the
numeric scores were calculated as the normalized average of the scores. Normalization of scores involved
dividing each score by the maximum number of possible points. Some lab exercises and projects included
extra credit opportunities. For this assessment, grades were limited to a maximum of 100% to make
average scores representative of the class as a whole. Finally, a score was computed for each outcome by
averaging each student’s individual score for the outcome.

1The grade for labs is computed by taking the average of all ten lab assignments throughout the semester.
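A minimal Python sketch of the normalization described in the methodology above, including the 100% cap for extra-credit points. The scores and maximums below are hypothetical:

def normalize(score, max_points):
    """Score as a fraction of the maximum, capped at 100%."""
    return min(score / max_points, 1.0)

def outcome_score(scores, max_points):
    """Average of normalized metric scores, as a percentage."""
    fractions = [normalize(s, m) for s, m in zip(scores, max_points)]
    return 100.0 * sum(fractions) / len(fractions)

# One student's raw scores on an outcome's three metrics;
# the second score includes extra credit and is capped at 100%.
print(outcome_score([18.0, 27.0, 40.0], [20.0, 25.0, 50.0]))  # 90.0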

Only the performance of Computer Science majors was evaluated. There were seven Computer Science majors enrolled in the course; four of them were repeating it.

2.1 Performance Measurement

Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to
assess overall performance. An average of less than 50% signified a lack of comprehension. An average
between 50% and 65% showed a marginal understanding. A passing grade between a D and a B
demonstrated satisfactory ability, but not perfection. A grade better than an 85% (B) showed excellent
comprehension and effort.

Percentage Performance
0 – 54 Unsatisfactory
55 – 64 Marginal
65 – 85 Satisfactory
85 – 100 Excellent

Table 2: Numerical Ranges for Performance Measurement

3 Results for Course Outcomes


The following table provides a summary of the results for each course learning outcome, calculated as
described above. The table also provides the number of students who achieved each performance level, in
order to give a better picture of the performance distribution of the class.

Number of Students
Outcome Unsatisfactory Marginal Satisfactory Excellent Score Performance
1 1 0 3 3 78.76 Satisfactory
2 1 1 3 2 70.07 Satisfactory
3 1 0 5 1 74.83 Satisfactory
4 3 2 0 2 57.21 Marginal
5 2 2 1 2 66.15 Satisfactory

Table 3: Performance Measurements for Course Learning Outcomes

Of the seven students, one stopped showing up close to the end of the semester, and another stopped turning in labs towards the end of the semester. Since most of the students were repeating the class, enthusiasm was lackluster.

3.1 Outcome 1: Data Representation

The performance measurement for objective 1 shows that Computer Science (CS) majors gained a
Satisfactory comprehension of basic concepts. These concepts were taught in class to emphasize the fact
that one must learn certain aspects of how computers work in order to learn how the C language works.
The first five weeks of the semester were dedicated to these basic concepts.


One student did not take one of the exams used in calculating this metric, which accounts for the Unsatisfactory rating. This was the highest scoring outcome.

3.2 Outcome 2: Design to Implementation Problem Solving

Students were given many design problems and were asked to develop solutions that could be implemented in C.

Of the seven students in the data set, one did not attempt the metrics used in scoring this outcome, which is where the Unsatisfactory rating comes from. Despite the instructor emphasizing its importance, some students did not take the project (which was used in scoring this outcome) seriously.

3.3 Outcome 3: Basic Elements in Programming

The performance measurement for Outcome 3 was Satisfactory. Of the seven students in the data set, one did not attempt the metrics used in scoring this outcome. Since most of the students were repeating the class, they may have contributed to the higher score, having finally grasped the basic elements of programming.

3.4 Outcome 4: Small Program Implementation

The result for outcome 4 was Marginal. This was the lowest scoring outcome. This data is more skewed than the rest because it was drawn from the end of the semester, when results show a lack of motivation; quite a few students did not turn in portions of the project. The schedule provided ample time for completing the assignment, but students are usually busy with other classes as well. During the development periods for the projects, both the instructor and the class TAs held extended office hours.

As with the rest of the outcomes, one student completely stopped coming to class towards the end of the semester, and another did not turn in many of the labs that factored into this course outcome.

Many students did poorly on the lab final. This may have been because it was the first time students had to complete a small programming assignment without any help from the instructor or TA.

3.5 Outcome 5: Tool Usage

The result for outcome 5 was Satisfactory. Throughout the semester, students had ample time to get familiar with the programming environment and tools, and the TA and instructor were at all the lab sessions to troubleshoot any problems. This score is somewhat misleading in that some students stopped turning in labs at the end of the semester, possibly due to their busy schedules or lack of interest during that busy time.

4 Results for Program Outcomes


This course pertains to one program outcome, outcome 1.a, which is “the ability to design, implement,
and test small software programs.” This program outcome can be correlated with course outcomes 2, 4,
and 5. By averaging the performance scores for these three outcomes, an overall score of 64.45, or
Marginal, is obtained.


During the class lectures, the students were exposed to many topics in imperative design and C programming, including top-down design and many C language concepts. These topics were reinforced with homework assignments and exam questions.

5 Analysis and Remedial Action


There is a drastic difference between the fall and spring semesters in the number of CSE students enrolled in the class. In addition to comparing this semester's outcomes with the previous semester's (Fall 2010), a comparison is made with the previous year's (Spring 2010) outcomes.

Semester      Number in Set   Outcome 1      Outcome 2        Outcome 3      Outcome 4        Outcome 5
Spring 2010   11              Marginal       Unsatisfactory   Marginal       Unsatisfactory   Marginal
Fall 2010     41              Satisfactory   Satisfactory     Satisfactory   Satisfactory     Satisfactory
Spring 2011   7               Satisfactory   Satisfactory     Satisfactory   Marginal         Satisfactory

In response to the previous course assessment (Fall 2010), four new small homework assignments were developed to target Outcome 3. The outcome score went from 67.09 to 74.83. Though this does show some improvement, these results need much more data: four of the seven students in this assessment's set (Spring 2011) were repeats from the previous semester, and the data set is drastically smaller. A comparison with Fall 2011 is suggested.

The Fall 2010 assessment suggested improving Outcome 4 by introducing more practice with basic programming elements. One more lab, entitled "pointers to structures," was created this semester to help students implement linked lists. While the outcome for this semester was only Marginal, a comparison with Fall 2011 is suggested, given that two of the seven students in the set did not complete the required assignments needed for an adequate assessment.

The instructor believes that labs should incorporate more testing assignments rather than just one lab final at the end. This would allow students to realize what they are lacking instead of simply asking for help as on a normal lab assignment. Perhaps a quiz could be given at the beginning of each lab, as is done in many chemistry or physics lab courses.

Another point to contemplate is programming project design. More lectures on these methodologies could be developed to solidify these concepts. In addition, a larger homework assignment or mini-project focusing on imperative design could be given during the first half of the semester, since few advanced programming concepts need to be known for students to learn it.

Class Assessment: Spring 2011 CSE 122 – Algorithms and Data Structures
Class outcomes:
At the end of the course, a student should be able to:
1. Understand fundamental data structures and their benefits;

2. Understand various sorting and searching algorithms;

3. Analyze the performance of such algorithms and data structures; and

4. Design and implement simple software applications using appropriate algorithms and
data structures.

The methodology we deployed to calculate the assessment percentages below is as follows:

1) For each of the above listed class outcomes, identify its contributing modules, such as individual exam questions, homework, projects, and quizzes that relate to that outcome. Each contributing module maps to only one class outcome, i.e., all are orthogonal with respect to the class outcomes. All class outcomes are covered by the instruction modules used in the assessment process.
2) For each outcome, compute the students' obtained semester average for every contributing module in (1) above, i.e., exam questions, homework, projects, and quizzes. The obtained average is divided by the points assigned to that component to get a percent score.
3) For each outcome, add all of its obtained scores from the different contributing modules in (2) above, after weighting each by a factor that reflects the importance of that module's score, to obtain the outcome's final semester percentage score. For example, quizzes and semester projects have higher factors than other components.
Assessment Result:
Class Outcome   co1   co2   co3   co4
Semester %      86%   86%   79%   91%

Performance Metrics:

The scale that has been used to assess each of the class outcomes is as follows:

Class average Performance


< 40% Unsatisfactory (1)
40-54% Marginal (2)
55 – 79% Satisfactory (3)
80 – 100% Excellent (4)

Outcome   Class Average   Performance
1         86%             Excellent (4)
2         86%             Excellent (4)
3         79%             Satisfactory (3)
4         91%             Excellent (4)
Conclusion

Possible further improvements of the Class outcomes:

Outcome 1: I think going slowly over basic data types and abstraction at the beginning helped tremendously in cementing the students' understanding of this foundational knowledge. Examples also aided the process, as did being humble and coming down to the students' level, letting them know that it is OK to be confused and very smart to not hide it and keep asking questions. Breaking their fear of the subject is very important.

Outcome 2: The covered sorting/searching algorithms were explained with simple examples, very slowly and step by step, making sure that everyone got the idea. Showing the differences between algorithms and their proper environments, especially for closely related ones, also helped a lot.

Outcome 3: This was the hardest subject. It called for lots of actual coding examples and clear definitions of the different asymptotic run-time complexity functions (as functions of program input), distinguishing between them. It also called for medium-input analysis, where the choice of a pointer versus an array implementation (time vs. space) is made based on the target application, without bias.

Outcome 4: A sequence of programming assignments was built around the same core goal, with students building on previous work in every new module. The target goal was to build a useful application (e.g., a class information database). This allowed the students to extend their efforts on any assignment to fulfill previous requirements, giving them hope to complete the work, and they did (in most cases). At the end of the programming assignments, students felt that they had implemented their own software package with a clear, reasonable task, amenable to use as a real-life application. The incremental design of the assignments was very helpful, as was touching reality with real software (using the software design knowledge covered in class). Next time I teach the course, I will consider a group project for more practice with real software implementation. The last project explored the students' ability to research and further develop the knowledge they sought, with the incentive of extra credit; they really worked for it, and it shows in the 91%.

A common point to all of the above actions to enhance and better achieve the class outcomes is to keep posting the class lecture notes on the class website, to continue updating the notes as the class progresses, and to notify the students of any update on the class website, so that students have continued access to the notes. Students are showing more progress, and many appreciated the note posting for easy access at all times. I will continue to seek mid-semester anonymous student views on the covered subjects, to report on my teaching, the covered topics, and any other points of concern; those evaluations proved very helpful for adjusting the class teaching, based upon input from other department faculty who have practiced such a process. It was also very helpful, at the beginning of every class topic, to explain the three major questions: WHY? WHAT? HOW?

Scores for Program Outcomes:

Program outcome 1a: “the ability to design, implement, and test small software programs”
involves course outcomes 1, 2, 3 and 4.

Outcome   Class Average   Performance   Overall
1         86%             4
2         86%             4             3.75
3         79%             3
4         91%             4
New Mexico Tech
Department of Computer Science and Engineering
Course Assessment Report
CSE 222: Systems Programming
Spring 2011
Instructor: Jun Zheng

Course Outcomes

1. Have knowledge of the fundamentals of the UNIX/Linux operating system;
2. Have knowledge of the basic principles of the UNIX/Linux shell and shell programming;
3. Have knowledge of the basic principles of UNIX/Linux file systems;
4. Have knowledge of the basic principles of UNIX/Linux processes and inter-process communication (IPC);
5. Have knowledge of the basic principles of UNIX/Linux signals;
6. Have knowledge of the basic principles of UNIX/Linux socket-based network programming;
7. Be good system programmers and know how to develop, design, and implement application programs which access UNIX/Linux system functions through system calls and library routines.

Assessment Methodology:

• Each course outcome was tied to one or more questions in the homework, quizzes, project, midterm exams, and final exam.
• A formula was used to compute a normalized weighted sum from the scores for those questions.
• A table containing one numeric score per student per course outcome was computed.
• The table was aggregated along course outcomes to obtain a numeric score per outcome averaged over the whole class.

For example, the numeric score for Course Outcome 1 for student S1 was obtained by
(1) taking S1's score on HW1 and dividing it by the maximum possible points on HW1, i.e., normalizing it to [0, 1];
(2) doing a similar computation on S1's scores on the other parts that correspond to Course Outcome 1;
(3) adding up the values obtained in (1) and (2) after multiplying each by its percentage in the final score (for example, the percentage for HW1 is 6.25%);
(4) dividing the value obtained in (3) by the sum of the percentages of all the parts related to Course Outcome 1;
(5) multiplying the value obtained in (4) by 100;
(6) repeating the above for each student;
(7) averaging these numbers over the whole class to get a numeric assessment score for Outcome 1 averaged over the whole class.
The same process was applied for each outcome.

The formulas used for each Course Outcome were:

Course Outcome   Formula
1   100*(HW1*5% + P1*5% + QZ1*1.875% + QZ2*1.875% + MEQ1*3.6% + MEQ2*1.2% + MEQ3*2% + MEQ4*2%) / (5% + 5% + 1.875% + 1.875% + 3.6% + 1.2% + 2% + 2%)
2   100*(HW2*5% + QZ3*1.875% + MEQ5*2%) / (5% + 1.875% + 2%)
3   100*(P2*5% + QZ4*1.875% + QZ5*1.875% + QZ6*1.875% + MEQ1*1.6% + MEQ2*2% + MEQ3*2% + MEQ6*2% + FEQ3*3%) / (5% + 1.875% + 1.875% + 1.875% + 1.6% + 2% + 2% + 2% + 2.7%)
4   100*(P3*5% + P4*5% + QZ7*1.875% + QZ8*1.875% + QZ9*1.875% + FEQ1*1.35% + FEQ2*2.25% + FEQ3*3.15% + FEQ4*3% + FEQ5*3% + FEQ6*3%) / (5% + 5% + 1.875% + 1.875% + 1.875% + 1.35% + 2.25% + 3.15% + 3% + 3% + 3%)
5   100*(FEQ1*0.45% + FEQ2*0.45% + FEQ3*0.9%) / (0.45% + 0.45% + 0.9%)
6   100*(P4*5.8% + FEQ1*2.25% + FEQ2*2.25% + FEQ3*4.5% + FEQ7*3%) / (5.8% + 2.25% + 2.25% + 4.5% + 3%)
7   100*(P2*5% + P3*5% + P4*5% + P5*5% + HW2*5% + MEQ5*2% + MEQ6*2% + FEQ7*3%) / (5% + 5% + 5% + 5% + 5% + 2% + 2% + 3%)

(Notation: HW1 represents the normalized score on Homework 1; QZ1 represents the
normalized score on Quiz 1; P1 represents the normalized score on Project 1; MEQ1
represents the normalized score on question 1 of midterm exam; FEQ1 represents the
normalized score on question 1 of final exam.)
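A minimal Python sketch of the Course Outcome 1 formula above: a weighted sum of normalized component scores divided by the sum of the weights, per steps (3)-(5). The component scores below are hypothetical:

# Weights are each component's percentage of the final course grade.
WEIGHTS = {"HW1": 5.0, "P1": 5.0, "QZ1": 1.875, "QZ2": 1.875,
           "MEQ1": 3.6, "MEQ2": 1.2, "MEQ3": 2.0, "MEQ4": 2.0}

# Hypothetical normalized scores in [0, 1] for one student.
scores = {"HW1": 0.9, "P1": 0.85, "QZ1": 1.0, "QZ2": 0.8,
          "MEQ1": 0.7, "MEQ2": 0.75, "MEQ3": 0.6, "MEQ4": 0.9}

weighted = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
outcome1 = 100.0 * weighted / sum(WEIGHTS.values())
print(f"{outcome1:.2f}")  # 82.24 for these sample scores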

Performance Metric
Based on the answers to the questions, it was felt that a 60 percent score implied that the
basic concept had been grasped, a score of 80 percent or more indicated a superior
performance, a score less than 40 percent implied that the basic concepts had not been
learned with a 40 to 60 percent score implying a marginal state:

Class average Performance

< 40% Unsatisfactory


40 – 59% Marginal
60 – 79% Satisfactory
80 – 100% Excellent
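A minimal Python sketch of this translation, with the thresholds copied from the table above:

def performance(class_average):
    if class_average < 40:
        return "Unsatisfactory"
    if class_average < 60:
        return "Marginal"
    if class_average < 80:
        return "Satisfactory"
    return "Excellent"

print(performance(71.96))  # Satisfactory, matching Outcome 2 in the results below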

Results

The following numeric assessment scores were obtained through the process outlined
above:

Outcome Class Average Performance


1 80.42 Excellent
2 71.96 Satisfactory
3 68.86 Satisfactory
4 73.13 Satisfactory
5 74.96 Satisfactory
6 78.18 Satisfactory
7 81.37 Excellent

Thus, Outcomes 1 and 7 were excellent, and Outcomes 2 to 6 were satisfactory.

Relationship of the Course to Program Outcomes:


This course affects two program outcomes:
1. The ability to design, implement, and test small software programs;
2. Knowledge of the theoretical concepts of computing and of the fundamental principles
of programming languages, systems, and machine architectures;

We will deal with each in turn by substituting a numeric value for performance (1, 2, 3,
and 4 for unsatisfactory, marginal, satisfactory, and excellent respectively) and
computing the average.

Program outcome 1 involves course outcome 7.

Course Outcome   Class Average   Performance   Overall
7                81.37           4             4

Program outcome 2 involves course outcome 1 to 6.

Course Outcome   Class Average   Performance   Overall
1                80.42           4             3.2
2                71.96           3
3                68.86           3
4                73.13           3
5                74.96           3
6                78.18           3

Comments and Remedial Actions:

This was the second time I taught this course. The course content was modified compared with that of Spring 2010: signals were introduced in Spring 2011, and the content on the shell and shell programming was reduced, as planned in the Spring 2010 report. I plan to further revise the content in the future.

In Spring 2011, all the course materials were presented as PowerPoint slides, as proposed in the Spring 2010 report. The feedback from students was positive. I'll continue to improve the quality of the slides.

I also prepared more in-class questions for the students, and the interaction with students in class was positive. I'll keep this strategy in future teaching.

An existing problem is how to teach the system calls in a way that makes the learning more interesting. I plan to explore a problem-based learning strategy in the future: driven by challenging problems, students can work actively on solving real problems with the system calls.

Class Assessment: 2011 CSE 324 – Principles of Programming Languages
Class outcomes:

1. Clear understanding of the major design concepts of a programming language (e.g., syntax, semantics, typing system, recursion, abstraction, polymorphic & generic features, etc.).

2. Clear understanding of the trade-offs between important language design goals: security, efficiency, power, robustness, and complexity.

3. Clear understanding of the major linguistic differences between the major language paradigms: imperative, functional, object-oriented, and logic.

4. The ability to critique and properly utilize languages from each of the above paradigms in building desired software solutions or the design of a new language.

The methodology we deployed to calculate the assessment percentages below is as follows:

1) For each of the above listed class outcomes, identify its contributing modules, such as individual exam questions, homework, projects, and quizzes that relate to that outcome. Each contributing module maps to only one class outcome, i.e., all are orthogonal with respect to the class outcomes. All class outcomes are covered by the instruction modules used in the assessment process.
2) For each outcome, compute the students' obtained semester average for every contributing module in (1) above, i.e., exam questions, homework, projects, and quizzes. The obtained average is divided by the points assigned to that component to get a percent score. Modules contributing scores to an outcome are weighted by different factors based on their importance; e.g., True/False exam questions are weighted higher than multiple-choice questions, followed by short-answer questions.
3) For each outcome, add all of its obtained scores from the different contributing modules in (2) above, after weighting each by a factor that reflects the importance of that module's score, to obtain the outcome's final semester percentage score. For example, quizzes and semester projects have higher factors than other components.
Assessment Result:
Class Outcome   co1   co2   co3   co4
Semester %      85%   86%   93%   87%

Performance Metrics:

The scale that has been used to assess each of the class outcomes is as follows:

Class average Performance


< 40% Unsatisfactory (1)
40-54% Marginal (2)
55 – 79% Satisfactory (3)
80 – 100% Excellent (4)

Outcome   Class Average   Performance
1         85%             Excellent (4)
2         86%             Excellent (4)
3         93%             Excellent (4)
4         87%             Excellent (4)
Conclusion

Possible further improvements of the Class outcomes:

Outcome 1: I think that following my last assessment plan of increasing the dosage of practical, real-life examples to ease the dryness of the subject paid off: the score gained 11%, a big leap. Another factor in this success is the instructional focus on simplifying basic definitions such as "proof by induction" and "abstract data types," which are essential to understanding the basis of the "recursion" concept and the object-oriented family of languages, respectively.

Outcome 2: Another noticeable improvement (6%+), due to the continuation of last year's plan of giving more examples from well-known languages such as C, C++, Java, FORTRAN, etc., about the tradeoffs among different language design factors. Progress might also be due to clearly stating the disadvantages of the modern languages covered and justifying their use in some applications. In addition, it might be due to my coverage of representatives of some "clean" paradigms before stepping into hybrid-paradigm languages (e.g., Smalltalk vs. C++/Java). I also continued to refine my definitions of keywords in language design based on my increased reading of the literature; students are showing better understanding. I plan to keep the same approach for further improvement.

Outcome 3: This had the largest score improvement, from 74% to 93% (+19%). In addition to the quizzes, homework, projects, reports, and exams, I have continued to challenge the students every class with extra-point questions in their quizzes and exams about the different language paradigms. I continue to notice that students stay more alert in class, raise many useful discussions (some even challenging), and ask many useful follow-up questions. I intend to keep doing this in future classes to maintain such scores.

Outcome 4: There is a slight improvement (+1%). As in the last year, I still receive many positive comments about the semester projects. I continued to ask the students to report on some prominent modern programming languages (not covered in class) from the literature, and the resulting semester project reports continued to be very impressive. In addition to some required languages, students still analyze some programming languages of their own interest. I think we will keep this approach since it is really working, especially after refining my assignment problem statement to be clearer, based on student feedback.
A common point to all of the above actions to enhance and better achieve the class outcomes is to keep posting the class lecture notes on the class website, to continue updating the notes as the class progresses, and to notify the students of any update on the class website, so that students have continued access to the notes. Students are showing more progress, and many appreciated the note posting for easy access at all times. I will continue to seek mid-semester anonymous student views on the covered subjects, to report on my teaching, the covered topics, and any other points of concern; those evaluations proved very helpful for adjusting the class teaching, based upon input from other department faculty who have practiced such a process.

Scores for Program Outcomes:


This course affects the first, third, and fourth program outcomes. We will deal with
each in turn by substituting a numeric value for performance (1, 2, 3, and 4 for
unsatisfactory, marginal, satisfactory, and excellent respectively) and computing the
average.

1) The first program outcome, "the ability to design, implement, and test small software programs, as well as large programming projects," is directly affected by the third course outcome.

To strengthen the students' knowledge of different programming language concepts, they design, implement, and test a set of small programs from different language paradigms/categories. Hence, there is a direct mapping between the third class outcome and the first program outcome.

Outcome   Class Average   Performance   Overall
3         93%             4             4.0

2) The third program outcome, "Knowledge of fundamental principles of programming languages, systems, and machine architecture," is directly affected by the first and second course outcomes, specifically its programming language component.

Outcome   Class Average   Performance   Overall
1         85%             4
2         86%             4             4

3) The fourth program outcome, "Exposure to one or more computer science application areas," is directly affected by the fourth course outcome.

In class, we analyze and critique languages like Ada, with its "exception" handling mechanism and concurrent tasking facility, relating these to critical military applications where such features are very useful. Moreover, we explore the powerful capabilities of Lisp and Prolog, mapping them to AI and "expert systems" implementation, respectively.

Outcome   Class Average   Performance   Overall
4         87%             4             4.0
CSE 325: Principles of Operating Systems
Assessment for Undergraduates in Spring 2011

October  14,  2011  

Course Outcomes
At the end of the course, a student should be able to:
1. Understand the functions, structures, and history of operating systems;
2. Master process management concepts including scheduling, synchronization, and
deadlocks;
3. Grasp concepts of memory management including virtual memory;
4. Master issues related to storage systems, file system interface and
implementation, and disk management;
5. Become acquainted with Linux kernel programming.

Assessment Methodology:
• Each course outcome was tied to one or more questions in the midterm and final exam,
or to one of the lab assignments. A formula was used to compute a normalized
weighted sum of the scores for those questions.
• A table containing one numeric score per student per course outcome was computed.
The table was aggregated along course outcomes to obtain a numeric score per outcome
averaged over the whole class.

The formulas used were:


Course Outcome   Formula
1   (MTT2+MTTEC1+MTTEC2+MTI1+MTI2+MTI4+MTIEC2+F7+F9+F10+FEC1) * 100 / 75
2   (MTT1+MTT3+MTT4+MTI3+MTI5+MTI6+MTI7+MTI8+MTI9+MIEC2+F8+Lab3+Lab4) * 100 / 130
3   (F1+F2+F3+F4+Lab5) * 100 / 65
4   (F5+F6+Lab6) * 100 / 45
5   (F7+Lab1+Lab2) * 100 / 60

(Notation: MTT3 represents the average score on question 3 of the take-home midterm exam; MTI3 represents the average score on question 3 of the in-class midterm; Lab2 includes all parts of the lab assignment: code reading, design, and implementation.)
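A minimal Python sketch of these formulas: the raw points for an outcome's questions and labs are summed and scaled by the listed maximum. The sample points are hypothetical:

def outcome_score(points, max_total):
    return sum(points) * 100.0 / max_total

# Outcome 4 combines F5, F6, and Lab6 out of 45 possible points.
print(outcome_score([13.0, 12.5, 15.0], 45.0))  # 90.0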

Performance Metric
Based on the answers to the questions, it was felt that a 50 percent score implied that the
basic concept had been grasped, a score of 75% or more indicated a superior performance, a
score less than 35 percent implied that the basic concepts had not been learned with a 35 to
50 percent score implying a marginal state:
Class Average Performance
< 35% Unsatisfactory
35 – 49% Marginal
50 – 74% Satisfactory
75 – 100% Excellent

Results  
The following numeric assessment scores were obtained through the process outlined above:
Outcome   Class Average   Performance
1         81.8            Excellent
2         83.5            Excellent
3         87.8            Excellent
4         89.6            Excellent
5         85.3            Excellent
Scores for extra-credit problems were treated as if they were just another question on the
particular topic area and did not artificially inflate the numbers in the above table.

Scores for Program Outcomes:


This course affects two program outcomes. We will deal with each in turn by substituting a
numeric value for performance (1, 2, 3, and 4 for unsatisfactory, marginal, satisfactory, and
excellent respectively) and computing the average.

Program outcomes:
1b. The ability to design, implement, and test large programming projects involves the course
outcome 5.
3. Knowledge of the concepts and techniques of operating systems and OS-level
programming involves course outcomes 1, 2, 3, and 4.
6. The capacity to work as part of a team involves the course outcome 5.

Program Outcome   Course Outcome   Class Average   Performance   Overall
1b                5                85.3            4             4

Program Outcome   Course Outcome   Class Average   Performance   Overall
3                 1                81.8            4             4
                  2                83.5            4
                  3                87.8            4
                  4                89.6            4

Program Outcome   Course Outcome   Class Average   Performance   Overall
6                 5                85.3            4             4

Comments:
• For all lab assignments, students chose their group mates based on their interests. This approach is fine; however, imbalances in performance existed among groups. For the most part, groups had a variety of specialties, which helped implementation. One group split, and its members had to find other groups to join.
• It was Jitendra Tummula's second time as the TA of this course, yet he was still not acquainted with Linux kernel programming from the previous instance of the course, and there was not enough time to train him before labs began. He had detailed slides about the background and assignment requirements of each lab, but there were still many student questions he could not answer; his preparation was insufficient to run the labs as they existed. The lab assignments were rewritten during the semester with more focus on implementing various principles instead of modifying the Linux kernel. About half of the new lab assignments were developed and used during this course; the rest are being written by a new TA and the instructor.
• The system administrators upgraded the operating systems in the lab once, after the first lab assignment. It was difficult for the students to use the systems until they became stable. A stable lab environment is important and helpful.
• Six lab assignments were given, each taking two to four lab sessions to complete. There were milestones for nearly every lab, with in-class presentations at steps along the way to completion. Peer and instructor feedback was given for all lab assignments at each milestone, to ensure students didn't get too far down the implementation road with major design issues.
 
5/26/2011 Dongwan Shin

Class Assessment Report: CSE 326 Software Engineering in Spring 2011

Course Learning Outcomes, Assessment Methods (footnote 1), and Assessment Results (footnote 2):

1. Knowledge of different software development life cycle models
   - Assessment method: Questions 1, 2, and 3 of the midterm exam, Question 1 of the final exam, and Homework 1
   - Formula used: 100*(0.2*MidQ1 + 0.2*MidQ2 + 0.2*MidQ3 + 0.2*FinalQ1 + 0.2*HW1)
   - Result: 2. The class average was 60.8; based on the footnoted assessment results, this item is marginally met.

2. Ability to elicit requirements from clients and specify them
   - Assessment method: Questions 4, 5, 6, and 7 of the midterm exam, Questions 2, 3, and 4 of the final exam, and Quizzes 1 and 2
   - Formula used: 100*(0.1*MidQ4 + 0.1*MidQ5 + 0.1*MidQ6 + 0.1*MidQ7 + 0.15*FinalQ2 + 0.15*FinalQ3 + 0.2*FinalQ4 + 0.05*Quiz1 + 0.05*Quiz2)
   - Result: 3. The class average was 72.6; this item is satisfactorily met.

3. Ability to perform detailed design through architectural design, interface design, object design, and the use of design patterns
   - Assessment method: Question 8 of the midterm, Questions 5, 6, 7, and 8 of the final exam, and Quizzes 3 and 4
   - Formula used: 100*(0.1*MidtermQ8 + 0.2*FinalQ5 + 0.2*FinalQ6 + 0.2*FinalQ7 + 0.2*FinalQ8 + 0.05*Quiz3 + 0.05*Quiz4)
   - Result: 2. The class average was 54.9; this item is marginally met.

4. Ability to perform implementation from design specification
   - Assessment method: Question 9 of the final exam, and the class project implementation specification
   - Formula used: 100*(0.5*FinalQ9 + 0.5*ProjectImp)
   - Result: 3. The class average was 73.9; this item is satisfactorily met.

5. Ability to plan and apply various testing techniques
   - Assessment method: Question 10 of the final exam, and the class project final report
   - Formula used: 100*(0.5*FinalQ10 + 0.5*ProjectReport)
   - Result: 3. The class average was 72.2; this item is satisfactorily met.

6. Practical experience of using UML and OOP
   - Assessment method: Class project requirement specification, design specification, and implementation specification
   - Formula used: 100*(0.4*ProjectReq + 0.3*ProjectDes + 0.3*ProjectImp)
   - Result: 4. The class average was 87.8; this item is excellently met.

7. Ability to work in a group to produce a large-scale software product
   - Assessment method: Class project peer review and class project presentation
   - Formula used: 100*(0.8*ProjectPeer + 0.2*ProjectPresentation)
   - Result: 4. The class average was 93.0; this item is excellently met.

Assessment in regard to program-level outcomes:

• Course learning outcomes 3 (O3), 4 (O4), and 5 (O5) contribute to program outcome P1b (Large Prog). Hence, the numeric score for assessment against this program outcome is:

P1b = (2 + 3 + 3)/3 = 2.7

• Course learning outcomes 2 (O2) and 6 (O6) contribute to program outcome P5 (Tech Comm). Hence, the numeric score for assessment against this program outcome is:

P5 = (3 + 4)/2 = 3.5

• Course learning outcome 7 (O7) contributes to program outcome P6 (Group). Hence, the numeric score for assessment against this program outcome is:

P6 = O7 = 4
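A minimal Python sketch of this CO-to-PO roll-up; the mapping and the 1-4 outcome scores are taken from this report:

CO_SCORES = {"O1": 2, "O2": 3, "O3": 2, "O4": 3, "O5": 3, "O6": 4, "O7": 4}
PO_MAP = {"P1b": ["O3", "O4", "O5"], "P5": ["O2", "O6"], "P6": ["O7"]}

for po, cos in PO_MAP.items():
    avg = sum(CO_SCORES[c] for c in cos) / len(cos)
    print(f"{po} = {avg:.1f}")  # P1b = 2.7, P5 = 3.5, P6 = 4.0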

Conclusion

- Compared to the last assessment of this course, offered in Spring 2010, this year's result shows that Outcomes 2 and 3 have improved from "marginal" to "satisfactory" and from "unsatisfactory" to "marginal," respectively. In order to address the steep learning curve of IBM Software Architect, I not only introduced the tool earlier and spent more lab time with it, but also required students to use the tool for their homework and final project. This seems to have worked to improve the outcomes.
- CSE 213 is a prerequisite for this course and introduces object-oriented programming to students before they take it. This could further improve outcome 3.

Footnote 1 – Assessment Methodology
• Each course outcome was tied to one or more questions in the midterm/final exam, individual class project, or homework.
• A formula was used to compute a normalized weighted sum from the scores for those questions, the class project evaluation, and homework.
• A table containing one numeric score per student per outcome was computed.
• The table was aggregated along outcomes to obtain a numeric score per outcome averaged over the whole class.
• Only CSE major undergraduate student data was used for this analysis.

Footnote 2 – Assessment Results
• Considering the difficulty of the exam questions, homework, and final project, the average numeric score per outcome is translated as follows:
• An 80 percent score and above implies that the outcome is excellently met.
• A score between 65 and 80 implies that the outcome is satisfactorily met.
• A score between 50 and 65 implies that the outcome is marginally met.
• A score less than 50 percent implies that the outcome is unsatisfactorily met.
• The final score for each course outcome ranges over 1-4, where 1 is unsatisfactory, 2 is marginal, 3 is satisfactory, and 4 is excellent.
CSE 342: Formal Language and Automata Theory
Assessment of Spring 2011 Class
(Andrew Sung, Instructor)

There were 17 undergraduate students and one graduate student in the class. This assessment is based on data from the undergraduate students only.

Course Outcomes

At the end of the course, a student should

1. have understood the basic theorems on finite automata, regular languages, and regular
expressions, and be able to solve basic problems about them
2. have understood the basic theorems on context-free languages and pushdown automata,
and be able to solve basic problems about them
3. have been exposed to some other classes of languages and automata, e.g. context-
sensitive and recursively enumerable languages, and Turing machines

Spring 2011 Course Description

• The course covered finite automata, regular languages, context-free languages, and pushdown automata, roughly corresponding to the major contents of chapters 1-7 of the original Hopcroft, Motwani, Ullman textbook (first edition, 1979).
• Other topics, including Turing machines, recursively enumerable languages, the Chomsky hierarchy of languages, and decidability, were discussed very briefly at an introductory level (i.e., no theorems, formal proofs, etc.), due to time limitations.
• Some advanced topics in the theory of computation and computational complexity were occasionally discussed by the instructor at appropriate times as motivating material.

Assessment Methodology

• The first exam covers finite automata and regular languages, corresponding to Outcome 1.
• The second exam is a take-home exam and covers topics corresponding to Outcome 1 and
Outcome 2.
• The final exam primarily covers context-free languages and pushdown automata,
corresponding to Outcome 2.
• No questions of relevance to Outcome 3 were given in any of the exams.

The class's average scores on the three exams are interpreted according to the performance metric established by the CS faculty for evaluating outcomes, as follows:

Class average Performance


< 40% Unsatisfactory
40 – 59% Marginal
60 – 74% Satisfactory
75 – 100% Excellent

Calculations based on the class's normalized average scores on the three exams result in the assessment that course outcomes 1 and 2 have been achieved satisfactorily, and outcome 3 unsatisfactorily.

Implications for Program Outcomes

This entire course is devoted to one program outcome (knowledge of the theoretical concepts
of computing). So, substituting a numeric value for the entries in the performance column in
the above table (1, 2, 3, and 4 for unsatisfactory, marginal, satisfactory, and excellent
respectively) and computing the average, giving a score of
(3+3+1)÷3 = 3.3 for that program outcome.

Comments:

• For next year's class, if this instructor is to teach it again, he plans to spend more time on topics pertaining to Outcome 3 by reducing the coverage of topics pertaining to Outcome 2.
• Several students complained about the TA's grading.
• Class attendance was the best in a number of years.

CSE 389: Information Protection
Course Assessment for Undergraduates in Spring 2011

1 Course Learning Outcomes


After successfully completing this course, a student should be able to:
1. Understand the threat environment of applications and networks;
2. Plan a risk assessment of a network;
3. Understand basic elements of cryptography and cryptographic systems;
4. Understand the importance of access control using firewalls;
5. Understand host, data, and application security.
2 Assessment Methodology
For the purpose of objective assessment, numeric scores were calculated to measure each student’s
mastery of each learning outcome. The following table shows the metrics used for assessment. The
metrics include midterm and final exam questions as well as grades on the projects. Homework and quiz
grades were not used for assessment because they were part of the initial topic presentation and were thus
not a good measurement of topic mastery.

Outcome   Metrics
1         Midterm Q4
          Midterm Q5
          Final Q11
          Final Q13
2         Project 1
          Project 2
3         Midterm Q10
          Midterm Q12
          Final Q4
4         Final Q2
          Final Q12
          Final Q14
5         Midterm Q7
          Midterm Q8
          Final Q1
          Final Q10

Table 1: Metrics for Performance Assessment

Each of the metrics is of approximately equal importance to each corresponding learning objective, so the
numeric scores were calculated as the normalized average of the scores. Normalization of scores involved
dividing each score by the maximum number of possible points. Some lab exercises and projects included
extra credit opportunities. For this assessment, grades were limited to a maximum of 100% to make
average scores representative of the class as a whole. Finally, a score was computed for each outcome by
averaging each student’s individual score for the outcome.

Only the performance of Computer Science majors was evaluated. There were 18 Computer Science
majors enrolled in the course.
CSE 113 Undergraduate Course Assessment Spring 2011

2.1 Performance Measurement

Student grades in the class were assigned using a straight scale. Hence, a similar scale was chosen to
assess overall performance. An average of less than 50% signified a lack of comprehension. An average
between 50% and 65% showed a marginal understanding. A passing grade between a D and a B
demonstrated satisfactory ability, but not perfection. A grade better than an 85% (B) showed excellent
comprehension and effort.

Percentage Performance
0 – 54 Unsatisfactory
55 – 64 Marginal
65 – 85 Satisfactory
85 – 100 Excellent

Table 2: Numerical Ranges for Performance Measurement

3 Results for Course Outcomes


The following table provides a summary of the results for each course learning outcome, calculated as
described above. The table also provides the number of students who achieved each performance level, in
order to give a better picture of the performance distribution of the class.

Number of Students
Outcome Unsatisfactory Marginal Satisfactory Excellent Score Performance
1 2 0 4 12 86.64 Excellent
2 1 0 4 13 87.2 Excellent
3 2 1 6 9 83.12 Satisfactory
4 1 1 6 10 81.57 Satisfactory
5 2 3 6 7 74.97 Satisfactory

Table 3: Performance Measurements for Course Learning Outcomes

Of the 18 students, one stopped showing up in the middle of the semester. As the table shows, this
student's performance was not indicative of the rest of the class.

3.1 Outcome 1: Understand Threat Environment

The performance measurement for objective 1 was Excellent; this was the second highest scoring outcome. The outcome was accomplished through daily discussions of current-event topics as well as lecture material. A total of 4-5 weeks was spent on the material used to score this outcome.

Of the 18 students, one did not take the final exam, which was used in scoring this metric.
3.2 Outcome 2: Planning Risk Assessments
Risk assessment was a major portion of the course. Students had two main projects where they had to
draft a contract with a client, perform a vulnerability assessment of the client's assets, and write up a
report on findings as well as remediation recommendations.
This was done in groups, but individuals' grades were determined based on peer evaluations in addition to
the group's grade.
Of the 18 students, one did not participate in the group projects, which were used in scoring this metric. Despite this, this was still the highest scoring outcome.
3.3 Outcome 3: Basic Elements in Cryptography
Basic elements in cryptography include broad coverage of symmetric versus asymmetric encryption and cryptographic standards, as well as the difference between encryption and hashing. Two weeks were spent on this topic. Students' prior knowledge of cryptography ranged from no experience to having already taken the Cryptography and Applications class (CSE441). All the students attempted the assignment used as a metric to score this outcome.
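As a toy illustration of that distinction, the following C sketch contrasts a reversible XOR "cipher" with a one-way FNV-1a checksum; this is an assumption of this write-up, not course material, and it is not real cryptography.

    #include <stdio.h>
    #include <string.h>

    /* Toy illustration: encryption is reversible given the key,
     * while hashing is a one-way, fixed-size digest. */
    static void xor_cipher(char *buf, size_t n, char key) {
        for (size_t i = 0; i < n; i++)
            buf[i] ^= key;            /* applying twice restores input */
    }

    static unsigned hash32(const char *buf, size_t n) {
        unsigned h = 2166136261u;     /* FNV-1a: no inverse exists */
        for (size_t i = 0; i < n; i++) {
            h ^= (unsigned char)buf[i];
            h *= 16777619u;
        }
        return h;
    }

    int main(void) {
        char msg[] = "attack at dawn";
        printf("digest: %08x\n", hash32(msg, strlen(msg)));
        xor_cipher(msg, strlen(msg), 0x5A);   /* "encrypt" */
        xor_cipher(msg, strlen(msg), 0x5A);   /* "decrypt" restores msg */
        printf("round trip: %s\n", msg);
        return 0;
    }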
3.4 Outcome 4: Access Control using Firewalls
Firewalls and access control were covered jointly because firewalls are built on access control lists. Using firewalls to teach access control is a practical way of showing that different types of packets require different levels of access, from no access at all to no restrictions.
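A minimal C sketch of this first-match, default-deny rule evaluation follows; the rule structure and fields are simplified assumptions for illustration, not a real firewall's API.

    #include <stdio.h>
    #include <string.h>

    /* Simplified access control list: the first matching rule wins,
     * and anything unmatched falls through to an implicit deny. */
    struct rule { const char *proto; int port; int allow; };

    static int acl_check(const struct rule *acl, int n,
                         const char *proto, int port) {
        for (int i = 0; i < n; i++)       /* first matching rule wins */
            if (strcmp(acl[i].proto, proto) == 0 &&
                (acl[i].port == port || acl[i].port == -1))
                return acl[i].allow;
        return 0;                         /* implicit deny-all default */
    }

    int main(void) {
        struct rule acl[] = {
            { "tcp", 80, 1 },    /* allow HTTP          */
            { "tcp", 22, 1 },    /* allow SSH           */
            { "tcp", -1, 0 },    /* deny all other TCP  */
        };
        printf("tcp/80 -> %d\n", acl_check(acl, 3, "tcp", 80));  /* 1 */
        printf("tcp/23 -> %d\n", acl_check(acl, 3, "tcp", 23));  /* 0 */
        return 0;
    }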
Of the 18 students, one student did not take the final exam, which was used in scoring this metric.
3.5 Outcome 5: Understand Host, Data, and Application Security
The result for Outcome 5 was Satisfactory. This was the lowest scoring outcome. A total of 2-3 weeks was spent in class covering this outcome, and its topics were taught at the end of the semester.
Of the 18 students, one student did not take the final exam, which was used in scoring this metric.
4 Analysis and Remedial Action
The instructor feels that the material used to score the outcomes should be spaced evenly throughout the semester; this time the spacing was poor. For instance, Outcome 1 had 4-5 weeks of instruction at the beginning of the semester, while Outcome 5 had only 2-3 weeks at the end. Another reason Outcome 5 may have scored low is that, during that part of the semester, students were trying to perform a vulnerability test (Outcome 2) and may not have had the time and interest.
Students' networking knowledge was not as strong as the instructor would have liked. A networking primer was taught at the beginning of the class to bring students up to speed on the OSI model and other networking basics. The course could be improved if students arrived with more mastery of this material.
Vulnerability assessments were key to this course; two were given during the semester. The network setup was constructed by the instructor, and every test by the students was run in a controlled environment. Since this was the first attempt for many students at breaking into a system, many made mistakes that would have been unacceptable in a real vulnerability test. This reinforces the plan for this course to have two projects: the first in a controlled environment, and the second possibly a real-world test. This will help Outcome 2.
Course Outcomes: CS451 – Parallel Processing
Students should be able to
1. understand the essential concepts, mechanisms, and parallelization approaches used in modern parallel computing;
2. design and parallelize applications using OpenMP and a shared memory approach;
3. design and parallelize applications using MPI and a distributed memory approach;
4. perform experiments to determine the performance of applications on both shared memory and distributed memory machines;
5. analyze and model the performance and scalability of parallel implementations;
6. determine whether distributed memory or shared memory will be more effective for an application.
(Assessment through homework, exams, individual projects, and team projects)
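For reference, here are minimal sketches of the shared memory and distributed memory approaches named in outcomes 2 and 3; the example computation (a partial harmonic sum) is an illustrative assumption, not taken from course materials. First, an OpenMP version:

    #include <stdio.h>

    /* Shared-memory sketch (outcome 2): the threads split the loop
     * and the reduction clause combines their partial sums.
     * Compile with: cc -fopenmp harmonic_omp.c */
    int main(void) {
        const int n = 1000000;
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= n; i++)
            sum += 1.0 / i;
        printf("harmonic(%d) = %f\n", n, sum);
        return 0;
    }

The same computation with MPI and a distributed memory approach:

    #include <stdio.h>
    #include <mpi.h>

    /* Distributed-memory sketch (outcome 3): each rank computes a
     * strided partial sum and MPI_Reduce combines them on rank 0.
     * Compile/run with: mpicc harmonic_mpi.c && mpirun -np 4 ./a.out */
    int main(int argc, char **argv) {
        const int n = 1000000;
        int rank, size;
        double local = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (int i = rank + 1; i <= n; i += size)
            local += 1.0 / i;

        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("harmonic(%d) = %f\n", n, total);
        MPI_Finalize();
        return 0;
    }

For outcomes 4 and 5, a common starting point is to time both versions, report speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p, and compare against Amdahl's law, S(p) <= 1 / ((1 - f) + f/p), where f is the parallelizable fraction of the program.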
No formal assessment has yet been done for this course. Future assessment will follow the outline below.
Assessment Methodology:
• Each course outcome was tied to one or more questions on an exam and/or portions of a project.
• A formula was used to compute a normalized weighted sum from the scores for those questions.
• The numeric score per outcome was averaged over the whole class.
• Data for all students who completed the course was used for this analysis.
The formulas used were:
Course Outcome   Formula
1                average(MQ1, MQ2, F1, F2, Quizzes)
2                average(MQ5b, H3, F5b, F5c)
3                average(MQ5a, H2, F6b, F6c)
4                average(H2, H3, H4, Pc5)
5                average(F3, F4, F5.2c, F6.2c, Pc6)
6                Pc4
(Notation: MQ3 represents the score on question 3 of the midterm exam divided by the maximum possible points for that question; H indicates a homework assignment; E indicates an exploratory project; F3 indicates final exam question 3; Pc1 indicates component 1 of the major semester project; and Evaluations are graded student evaluations of other students' projects. "average" indicates averaging the scores of the listed components with equal weighting.)
Performance Metric
The performance metric to analyze outcomes for this course is as follows:
Excellent corresponds to greater than 90 percent.
Satisfactory corresponds to 70 to 90 percent.
Marginal corresponds to 60 to 70 percent.
Unsatisfactory corresponds to less than 60 percent.
Results:
The outlined process results in numeric assessment scores:
Outcome   Class Average   Performance
1         81.1            Satisfactory
2         86.4            Satisfactory
3         87.9            Satisfactory
4         88.6            Satisfactory
5         90.0            Excellent
6         90.3            Excellent

Actions:
The previous assessment of the course specifically recommended improvement in OpenMP and MPI programming. Both have gone from class averages in the low-to-mid 70s to the mid-to-high 80s. All performance levels are acceptable and do not need specific improvement.
The one thing that could still use some improvement is further separation of the criteria used for assessment.
Class Assessment: Spring 2011, CSE452 – Introduction to Sensor Networks
Class outcomes:
At the end of the course, a student should be able to:
1. Understand the basic concepts, keywords/terminology, constraints, and challenges of sensor network technology, covering the different hardware and software levels of individual sensor motes, and the significant importance of sensor networks in a vast number of civil and military applications.

2. Understand the specialized sensor network MAC and network protocols, and the key factors that distinguish them from traditional network protocols.

3. Gain hands-on experience in the design, modeling, and analysis of real, live sensor network applications deployed in their actual terrains, with the addition of neural network models that add "smartness" to the designed sensor network models.
The methodology we deployed to calculate the assessment percentages reported below is as follows:
1) For each of the class outcomes listed above, identify its contributing modules, such as individual lab experiments, homework, and projects that relate to that outcome. Each contributing module maps to only one class outcome, i.e., all modules are orthogonal with respect to the class outcomes. All class outcomes are covered by the instruction modules used in the assessment process.
2) For each outcome, compute the students' obtained semester average for every contributing module in (1) above, i.e., labs, homework, and projects. The obtained average is divided by the points assigned to that component to get a percent score.
3) For each outcome, add all of the scores obtained from the different contributing modules in (2) above, after weighting each by a factor that reflects the importance of that module's score, to obtain the outcome's final semester percentage score. For example, the final semester field projects have higher factors than the other components. (A sketch of this computation appears after this list.)
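The following minimal C sketch shows the weighted combination in step (3); the module scores and weighting factors are hypothetical assumptions, since the actual factors are chosen by the instructor.

    #include <stdio.h>

    /* Sketch of step (3): weight each contributing module's percent
     * score by an importance factor and sum.  All numbers here are
     * hypothetical; the factors sum to 1, with the field project
     * weighted highest. */
    int main(void) {
        double pct[3]    = { 92.0, 88.0, 96.0 };  /* labs, homework, project */
        double factor[3] = { 0.25, 0.25, 0.50 };
        double outcome = 0.0;
        for (int i = 0; i < 3; i++)
            outcome += factor[i] * pct[i];
        printf("outcome semester score: %.1f%%\n", outcome);
        return 0;
    }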
Assessment Result:
Class Outcome   co1   co2   co3
Semester %      94%   90%   91%
Performance Metrics:

The scale used to assess each class outcome is as follows:

Class Average   Performance
< 40%           Unsatisfactory (1)
40 – 54%        Marginal (2)
55 – 79%        Satisfactory (3)
80 – 100%       Excellent (4)
Outcome   Class Average   Performance
1         94%             Excellent (4)
2         90%             Excellent (4)
3         91%             Excellent (4)
Conclusion
Outcome 1: Students did very well in understanding the sensors available in the lab (MICA-Z and IRIS) and their capabilities and functions. They were also able to install all required software drivers and the underlying TinyOS operating system.

Outcome 2: Lectures had full attendance and the attention of the class. The students' responses to questions on sensor network fundamentals reflected their understanding of the different specialized sensor network MAC and network protocols (e.g., S-MAC, IEEE 802.15.4/ZigBee, and the route-less routing protocols of MASNET).

Outcome 3: The students really enjoyed the part of the class in which they went on their first field trip to set up an experimental "smart asynchronous event detector sensor network unit" in its actual deployment terrain at New Mexico Tech. After collecting sensed data for about half a day, the students compiled and analyzed the field data, and with the help of neural modeling software they were able to evaluate their system. Though the system's performance was not great, the students were able to justify the results and had a plan for making it work better in the future.