
Project #2: Criterion-referenced Evaluation
EDTECH 505-4173
November 2, 2010

Members: Tanya Blakley (Team Leader), Christina DeLeo, Jennifer Pletcher

Part 1: Link to Evaluation Rubric


https://spreadsheets.google.com/a/u.boisestate.edu/viewform?hl=en&formkey=dFZNWTYzZHJrdG1pbFdLT3g5QmJkaXc6MQ#gid=0

Part 2: Tabulation of Evaluation Results


WBLE Link | Usability | Content | Educational Value | Vividness | TOTAL
http://questgarden.com/110/56/1/101006200409/ | 3.6 | 3.9 | 3.0 | 3.8 | 3.6
http://www.starfall.com/ | 3.8 | 4.0 | 3.6 | 4.0 | 3.9
http://www.smartteaching.org/blog/2008/08/the-ultimate-guide-to-blackboard-100-tips-tutorials/ | 3.1 | 3.3 | 2.5 | 3.8 | 3.2
http://www.gcflearnfree.org/everydaylife/ | 4.0 | 4.0 | 3.1 | 4.0 | 3.8
http://education.ed.pacificu.edu/sweb/archer/webquest.html | 3.8 | 3.4 | 3.3 | 3.8 | 3.6
http://www.a-systems.net/part1.htm | 3.7 | 3.5 | 2.1 | 4.0 | 3.3
http://www.moneyinstructor.com/accounting.asp | 3.7 | 3.7 | 2.9 | 3.5 | 3.5
http://www.econedlink.org/lessons/index.php?lid=165&type=student | 3.8 | 4.0 | 3.1 | 3.8 | 3.7
https://www.crayola.com/lesson-plans/detail/purchasing-flower-power-lesson-plan/ | 3.4 | 3.8 | 2.7 | 4.0 | 3.5
http://simplestudies.com/accounting-questions.html | 3.5 | 3.9 | 2.7 | 4.0 | 3.5

Analysis of Evaluation Results

This criterion-referenced evaluation was intended to quantitatively measure whether 10 Web-based Learning Environments (WBLEs) met the standards and criteria set forth in the rubric published in the British Journal of Educational Technology by Baya'a, Shehade and Baya'a. A sample of WBLEs was randomly selected by each evaluator. Each evaluator was expected to use the rubric to rate the sites of her counterpart as well as her own. As a result, there is a potential for bias in the ratings, although each evaluator was mindful of this and worked to avoid it.

The empirical results presented in this report are interval data collected from a scale. Evaluators were asked to rate each subcategory as excellent, good, fair, or poor. The numerical values 4, 3, 2, and 1 were assigned to these ratings, respectively, to calculate an average score for each category; to reflect the scale, a rating of excellent received the highest value, good the next highest, and so on. The ratings for all the subcategories under the Usability category, for example, were averaged to produce the Usability score (a short sketch of this computation appears after the comment list below). To supplement the ratings, evaluators also provided optional comments for each of the higher-level categories, addressing the subcategories they scored.

Usability ratings for the WBLEs covered the following subcategories: purpose, homepage, navigation, design, enjoyment, and readability. Ratings in this category tended to be high; the lowest was 3.1, and it did not significantly affect the average across all 10 sites, which was 3.7. The median score was also 3.7 and the mode was 3.8. Based on the subcategory scores and evaluator comments, these high scores can primarily be attributed to ease of navigation, high readability, clear purpose, and homepage layout.

Content ratings covered the following subcategories: authority, accuracy, relevance, sufficiency, and appropriateness. The evaluators' ratings were very similar in the last four subcategories, which received all 3s and 4s. The authority subcategory had the largest variation in scores, including a few fair and poor ratings. The average of all content scores is 3.7, the median is 3.9, and the mode is 4.0.

Educational Value had the most subcategories to rate: learning activities, activity plan, resources, communication, feedback, rubric, and help tools. Among the main criteria, this area had the largest variance in scores, ranging from 2.1 to 3.6. Comments on the educational value of the WBLEs also varied widely. Some examples of comments are:

- Feedback was difficult to find under the Conclusion page; the web links were great resources to assist in studying how cells work.
- It is difficult to tell if a rubric is needed based on the level of the reader.
- The Help Desk was very useful. The rubric consisted of a learning activity worksheet that the learner could use to navigate his/her learning outcomes.
- The accounting questions were a good resource to use for a rubric.
- This site showed some very basic principles of accounting, then advertised software to take over when the accounting became complex. The activities did not allow the user to work alone and receive feedback; this was more of a show and tell.
- The site contained questions that followed the lesson. It was not interactive, did not have a rubric, and the student could not receive feedback.
- There were interactive links, but not necessarily tied to the particular lesson I evaluated. The help tools consisted of an FAQ section.
- It was strictly a site teaching how to make a product. The lesson was not meant to be completed over the Internet, so feedback would be provided in person. There were no help tools.
- Did not have a rubric, but provided immediate feedback to answers.
- A rubric was not included, and there was no option for feedback.
- No rubric or help tools. There was a blog, but it was not interactive.
- There was an option to communicate with peers, but it was not easy to locate.
- There was no way to interact through this site. Feedback was intended to be provided by a live teacher, not through the site.
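The scoring scheme described in the analysis above is straightforward to reproduce. The sketch below is a minimal illustration added for clarity, not part of the original evaluation; the function name and the sample ratings are hypothetical, showing only the 4-to-1 mapping and subcategory averaging the report describes.

```python
# Minimal sketch of the scoring scheme: verbal ratings map to the
# 4-point scale, and a category score is the mean of its subcategory
# scores. The sample ratings below are hypothetical, not actual data.

SCALE = {"excellent": 4, "good": 3, "fair": 2, "poor": 1}

def category_score(ratings):
    """Average a list of verbal subcategory ratings on the 4-point scale."""
    values = [SCALE[r] for r in ratings]
    return round(sum(values) / len(values), 1)

# e.g., the six Usability subcategories (purpose, homepage, navigation,
# design, enjoyment, readability) with hypothetical ratings:
usability = ["excellent", "good", "excellent", "good", "good", "excellent"]
print(category_score(usability))  # 3.5
```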

Comparisons and Discrepancies

The site that received the highest overall rating was starfall.com. Its design and layout made it easy to follow and navigate, and its colors were pleasing and effective in gaining attention. Starfall.com contained entertaining and educational interactions: the site required user input, provided immediate feedback, and included activities designed to allow the learner to move from learning basic information to higher-order thinking. The graphic elements helped the learner make visual connections that aid in understanding conceptual knowledge.

The site that received the lowest rating was The Ultimate Guide to Blackboard. This site had a basic white background with black font. The homepage had 100 different tutorials in 15 subcategories, which provided a variety of external resources, but to a fault: it made the learning experiences disjointed and ill-structured. An attractive feature of the site was the pictures, which gave the user a visual to accompany the text. The site did not allow for specific learning activities or feedback; users were able to contact the support staff, but not other users.

Interrater reliability did not appear to be strong for the Educational Value category. In 8 out of 10 ratings there was a difference of 0.7 or higher, with many varying by a whole point or more, as the table below shows. This could be attributed to the tool itself or to the evaluators' interpretation of the criteria. The other categorical data presented negligible variances between the evaluators.

Site | Evaluator 1 | Evaluator 2 | Difference
1 | 3.8 | 2.7 | 1.1
2 | 3.7 | 3.6 | 0.1
3 | 2.4 | 2.6 | 0.2
4 | 3.5 | 1.9 | 1.6
5 | 2.6 | 1.6 | 1.0
6 | 3.9 | 2.0 | 1.9
7 | 3.4 | 2.7 | 0.7
8 | 3.3 | 2.1 | 1.2
9 | 3.6 | 2.6 | 1.0
10 | 3.9 | 2.3 | 1.6
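As a rough arithmetic check on the interrater figures, the short sketch below (added here for illustration, not part of the original report) recomputes the per-site differences from the two evaluators' Educational Value scores in the table above and counts how many reach the 0.7 threshold.

```python
# Per-site Educational Value scores from the two evaluators (taken from
# the table above) and the absolute difference between them.
evaluator_1 = [3.8, 3.7, 2.4, 3.5, 2.6, 3.9, 3.4, 3.3, 3.6, 3.9]
evaluator_2 = [2.7, 3.6, 2.6, 1.9, 1.6, 2.0, 2.7, 2.1, 2.6, 2.3]

differences = [round(abs(a - b), 1) for a, b in zip(evaluator_1, evaluator_2)]
print(differences)  # [1.1, 0.1, 0.2, 1.6, 1.0, 1.9, 0.7, 1.2, 1.0, 1.6]

# Count the ratings that differ by 0.7 or more: 8 of the 10 sites,
# matching the figure reported in the discussion.
print(sum(d >= 0.7 for d in differences))  # 8
```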

Use of the Evaluation Rubric

The rubric was a general tool that provided criteria to facilitate the evaluation of the various sites. It was well thought out and planned, and it yielded an overall efficient and effective evaluation process; the explanations that accompanied each subcategory supplied the questions the evaluators needed to answer. One limitation encountered was the difficulty of using the same rubric to evaluate both sites aimed at younger audiences and sites aimed at adults. This required an understanding and constant consideration of the target audience when rating usability, content, and educational value; it was helpful that the rubric included a question about meeting the needs of the target audience. Evaluating sites aimed at different age groups also demonstrated how site designers used similar techniques to promote learning. In addition to possible issues with the tool itself, discrepancies could emerge from personal interpretations of how well a site met the criteria in the rubric; no matter what rubric had been used, personal interpretations of the content could vary.
