
Running Head: Francis - TECHNOLOGY TRAINING EVALUATION

Technology Training Evaluation

Taylor Francis

EAC 585, North Carolina State University


Introduction

As of March 2011, Facebook has over 500 million registered users (Facebook, 2011) and Twitter has over 175 million open accounts (Carlson, 2011). Social media is already a dominant part of modern global culture, and all indications are that it will continue to grow well into the future. The average student today is far more familiar and comfortable with technology than his counterpart even one generation ago. The multitude of technological assets available to the average person today can be overwhelming, but they also present a unique opportunity for individuals in the arena of training and development.

In an effort to capture the benefits that technology provides, training and education programs have begun to incorporate technology at ever higher rates. Distance education and e-learning are now common themes on university campuses throughout the world. The functions available on the internet today offer instructors and students the opportunity to connect with one another across time and distance in a previously impossible way, and can foster collaboration and cooperation on a much larger scale than seen before. With that in mind, if instructors are going to use technology, it must be used correctly to reap its benefits while negating any detriments it may bring. If instructors are to achieve this goal, there must first be a consensus as to what constitutes success for an e-learning or technology-based program.

The Distance Education & Learning Technology Applications (DELTA) program at NC State University has been in existence since July 2000 and is designed to improve the quality of education by harnessing technology to provide ready access for all learners (DELTA, 2011). It consists of over 60 full-time staff members who facilitate all distance education and learning management system (LMS) training at NC State (DELTA, 2011). Through the DELTA program, numerous training seminars and workshops are offered both online and in person.
These seminars are delivered in multiple formats, including taping a live presentation and posting the video on the DELTA website along with any accompanying audio-visual aids. Using this delivery method, the author chose to evaluate the seminar Pedagogical Facebook and Twitter? Using Social Media for Academic Good.

Methods

The world of e-learning evaluation is presently an academic Wild West. Numerous theories, hypotheses, and models abound for how we should evaluate e-learning, but none has taken a firm grasp as the preeminent model in the field. Most models are relatively similar, with anywhere from six to nine primary dimensions and myriad sub-criteria within these dimensions (Lee-Post, 2009; Chua & Dyson, 2004; Tzeng, Chiang, & Li, 2007; Stewart, Hong, & Strudler, 2004). Some models reach as high as eighteen distinct primary factors that the researchers attempt to measure and include in their evaluation model (Fetaji & Fetaji, 2009). With such relative similarity, one might think that a consensus would be an achievable goal. However, these proposed models are also vastly different.

Chua and Dyson (2004) strive to bring some familiarity and standardization to the process through their recommendation of ISO 9126 as an evaluative model. First proposed in 1991 by the International Organization for Standardization (ISO), ISO 9126 was originally conceived as a way to measure the effectiveness of computer software. Chua and Dyson (2004) advocate for the model's utility in also managing technology-based educational courses. This idea does have its merits in that ISO 9126 is a pre-existing model familiar to many researchers, and it is set forth by an organization, the ISO, whose sole purpose is to create a standard that represents a global consensus on the state of the art in the subject of that standard (ISO Strategic Plan 2011-2015, 2010). Chua and Dyson
(2004) still find some flaws in the transition from evaluating software to evaluating technology-based learning platforms, but believe they can be overcome. However, this method becomes problematic when one notes that ISO 9126 was withdrawn as an ISO standard in June 2001 and has since been replaced with three iterations of a new standard, culminating most recently in ISO 25060 in July 2010 (ISO website, 2011). Would Chua and Dyson have researchers continue to use ISO 9126 with modifications, or adjust to incorporate the newest model each time the ISO introduces a new standard?

Other researchers (Tzeng et al., 2007; Stewart et al., 2004) use a more statistics-based approach in the development of their evaluation models. As with Chua and Dyson, Tzeng et al. (2007) rely on previous proposals for the creation of their model, namely the multi-criteria decision making (MCDM) approach and the Decision Making Trial and Evaluation Laboratory (DEMATEL) method. These methods are highly mathematical in nature and, while apparently effective, are difficult to apply and cumbersome to fully comprehend. The applicability of the model proposed by Tzeng et al. can be questioned simply from the perspective of the time and foundational knowledge needed to apply it accurately and to understand its results. One would need a thorough knowledge of the DEMATEL method as well as a strong base in advanced statistical analysis to use this model, which handicaps its overall utility. Stewart et al. (2004) also rely on statistical analysis for their model, incorporating four principal component analyses (PCA) and four maximum likelihood (ML) methods. These authors' model and statistical analyses offer greater ease of use than those proposed by Tzeng et al.; however, the model is limited by its seven primary factors. Of the seven categories proposed by Stewart et al.
(2004), three (Appearance of Web pages, Hyperlinks and navigation, and Online applications) aren't applicable to all types of technology-based learning and weren't germane to the training program used for this analysis.

Lee-Post (2009) provides an evaluation model that consists of three overarching themes designed to assess a training program from beginning to end: system design, system delivery, and system outcome. Within these three themes there are six specific success dimensions that are measured through numerous criteria. These dimensions are system quality, information quality, service quality, use, net benefits, and user satisfaction (Lee-Post, 2009, p. 63). In her experiment, Lee-Post (2009) provided a survey with questions designed to measure the sub-criteria within each dimension. Each criterion was rated 1-5, with 5 being the highest rating and 1 the worst. The mean score for all the sub-criteria in one dimension was calculated as a percentage of the total score achievable in that success dimension. For example, if five survey questions were used for one dimension, then 25 would be the highest possible score. Thus, the mean score calculated would be expressed as a percentage based on that maximum possible score. The target goal for Lee-Post to consider a program successful in a particular area was 85%.

Lee-Post's method is scientifically based and effective while also being intuitive, easy to understand and replicate, and not requiring advanced statistical analysis. Thus, in the author's opinion, although Lee-Post's processes are not 100% replicable, her model seems the most useful for the task at hand. A survey was not created, but the author was able to provide a 1-5 score for each sub-criterion and thereby achieve an overall mean score, again expressed as a percentage of the total possible score.
Following Lee-Post's (2009) technique, 85% was again determined to be a valid point to separate success from failure in each of the six primary factor areas. Based on the methods provided by Lee-Post (2009), the author evaluated the online program Pedagogical Facebook and Twitter? Using Social Media for Academic Good.
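The scoring arithmetic described above can be sketched in a few lines of Python. The function names and the sample ratings below are purely illustrative and are not drawn from Lee-Post's (2009) published materials; they simply restate the mean-as-percentage calculation and the 85% threshold.

```python
def dimension_score(ratings):
    """Mean of the 1-5 criterion ratings for one success dimension,
    expressed as a percentage of the maximum possible score."""
    if not ratings:
        raise ValueError("at least one criterion rating is required")
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("each rating must fall on the 1-5 scale")
    # e.g. five criteria give a maximum possible score of 5 * 5 = 25
    return 100.0 * sum(ratings) / (5 * len(ratings))

def is_successful(ratings, threshold=85.0):
    """Apply the 85% success threshold to a dimension's ratings."""
    return dimension_score(ratings) >= threshold

# Hypothetical ratings for a five-criterion dimension:
# 5 + 5 + 4 + 4 + 4 = 22 of a possible 25, i.e. 88%.
example = [5, 5, 4, 4, 4]
print(dimension_score(example))  # 88.0
print(is_successful(example))    # True
```

For instance, ratings of 5, 5, 4, 4, and 4 across five criteria total 22 of a possible 25, or 88%, which clears the 85% benchmark; uniform ratings of 3 would yield only 60% and fall short.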


Analysis

Pedagogical Facebook was originally offered in May 2009 and led by Stephanie Trunzo, a visiting lecturer in the NC State University English department who specializes in technical writing. Trunzo initially began incorporating technology into her courses in 2002 by offering a website to enhance her more traditional, classroom-based, lecture-style class. Through the years, she has continued to develop her use of technology to the point that, as of the presentation in 2009, she taught her courses entirely online. Based on her experience in the field, DELTA orchestrated this opportunity for others to learn from Trunzo about different ways to use social media and Web 2.0 applications to augment learning in educational settings. Table 1 shows the author's analysis of Pedagogical Facebook using Lee-Post's (2009) six success dimensions.
Table 1
Analysis of Program Success for "Pedagogical Facebook"

Theme              Success Dimension      Score
System Design      System Quality         87%
System Design      Information Quality    73%
System Design      Service Quality        100%
System Delivery    Use                    70%
System Outcome     Net Benefits           85%
System Outcome     User Satisfaction      75%

Dimension 1: System Quality

System quality is designed to measure such factors as ease of use, security, and stability of the platform. Depending on the nature of the technology used and the technological interaction required in the class, the importance of system quality can increase or decrease significantly. Overall, the system quality for this seminar scored an 87%, which meets the threshold for success.

For Pedagogical Facebook, most of the technology was controlled by Trunzo, so system quality was not as big a factor for the user. Any issues with the website and technology used to present the course would be directed to DELTA generally and not to Trunzo or her class specifically. However, the technology that was incorporated, including PowerPoint slides, websites, and Facebook pages, all worked appropriately, responded quickly, and provided stable platforms for the user.

The criterion that was the biggest limiting factor for system quality was the program's user-friendliness. Trunzo controlled the PowerPoint presentation and all the applicable websites. There was no method for the user to have personal access to these materials. As such, if there was something of interest to the user, or a point of confusion that would require more in-depth research, the user was unable to proceed down that path. Trunzo's course website had numerous links to other websites and a wealth of data that was impossible to process in real time. Unfortunately, the student had no recourse to examine these links or data and had to rely instead on Trunzo's pace. Further, the
link provided to the course website was no longer valid. Therefore, anyone viewing the seminar on DELTA couldn't go back at a later date and review the website or attempt to learn more through that function. While the system quality of Pedagogical Facebook was a success, it could be improved by making its technology more user-friendly and offering more personalization.

Dimension 2: Information Quality

Pedagogical Facebook offered an outstanding amount of information; however, the presentation of this information could be greatly improved to make a more successful end product. The information quality rated a 73%, falling short of the 85% required to be considered successful. The seminar did very well in several criteria, including its usefulness, relevancy, and currency. As social media sites and Web 2.0 become a prevalent part of modern society, they will need to play an increasing role in educational offerings. Trunzo stated that educators and trainers need to meet students where they are and, since the average student today is familiar with these technological advances, educators can either get behind, catch up, or get ahead (Pedagogical Facebook, 2009). With that idea in mind, Trunzo's expertise and knowledge of social media, and of how to make it educationally useful, is pertinent and will only become more valuable in the future. Despite her years of experience, Trunzo still claimed that she felt she was only catching up and was not yet ahead of the curve in relation to social media in the classroom. Having seminars such as Pedagogical Facebook for trainers to reference is a valuable tool to facilitate learning and improved educational experiences in the future.

Unfortunately, the seminar suffers greatly because of the delivery of its information. The PowerPoint slides that accompanied the course were busy and at times distracting.
The student was forced to choose whether he would focus on the slides at the expense of missing some of the lecture or listen to the lecture while sacrificing a full study of the slides' information. The presentation also was often disjointed, jumping from one topic to another and back again, leading to potential confusion on the part of the learner. The course could benefit from organizing the information in a different manner so that the seminar flows more smoothly and transitions cleanly from one concept to the next.

The workshop also suffered from a lack of examples. There are numerous different types of social media capabilities on the web today, and Trunzo mentions many of them. However, she rarely provides examples to allow the user to see or experience what she's explaining verbally. Trunzo talks about the utility of podcasts and briefly discusses how she uses the online applications Twitter and Delicious in her courses. However, she never accesses Twitter or Delicious on the web to show the learner how to use these tools, or even gives a visual example of how she's used them in the past. For a learner unfamiliar with these technologies, this course offers limited benefit in terms of becoming more comfortable with the options they provide and how to incorporate them into future classes. The information quality of Pedagogical Facebook fails to meet the standard of success used for this analysis because, while the information is current and highly relevant, the delivery methods need to be modified and improved in the future.

Dimension 3: Service Quality

Similar to system quality, service quality is a dimension that will fluctuate greatly in importance based on the particular course offered and the use of technology within that course. Service quality could become an issue in courses where technical support or teacher-student interaction is limited or inhibited by time or distance factors, or is of a low quality in general.
In this particular instance, service quality was the highlight of the course, earning a score of 100%. The service offered in this seminar was primarily in the form of a question and answer session at the end
of the presentation. While this is different from the services required for many technology-based programs, it is still an important component of the overall success of the class. Service quality was measured through five criteria: fairness, promptness, knowledge, responsiveness, and availability. Some criteria, like promptness and availability, were somewhat artificial due to the nature of an in-person question and answer session as compared to other service issues that could arise, such as software technical support. Pedagogical Facebook received the highest possible score in all five criteria. Trunzo made herself available to the students, was patient with all the questions, and answered each question fully. Her knowledge and expertise in the field were evident, and she thoroughly explained her response to each query. In the future, different methods of service could potentially be incorporated, but based on the criteria used for this analysis, the service quality of Pedagogical Facebook was exceptional.

Dimension 4: Use

Use, the one dimension under the system delivery theme, was the weakest portion of this seminar. In the analysis, it scored 70%, the lowest of any of the six dimensions and well below the score necessary to be deemed successful. Some of the issues with use have already been discussed as they relate to a lack of examples and user interaction with the PowerPoint slides and websites. Unfortunately, the seminar suffered from other issues as well. The audio was excellent for the most part, but that was because Trunzo was the only speaker. As soon as students in the classroom started asking questions, one realized a major problem: they were impossible to hear. The utility of the question and answer session was hindered simply because the user couldn't hear the questions as they were being asked. Trunzo did repeat some of the questions, but for the most part did not. Another primary issue was the lack of user control over the PowerPoint slides.
In the taped version of the seminar provided by DELTA, there was no way for the user to reference a previous or future slide without also changing the accompanying audio and video. So, the only options for the user were either to pause the video, interrupting the flow of the lecture while the learner examined the slide, or to skip to a different point in the lecture corresponding to when the desired slide was being shown. The program could also have benefited from practice problems or exercises for the user. Trunzo often referenced concepts or capabilities such as podcasts, re-tweeting on Twitter, and tagging on Delicious without offering an example or allowing the students to examine these ideas. Taking a few minutes to walk the user through the process of creating a tweet on Twitter or tagging a reading using Delicious would have paid huge dividends. For a class focused on using social media and technology for pedagogical benefits, the analysis shows Pedagogical Facebook could have done a much better job optimizing its delivery for the user.

Dimension 5: Net Benefits

The net benefits dimension is one of the two dimensions that fall under the umbrella of system outcomes. Determining the net benefits of a course consists of factoring in the positive and negative aspects of the training and assessing the value of the training. As well, part of this assessment involves contrasting the benefits gained through using technology in the class as opposed to conducting the course without technological enhancements. The dimension received a rating of 85% in this analysis, showing that it was considered successful per the criteria set forth. For the most part, Pedagogical Facebook did exactly what it set out to do by helping the learner better understand how social media and Web 2.0 applications can be used in an educational
setting. Trunzo used her own experiences and expertise to instruct the students about the benefits offered by Web 2.0 technologies. She showed her official NC State course website as well as Facebook group pages she has created to augment the learning processes in her courses. By doing this, she allowed the learners to see a final product and presented them with one potential model of how these technologies could be used educationally. Although there are steps Trunzo could take to better the course, ultimately, the student does learn and finishes the course feeling more confident in his ability to instruct through social media.

The one criterion that detracts the most from the net benefits of this course is time savings. Since the course was lecture-based and the user was unable to control the pace, there was no time benefit to using DELTA for this seminar. Also, the course was designed such that the audio-visual aids were tied to the lecture. Therefore, there was no reason for the user to move through the slides at his own pace, as they had limited benefit without Trunzo's in-person additions. This seminar does provide noticeable benefits to the user in terms of knowledge and ability, but its use of technology is not designed in any way to allow the user to save time.

Dimension 6: User Satisfaction

User satisfaction is certainly a subjective success dimension, and what is satisfactory to one user may not meet the standards of another. However, in order for a program to be truly successful, it must attempt not only to instruct, but to do so in a way that is pleasing to the user. Pedagogical Facebook received a 75% for this dimension in this analysis, below the standard set to determine achievement of success. The course scores well in terms of providing an enjoyable experience for the user and in general overall satisfaction. The learner could easily see that Trunzo is comfortable with the material and in command of the classroom and the information delivered.
Trunzo is approachable, open to questions, and asked often about the learners' knowledge and comfort levels with various topics and social media applications. Where Pedagogical Facebook loses points in the user satisfaction dimension is in overall success and in whether the learner would recommend the course to others. Trunzo has an excellent knowledge of the topic and conveys a lot of information to the learners, but she falls short in helping the learner achieve independence. Leaving the class, an attendee may now know what some of the newer technological tools are and different ways they can be used in a classroom setting. However, the student won't know what steps he needs to take in order to start using these tools more effectively. In order to fully understand Twitter, Delicious, Facebook, or other similar applications, how they function, and the benefits they can provide, the attendee will have to either have a prior working knowledge of the applications or take it upon himself to research them on his own time. Thus, while the course is enjoyable and well taught, there are changes that can be made that will increase the learner's belief in the success of the seminar as well as his willingness to recommend it to others.

Discussion

Evaluating training programs can be a long and difficult process, made worse when there is confusion as to what precisely is being evaluated. This process is further complicated when one attempts to factor in e-learning and the adaptations required by incorporating new technologies into traditional training seminars. As the world of e-learning has blossomed in the past few years,
numerous models have also bloomed to help determine how best to evaluate the success of these non-traditional methods of instruction. Each of these models has merits and demerits, but Lee-Post's (2009) model is the most applicable for this analysis. Analyzing Pedagogical Facebook and Twitter? Using Social Media for Academic Good, a course taught by Stephanie Trunzo and delivered via the DELTA program at NC State University, one finds the class has high points and low points. Certain criteria in each of the six success dimensions were rated on a 1-5 scale, and the mean score was expressed as a percentage of the maximum possible score in that dimension. An 85% score was required to assess achievement of success in each dimension. Pedagogical Facebook achieved this 85% benchmark in three of the six categories (system quality, service quality, and net benefits) and failed to reach it in the other three (information quality, use, and user satisfaction). It scored highest, 100%, in service quality and lowest, 70%, in use (i.e., usability).

Trunzo did a fantastic job teaching the course and was knowledgeable and willing to help the students learn. The course itself, however, was not as user-friendly as it could have been in terms of organization, presentation, and interaction. There are certainly variables that may have played a role in the design of this course that weren't considered for this analysis. For example, were there constraints placed on Trunzo by outside forces in terms of the time allotted to give the presentation or the time allotted to create it? Were there limits in terms of the technology available in the room where the seminar was conducted? What, if any, were the computer network restrictions that might have prevented Trunzo from accessing some of the social media websites she referenced? All these factors and more may have had an unseen hand in the conduct of this seminar.
Regardless, as Lee-Post (2009) explains, an e-learning program cannot be evaluated on outcome success alone; the evaluation must encompass the totality of the course, from system design through system delivery to system outcomes. Pedagogical Facebook shows the most success in the system design facet and performs fairly well in system outcomes, although improvements could be made. However, under this holistic design concept, it falls short in system delivery. In order to fully maximize the potential of this course as a valuable tool for learners, changes need to be made in information quality and use that will, in turn, pay dividends in the user satisfaction dimension. With these relatively minor changes, the program can become an excellent seminar furthering the growth of social media, Web 2.0 applications, and technology in training and education.


Reference List

Carlson, N. (2011, March 31). Chart of the day: How many users does Twitter really have? Business Insider. Retrieved from http://www.businessinsider.com/chart-of-the-day-how-many-users-does-twitter-really-have-2011-3

Chua, B. B., & Dyson, L. E. (2004). Applying the ISO 9126 model to the evaluation of an e-learning system. In R. Atkinson, C. McBeath, D. Jonas-Dwyer, & R. Phillips (Eds.), Beyond the comfort zone: Proceedings of the 21st ASCILITE Conference (pp. 184-190). Perth, 5-8 December. Retrieved from http://www.ascilite.org.au/conferences/perth04/procs/pdf/chua.pdf

Distance Education & Learning Technology Applications (DELTA). (2011). DELTA fact sheet. Retrieved from http://delta.ncsu.edu/about/factsheet/

Distance Education & Learning Technology Applications (DELTA). (2011). DELTA mission/vision. Retrieved from http://delta.ncsu.edu/about/mission/

Facebook. (2011). Statistics. Retrieved from http://www.facebook.com/press/info.php?statistics

Fetaji, B., & Fetaji, M. (2009). e-Learning indicators: A multi-dimensional model for planning and evaluating e-learning software solutions. Electronic Journal of e-Learning, 7(2), 1-28. Retrieved from http://www.ejel.org/volume7/issue2

International Organization for Standardization. (2010). ISO Strategic Plan 2011-2015. Geneva, Switzerland.

International Organization for Standardization (ISO). (2011). Retrieved from http://www.iso.org/iso/search.htm?qt=iso+9126&searchSubmit=Search&sort=rel&type=simple&published=on

Lee-Post, A. (2009). e-Learning success model: An information systems perspective. Electronic Journal of e-Learning, 7(1), 61-70. Retrieved from http://www.ejel.org/volume7/issue1


Stewart, I., Hong, E., & Strudler, N. (2004). Development and validation of an instrument for student evaluation of the quality of web-based instruction. The American Journal of Distance Education, 18(3), 131-150. doi:10.1207/s15389286ajde1803_2

Tzeng, G., Chiang, C., & Li, C. (2007). Evaluating intertwined effects in e-learning programs: A novel hybrid MCDM model based on factor analysis and DEMATEL. Expert Systems with Applications, 32(4), 1028-1044. doi:10.1016/j.eswa.2006.02.004
