
Management Information Systems
Fall 08

C.B.T. (Computer Based Training)
A project report by Supratik Bhattacharya

UDAI PAREEK HR LABS, EMPI BUSINESS SCHOOL, NEW DELHI - 110074

A Report On Computer Based Training


Submitted to: Mr. B. G. Gupta

Submitted by: Supratik Bhattacharya

UDAI PAREEK HR LABS, EMPI BUSINESS SCHOOL, NEW DELHI (2008-2010)

Contents

Overview
Introduction
Propositions for the design of CBL systems
In-house or outsource CBT development
Pedagogical approaches or perspectives
Description
Modify the business process
Standard output
Computer-Based Training and Assessments: An Exploratory Study of Social Factors
    Abstract
    Introduction
    The literature review
    The research model
    Research methodology
    Methodology
    Results and implications
    Conclusions
    References

Overview
Computer-based training (CBT), or electronic learning, is a type of technology-supported learning (TSL) in which the medium of instruction is computer technology. In some instances, no in-person interaction takes place. The term e-learning is used interchangeably in a wide variety of contexts. In computer-based training services, a student learns by executing special training programs on a computer relating to their occupation. CBT is especially effective for training people to use computer applications, because the CBT program can be integrated with the applications so that students can practice using the application as they learn. CBT nevertheless allows teachers to exercise their skills, although the skills needed are in some respects different from pre-CBT teaching. CBT is also used to describe a specific mode of attending a course or program of study in which the students rarely, if ever, come face-to-face for on-campus access to educational facilities, because they study online. In companies, it refers to the strategies that use the company network to deliver training courses to employees. Computer-based training or learning refers to the use of computers as a key component of the educational environment. While this can refer to the use of computers in a classroom, the term more broadly refers to a structured environment in which computers are used for teaching purposes. Computer-based training is highly effective: people get the information they need, when they need it, no matter where they are located, and they can study at their own pace on their own computer. But creating a computer-based training program presents a problem. The material to be taught has to be converted into a computer program, and in many cases the person who knows the material to be taught is not familiar with computer programming. That is where easy CBT authoring tools come in.

These authoring tools - Easy Tutor, Easy Test, Easy Quiz, Easy Book, and Easy Study - were written specifically to allow non-programmers to create computer-based training programs. They are user-friendly and can be learned in a very short time. E-learning services have evolved since computers were first used in education. There is a trend toward blended learning services, where computer-based activities are integrated with practical or classroom-based situations. Blended learning is defined as a planned teaching/learning experience that uses a wide spectrum of technologies, mainly Internet or computer-based, to reach learners. Lately, in most universities, e-learning is used to define a specific mode of attending a course or program of study in which the students rarely, if ever, come face-to-face for on-campus access to educational facilities, because they study online.
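To make the role of such authoring tools concrete, here is a rough sketch of the kind of artifact one might generate from a non-programmer's input: a self-contained quiz module with a grading routine. The questions, data layout, and function names are hypothetical illustrations, not taken from the Easy CBT products.

```python
# Hypothetical sketch of a quiz module that a CBT authoring tool
# might generate. The questions are illustrative placeholders.
QUIZ = [
    {"prompt": "CBT stands for Computer Based Training.",
     "choices": ["True", "False"], "answer": 0},
    {"prompt": "Which delivery medium does CBT primarily use?",
     "choices": ["Radio", "Computer", "Print"], "answer": 1},
]

def grade(responses):
    """Count correct responses; each response is an index into choices."""
    return sum(1 for item, r in zip(QUIZ, responses) if r == item["answer"])

def run():
    # A learner who answers both items correctly scores 2/2.
    score = grade([0, 1])
    print(f"Score: {score}/{len(QUIZ)}")

if __name__ == "__main__":
    run()
```

The point of the authoring tools described above is that an instructor supplies only the question text and answer keys; the surrounding program structure is produced for them.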

Introduction
As early as 1993, W. D. Graziadei described an online computer-delivered lecture, tutorial, and assessment project using electronic mail, two VAX Notes conferences, and Gopher/Lynx, together with several software programs that allowed students and instructor to create a Virtual Instructional Classroom Environment in Science (VICES) in Research, Education, Service & Teaching (REST). In 1997, Graziadei et al. published an article entitled "Building Asynchronous and Synchronous Teaching-Learning Environments: Exploring a Course/Classroom Management System Solution". They described a process at the State University of New York (SUNY) of evaluating products and developing an overall strategy for technology-based course development and management in teaching-learning. The product(s) had to be easy to use and maintain, portable, replicable, scalable, and immediately affordable, and they had to have a high probability of success with long-term cost-effectiveness. Today many technologies can be, and are, used in e-learning, from blogs to collaborative software, e-portfolios, and virtual classrooms. Most e-learning situations use combinations of these techniques. By 2006, nearly 3.5 million students were participating in on-line learning at institutions of higher education in the United States. Many for-profit institutions of higher education now offer on-line classes. By contrast, only about half of private, non-profit schools offer them. The Sloan report, based on a poll of academic leaders, says that students generally appear to be at least as satisfied with their on-line classes as they are with traditional ones. Private institutions may become more involved with on-line presentations as the cost of instituting such a system decreases. Properly trained staff must also be hired to work with students on-line; these staff members need to understand the content area and also be highly trained in the use of the computer and Internet.
Online education is rapidly increasing, and online doctoral programs have even developed at leading research universities.

Propositions for the design of CBL systems


In this section the training propositions derived from the aforementioned research efforts are described. The overall objective of this collective research was to develop and test cognitive-based principles for designing learner support tools for distributed training environments that would enable trainees to develop the knowledge necessary for operating complex systems. Within these efforts there were two main initiatives directly relevant to the instructional design characteristics of distributed CBL systems. The first initiative involves a basic-level cognitive investigation of ways in which knowledge can be integrated to facilitate knowledge structure organization and mental model development. The second initiative involves an investigation of individual differences and meta-cognitive processes within multimedia CBL programs. Furthermore, these investigations will provide methodologies for the ways in which multimedia CBL instruction can influence knowledge acquisition, and also show how this technology can best be used for assessment of training effectiveness. The following propositions for optimizing distributed learning effectiveness in complex CBL environments will be evaluated:

Proposition 1: a comparative study between the various CBL platforms.
Proposition 2: detect the factors that influence the success of CBL.
Proposition 3: findings from surveyed students will be matched with industry findings.
Proposition 4: based on these findings, hypotheses will be proposed.

Based on the hypotheses, a design model will be developed for interactive multimedia to effectively support e-learning in Bahrain. The research will also attempt to design and construct a prototype CBL environment to verify the suitability of the proposed design model for e-learning development in Bahrain. The anticipated outcomes of the investigation of the above propositions are as follows:

1. Improved understanding of the current issues facing e-learning in Bahrain, and the potential for the use of in-house CBL in Bahrain.
2. An effective model designed from the field study, bearing in mind the conventional CBL development methods used in Bahrain.
3. A new design model for the development of an effective interactive multimedia learning environment.

IN-HOUSE OR OUTSOURCE CBT DEVELOPMENT

Influencing factors: some of the factors that influence the decision to outsource CBT development are staff, money, time, and commitment.

Staff: existing or new hires; experience with CBT software; experience with the subject matter; experience with the targeted audience.
Money: what does it cost? Do you have the funds?
Time: does the developing staff have time? Do the developers have technical expertise? Do the technical people have time?
Commitment: is the group committed? Who is taking ownership?

Pedagogical approaches or perspectives

It is possible to use various pedagogical approaches for computer-based training, including:

Instructional design: the traditional pedagogy of instruction, which is curriculum focused and developed by a centralized educating group or a single teacher.
Social-constructivist: this pedagogy is particularly well afforded by the use of discussion forums, blogs, wikis, and on-line collaborative activities. It is a collaborative approach that opens educational content creation to a wider group, including the students themselves.
Laurillard's Conversational Model: also particularly relevant to e-learning; Gilly Salmon's Five-Stage Model is likewise a pedagogical approach to the use of discussion boards.

Cognitive perspective: focuses on the cognitive processes involved in learning as well as how the brain works.
Emotional perspective: focuses on the emotional aspects of learning, such as motivation, engagement, and fun.
Behavioral perspective: focuses on the skills and behavioral outcomes of the learning process, including role-playing and application to on-the-job settings.
Contextual perspective: focuses on the environmental and social aspects that can stimulate learning, including interaction with other people, collaborative discovery, and the importance of peer support as well as pressure.

Description

Training is a powerful tool that successful organizations use to respond to evolving missions, changing audiences, and the increasing need for a diverse workforce. Training is essential for organizations wanting to take advantage of shifting market circumstances or to extend their impact by creating community-based coalitions for action. Macro has decades of global training experience. Our highly trained trainers and facilitators have trained thousands of public and private employees to be more efficient and effective and to be leaders and team players. We have conducted trainings in all 50 states and U.S. territories and in 70 countries around the world. Macro emphasizes training that builds an organization's capacity to achieve its mission. As an example, Macro conducts demographic and health surveys for the U.S. Agency for International Development in support of better public health in developing nations. As part of this effort, we conduct on-the-job training for government and nongovernmental organizations to incorporate better research technologies and methodologies and to use their data to improve public information and education. When implementing surveys internationally, we often supply and install computers and other infrastructure and train users on software used to collect and analyze data and to control its quality. We don't just dispense training information; we make sure that the organization can understand it, can apply it, and can benefit fully from its use. We have developed hundreds of curricula specific to clients and projects. Our training curriculum can focus on train-the-trainer sessions or skills development of employees organization-wide. Macro uses results-oriented training that can begin at any stage: from a training needs assessment through post-training evaluation and reinforcement. We offer flexible training options, such as partial-day or multiday onsite training, or training at one of our facilities.

We offer customized computer-based trainings, including interactive CD-ROMs, online participation in distance-based learning, and video conferencing. Our multimedia specialists can tape training sessions as videos for sale or distribution. We can also tape sessions for live or streamed Web site use, both of which are highly cost-effective methods of expanding the number of people trained.


Macro also provides logistical support for training meetings. We track registration and billing online. We arrange for shipping of training materials, which we often print in-house. We develop and design agendas, meeting signage, badges, and other support materials. Macro can provide training wherever and in whatever form best meets your needs. Computer-based training is a highly effective method of maximizing workforce exposure to training while containing training costs. When training is computer-based, it can be made accessible to anyone with computer access, can accommodate diverse schedules, and does not require an onsite trainer, travel time, or room reservations. Macro has combined its training and technological expertise to develop computer-based training that fits the specific needs of many clients. We design self-paced training modules, interactive trainings, distance learning, and trainings linked to person-to-person technical support. We integrate training evaluations into each Web-based training module, providing instant feedback on training effectiveness. We can develop training that qualifies for continuing education units. Computer-based training is just one of many ways Macro can help your employees and your organization move forward. Total Training Solutions develops cutting-edge computer-based training (CBT) products to enhance and supplement your training initiatives. Our computer-delivered training includes CD-ROM and the World Wide Web. Computer-based training courses allow your employees to learn new concepts, applications, services, skills, and products as needed, at their own pace. CBT puts training at the fingertips of your employees, both at the workplace and at home. It provides interactive learning from remote work locations at the convenience of employees.

CBT lends itself to the concepts of self-paced learning: learning that occurs at a time the learner determines and at a pace established by the learner. We custom design CD-ROM delivered training that can be used as a stand-alone training course or as a supplement to an instructor-led course. In many cases, computer-based training can be used to teach the "basics" of a given course and then supplemented with hands-on, instructor-led classroom training that addresses specific learner needs.

This staged method of training provides your employees with a jump start on learning the basic concepts and procedures of a given course prior to the hands-on instruction to develop skill competency. CBT can help to reduce your training expenses by reducing your classroom training time, eliminating some hotel and travel expenses, and reducing the time employees are away from their job. By reducing your classroom training time, you can also shorten your overall implementation training time frame. Our Computer Based Training courses help your company build a confident work team that has gained the knowledge power to succeed in your business.


Modify the business process


The term e-learning 2.0 is used to refer to new ways of thinking about e-learning inspired by the emergence of Web 2.0. From an e-learning 2.0 perspective, conventional e-learning systems were based on instructional packets that were delivered to students using Internet technologies. The role of the student consisted of learning from the readings and preparing assignments, which were evaluated by the teacher. In contrast, the new e-learning places increased emphasis on social learning and the use of social software such as blogs, wikis, podcasts, and virtual worlds such as Second Life. This phenomenon has also been referred to as Long Tail Learning. The first 10 years of e-learning (e-learning 1.0) were focused on using the Internet to replicate the instructor-led experience. Content was designed to lead a learner through the material, providing a wide and ever-increasing set of interactions, experiences, assessments, and simulations. E-learning 2.0, by contrast (patterned after Web 2.0), is built around collaboration. E-learning 2.0 assumes that knowledge (as meaning and understanding) is socially constructed. Learning takes place through conversations about content and grounded interaction about problems and actions. Advocates of social learning claim that one of the best ways to learn something is to teach it to others.

The eTwinning portal offered by European Schoolnet is one of Europe's largest e-learning projects, comprising 50,000 registered teachers from across Europe. It is funded by the European Commission's Directorate General for Education and Culture, and has a network of 22 National Support Services, mostly operated by the national Ministries for Education in the EU. As another example, Second Life has recently become one of the virtual classroom environments used in colleges and universities, including the University of Edinburgh (UK), Princeton University (USA), Rice University (USA), the University of Derby (UK), Vassar College (USA), the University of Plymouth (UK), and the Open University (UK) [20]. In 2007 Second Life started to be used for foreign language tuition [21]. Both Second Life and real-life language educators have begun to use the virtual world for language tuition. English (as a foreign language) has gained a presence through several schools, including British Council projects which have focused on the Teen Grid. Germany's cultural institute, the Goethe-Institut, started an island in 2008 [22], and Spain's language and cultural institute, the Instituto Cervantes, has an island in Second Life. A list of educational projects (including some language schools) in Second Life can be found on the SimTeach site. SLanguages 2008 was the second annual conference on language education using virtual worlds such as Second Life; the event took place in Second Life at the EduNation islands. Additionally, Mobile Assisted Language Learning (MALL) is a term used to describe the use of handheld computers or cell phones to assist in language learning. There is also increased use of virtual classrooms (online presentations delivered live) as an online learning platform and classroom for a diverse set of education providers such as the Fox School of Business at Temple University, Grades Grow, Minnesota State Colleges and Universities, BenAstrum Center of Regulatory eTraining, and Sachem [23][24][25][26].
WebEx is a Cisco Web meetings and collaboration solution. The platform has worked for educational institutions because of real-time collaboration using an interactive whiteboard, chat, and VoIP technology that allows audio and video sharing. In distance learning situations, while replacing the classroom with these features, institutions have also looked for security features, which are inherently strong in a Cisco-powered collaboration environment. The downside is that WebEx is not a free platform like WiZiQ or Moodle, and fees are paid per 'host' of a classroom or a meeting.

Another real-time collaboration provider making headway is Web Train. The Technology in Business Schools Roundtable, a group representing over 70 Canadian and US business schools [27], announced a program for their AACSB-accredited members and affiliated colleges and universities to use Web Train for faculty meetings, student association meetings, virtual live classes, homework assistance, tutoring, teacher aid assistance, student technical support and remote control, lecture broadcasting, board meetings, virtual labs, and anonymous drug, rape, and depression counseling [28]. The announcement stated that TBS will assist member business schools and their associated colleges and universities with implementation planning and rollout assistance to help increase the success of e-learning initiatives. The announcement also stated that Web Train will provide its hosted services free to the business school and its associated college or university for a fiscal school year, to reduce the financial risk related to using an enterprise-class hosted service.

Standard output

Much effort has been put into the technical reuse of electronically based teaching materials, in particular creating or re-using learning objects. These are self-contained units that are properly tagged with keywords, or other metadata, and often stored in an XML file format. Creating a course requires putting together a sequence of learning objects. There are both proprietary and open, non-commercial and commercial, peer-reviewed repositories of learning objects, such as the Merlot repository. A common standard format for e-learning content is SCORM, whilst other specifications allow for the transporting of learning objects (Schools Interoperability Framework) or categorizing metadata (LOM). These standards are themselves early in the maturity process, the oldest being eight years old. They are also relatively vertical-specific: SIF is primarily pK-12; LOM is primarily corporate, military, and higher education; and SCORM is primarily military and corporate, with some higher education use. PESC, the Post-Secondary Education Standards Council, is also making headway in developing standards and learning objects for the higher education space, while SIF is beginning to seriously turn towards instructional and curriculum learning objects. In the US pK-12 space there is also a host of content standards that are critical; the NCES data standards are a prime example. Each state government's content standards and achievement benchmarks are critical metadata for linking e-learning objects in that space. An excellent example of e-learning that relates to knowledge management and reusability is Navy E-Learning, which is available to active-duty, retired, or disabled military members. This on-line tool provides certificate courses to enrich the user in various subjects related to military training and civilian skill sets. The e-learning system not only provides learning objectives, but also evaluates the progress of the student, and credit can be earned toward higher learning institutions. This reuse is an excellent example of knowledge retention and the cyclical process of knowledge transfer and use of data and records.
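To make the idea of a keyword-tagged, self-contained learning object concrete, the sketch below builds a simplified metadata record in XML using Python's standard library. The element names loosely echo the LOM style ("general", "title", "keyword"), but this is an illustration under that assumption, not a conformant SCORM or LOM document.

```python
import xml.etree.ElementTree as ET

# Build a simplified learning-object metadata record. Element names
# loosely echo LOM but this is illustrative, not a conformant file.
lo = ET.Element("learningObject", id="lo-001")
general = ET.SubElement(lo, "general")
ET.SubElement(general, "title").text = "Introduction to Spreadsheets"
for kw in ("spreadsheet", "formulas", "beginner"):
    ET.SubElement(general, "keyword").text = kw
ET.SubElement(lo, "format").text = "text/html"

xml_str = ET.tostring(lo, encoding="unicode")
print(xml_str)

# A "course" is then just an ordered sequence of such tagged objects
# (the ids here are hypothetical):
course = ["lo-001", "lo-014", "lo-022"]
```

Because each object carries its own metadata, a repository such as Merlot can index it by keyword, and a course assembler only needs the sequence of object identifiers.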

Computer-Based Training and Assessments: An Exploratory Study of Social Factors



Abstract
This paper introduces an exploratory research program on different types of hybrid classes to examine their efficacy and applicability for training and education. Our objective is to trace learning outcomes through their effect on the in-class and computer training phases of knowledge and skills acquisition and testing. The overall research question is: which, and how much, do CBT, individual student, class, instructor, and CBA factors affect student learning outcomes? Leidner and Jarvenpaa's (2001) work introduced a research model that helps instructors determine the best teaching method depending on course content, available technology, individual instructor factors, and student factors. Thirty-six questions were posed to over 400 students with direct and current experience using CBT and CBA for course credit. The findings show that there is strong potential for student as well as corporate benefits in training using online assessment tools. Online assessment effectiveness should be given further research study, given the explosive jump in reported learning.

1. Introduction
A quick scan of educational institutions and their programs shows they are increasingly turning to computer-based training (CBT) and computer-based assessment (CBA) tools, especially for entry-level courses like introductory computing or for administering computer literacy or proficiency exams. Electronic, online, or computer-based training can provide a number of advantages, such as time and place convenience for students and instructors, standardized delivery, self-paced learning, economies of scale in terms of classrooms and instructors, automated feedback to students and instructors, and a variety of available content (Strother, 2002). IT can assist an instructor in extending availability beyond class time and office hours, establishing links between classmates, and accomplishing administrative activities (Benbunan-Fich, 2002). One of the leading CBA/CBT providers, Course Technology, boasts on its web site how many millions of exams have been taken using their Skill Assessment Manager software since its inception in 1998. Additionally, data from CBA results can be used to conduct item analysis and strengthen course personalization, content, and delivery. There is a potential for computer-based teaching methods to improve classroom learning that remains untapped by the inability to use them effectively (Leidner and Jarvenpaa, 2001). And it is not just used in academia. In 1999, companies in the United States spent $62.5 billion on training and educating their employees, with more than $3 billion spent on technology-delivered training, estimated then to grow to $11 billion by 2003, with some retailers doing 20% of their training online (Khirallah, 2000). Companies are using it to screen job applicants, train employees, and test them on the training. Major vendors like IBM and Hewlett-Packard have established large CBT/CBA programs; IBM alone offers hundreds of subjects in 26 curriculum areas. But does it work well? There is a growing body of literature in and out of the academic community on its pedagogical efficiencies and effectiveness. One of the issues with CBT and CBA is using "hybrid" courses, combining traditional lecture pedagogy with computer-based technology to reap the best of both worlds. Moreover, many introductory university courses are conducted in large classes to more easily absorb enrollment variations, provide economies of scale in terms of classrooms, and more efficiently use the skills of a professor, especially during times of shrinking financial support.

Using CBT and CBA with large classes can reportedly improve personalization, content tailoring, and flexibility, and decrease administrative overhead. But what is the best mix of traditional and computer-based learning and assessment? What are the issues that need study and addressing? What are the factors that affect learning outcomes? How are they related? This paper introduces an exploratory research program on different types of hybrid classes to answer those and other questions around its efficacy and applicability for training and education. Our objective is to develop and perform an initial test of a new model designed to trace the influence of individual and technical characteristics on learning outcomes through their effect on in-class and computer training phases of knowledge and skills acquisition and testing. The overall research question is: which, and how much, do CBT, individual student, class, instructor, and CBA factors affect student learning outcomes?
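One concrete use of CBA data mentioned above is item analysis. The sketch below computes two standard statistics from a response matrix: item difficulty (proportion correct) and a crude discrimination index (upper-group minus lower-group difficulty). The response data here are invented for illustration; function names are not from any particular CBA product.

```python
# Item analysis on hypothetical CBA results: rows are students,
# columns are items; 1 = correct, 0 = incorrect.
RESPONSES = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]

def difficulty(responses):
    """Proportion of students answering each item correctly."""
    n = len(responses)
    return [sum(row[i] for row in responses) / n
            for i in range(len(responses[0]))]

def discrimination(responses):
    """Upper-half minus lower-half difficulty per item, splitting
    students by total score (a simple D index)."""
    ranked = sorted(responses, key=sum, reverse=True)
    half = len(ranked) // 2
    upper, lower = ranked[:half], ranked[half:]
    return [d_u - d_l
            for d_u, d_l in zip(difficulty(upper), difficulty(lower))]
```

Items with very low difficulty (almost no one correct) or near-zero discrimination are candidates for revision, which is the kind of course-content tailoring the CBA literature describes.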


2. Literature Review
Previous literature relevant to this study's research question falls into two major areas: using CBA and online tools in a hybrid course, and the relationships between various technical and individual characteristics and academic performance. Two types of classroom information technologies are reviewed here: using technology to improve student learning, and using technology to improve student performance evaluation. A hybrid course provides teachers and students with face-to-face lectures and technology-enabled interaction for explanations, small group discussions, presentations, and individual assistance. This instructional format has been found to have many advantages over traditional lectures (Christopher, 2002). First, interaction between the professor and the students is regulated by the professor and occurs one at a time; interaction via technology is controlled by the students and can occur in parallel. Second, students often receive an advance copy of the lecture slides, and some prefer to study "at home" rather than attend class; studying via technology can always be done at home. Third, lectures, even while attended, may not receive the student attention necessary for learning; training provided through technology may be more likely to keep student attention. Finally, people may learn more by doing than by watching and listening. At the same time, online training may be a viable alternative for those from rural areas and students with nontraditional schedules. Research has shown that the hybrid format can couple online homework with in-class, active learning exercises to improve attendance (Van Blerkom, 1992). Cameron (2003) used simulation in a hybrid course on networking and found that it improved conceptual understanding and raised performance. Willett (2002) proposed using online discussions as a good substitute for in-class discussion, and Haggerty, Schneberger, and Carr (2001) found that online discussions lead to better cognition due to the increase in available time to reflect and respond.



Cywood and Duckett (2003) discovered no significant differences between quantitative measures of online versus on-campus learning and suggest that there is no actual difference regarding learning. McGray (2000) demonstrated the potential of IT to enable an instructor to be more efficient and effective in broadening and deepening the learning process for business students in MIS. It has been shown that technology allows individuals to share tacit knowledge 22

in a manner uninhibited by the time and location (Leidner and Jarvenpaa, 1993). Another study, by Caywood and Duckett (2003), looked at the performance of students on campus and online during one specific course; the results showed no significant differences in learning across environments and concurred with the previous studies. But what are the factors involved in hybrid teaching? Bostrom, et. al., (1988) argued that individual differences are important for end-user training. Two studies in particular examined factors that influence computer training and skill gaining (Leidner, D. and Jarvenpaa, S. (2001); Yi and Davis, 2003). This exploratory study is aimed at identifying and testing specific variables that could predict learning outcomes for CBA. Leidner and Jarvenpaa (1995) described using technology to support an objectivist model of learning in hybrid courses by facilitating information delivery via a technology-enhanced instructor console and by using CBA. CBA allowed students to learn more effectively and efficiently because they were in control of the pace, time, and location. CBA feedback can be a critical part of learning; active involvement can lead to more effective learning than passive involvement. CBA enables instructors to collect, analyze, and use information about student learning as feedback to improve their teaching, and they enable students to demonstrate what they know (Ebert-May, Baltzli, and Lim, 2005). According to Riffell and Sibley (2003), surveyed students responded that the most effective way to learn material was through online homework and email with instructors. Ricketts and Wilks (2001) suggested that well-designed CBA can benefit students by improving their performances in assessments in the introduction of statistics in biology. Noyes, Garland, and Robbins (2004) studied paper-based and computer-based assessments, comparing the test performances of undergraduate students taking each test type. 
Given identical multiple-choice questions, students who used CBA achieved better results than those taking paper-based tests, and students with higher scores were found to benefit the most from CBA. Finally, CBA helped to improve long-term recall of key concepts and resulted in higher scores than conventional exams, and students with computer experience had no additional advantage over less experienced students (Bocij and Greasley, 1999). Many researchers have studied the relationships among student individual characteristics and academic performance. Previous academic history and grades, as well as propensity to study, are the most popular dimensions. Arias and Walker (2004) found a strong negative relationship between class size and student performance, measured as aggregate exam points, in economics teaching. The results suggested to them that student ethics and proximity to an instructor in small classes help students understand economic concepts better. They included several measures of student academic ability, i.e., SAT, SAT verbal, and SAT math scores, GPA, and demographic data (such as year of study, age, and gender) as explanatory variables, with class size as the control variable. A number of noncognitive individual dimensions not measured by academic outcomes relate to academic performance as well. Gender, family size, and income have been used as academic performance predictors. Availability of support systems and preference for long-term goals over short-term needs were proposed by Northcote (2002). On the other hand, external collaboration (i.e., cheating) on online assessments has been shown to be problematic, just as it is with traditional paper testing (Kozma, 2003). Compeau and Higgins (1995) concentrated on studying self-efficacy, the conviction that one can control his/her outcomes and do what is necessary to produce a certain result, and its importance in user acceptance and use of information technology. Learning styles defined through demographic variables were found to have an effect on teaching and learning processes (Bostrom, et al., 1990). Student major as a predictor was mentioned in McCray (2000). There is also literature on the effectiveness of technical support for computer-assisted learning. Bocij and Greasley (1999) concluded that students with computer experience had no additional advantage over less-experienced students. Another reported factor that affects academic performance and CBA is class size. Hancock (1996) found no significant difference in performance among students in three large and six small statistics classes. Tuckman (2002) compared academic performance and learning, in terms of GPA, in a hybrid course (189 students in two academic quarters) and a traditional course (74 students in two academic quarters), with the knowledge of a control group (189 students who did not take the course), using the same textbook, content, and performance activities. His results suggested that students taught with the combined classroom and computer-mediated model improved significantly more in academic performance than students taught the same material by the conventional classroom approach. Siegfried and Kennedy (1995), Durfee, et al. (2005), and Amoroso (2004) found no evidence to support that teaching strategy should depend on class size. Hill (1998) investigated the effect of large sections of 120 students on accounting students' performance and perceptions in introductory courses. Data was collected from student surveys, instructor and university records, and student course evaluations.
She used student interest in accounting, course organization and planning, instructor-student interaction, student course evaluations, GPA and SAT scores, attendance, age, academic hours earned, hours worked, hours studied, and course completion as independent variables; final exam percentage scores and overall course percentage points were the dependent variables. The study did not find statistically significant differences between student performance and section size. When attendance and university GPA were controlled, the large sections actually outperformed the small sections. In summary, previous research identifies many factors that may affect student outcomes in hybrid courses using CBT and CBA.

3. The Research Model


Leidner and Jarvenpaa (1993) proposed a research model that helps instructors determine the best teaching method depending on course content, available technology, the individual instructor, and student factors. They suggested that the balance of out-of-class versus in-class learning should depend on the chosen teaching method and the impact of out-of-class learning (i.e., computer-based training and assessments in our case) on in-class performance (i.e., paper exams). They used graduate students in a small class to investigate the proposed relationship and suggested that this experimental setting could be changed to reveal other interesting relationships. At the same time, Yi and Davis (2003) presented a conceptual framework of the relationships between modeling-based training interventions, pre-training individual differences, learning processes, and training outcomes. They tested the model with 95 students engaged in computer spreadsheet training. Based on this literature, our observations, and our desire to extrapolate from previous studies to larger undergraduate student bodies, we propose the overall research model shown in Figure 1.




Figure 1. Proposed Research Model


The model incorporates three main groups of factors leading to the dependent variable, the learning outcome. The first group consists of the characteristics of the technology and the characteristics of the individual. The technology characteristics include variables such as ease of use, understandability, ease of navigation, and Internet connection speed. The individual student characteristics include variables such as previous computer experience, current grade point average, and self-efficacy. The second group concerns the training itself, consisting of computer-based training and traditional classroom lectures. Variables for computer-based training include technical support, how often it is used, and where it is used (at home or in a school computer laboratory). Classroom lecture variables include the size of the class, the instructor, and when the class meets. It should also be noted that we believe the technology characteristics directly influence the computer-based training, while the individual characteristics affect both the computer-based training and the classroom lecture training. The third group of variables in the model concerns computer-based assessments, with variables such as help from other sources during the assessment, where the assessment is done, and technical support during the assessment. Finally, outcomes are generally measurable variables showing the results of individual training and assessment, such as the difference between pre- and post-tests.
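For readers who prefer code to prose, the factor groups above can be sketched as a small data structure. This is purely illustrative; every identifier name below is our own shorthand, not taken from the study's instruments.

```python
# Illustrative sketch (names are assumptions): the research model's factor
# groups, each listing the example variables mentioned in the text.
RESEARCH_MODEL = {
    "technology_characteristics": [
        "ease_of_use", "understandability", "easy_navigation",
        "internet_connection_speed",
    ],
    "individual_characteristics": [
        "prior_computer_experience", "gpa", "self_efficacy",
    ],
    "training": {
        "computer_based_training": [
            "technical_support", "usage_frequency", "usage_location",
        ],
        "classroom_lectures": ["class_size", "instructor", "meeting_time"],
    },
    "computer_based_assessments": [
        "outside_help", "assessment_location", "technical_support",
    ],
    "outcome": "post_test_minus_pre_test",
}

# Hypothesized influence paths described in the text:
INFLUENCES = [
    ("technology_characteristics", "computer_based_training"),
    ("individual_characteristics", "computer_based_training"),
    ("individual_characteristics", "classroom_lectures"),
    ("training", "computer_based_assessments"),
    ("computer_based_assessments", "outcome"),
]
```

Laying the model out this way makes the directionality explicit: technology characteristics feed only the computer-based training, while individual characteristics feed both training paths.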

4. Research Methodology
This paper describes the initial, exploratory study of a longer-range research program on computer-based training (CBT) and computer-based assessments (CBA). As such, we purposely chose a wide range of variables for the factors identified in our research model shown in Figure 1. The variables came from the aforementioned literature search, our own experiences using CBT and CBA, and reasoned postulation. Even though some variables appeared to overlap, they were retained for subsequent refinement as the program progresses; the key driver was to be inclusive, not exclusive, in variable selection. To be exploratory, our data collection methodology used the survey approach and direct performance measurement.


Thirty-six questions were posed to over 400 students with direct and current experience using CBT and CBA for course credit. Additionally, the subjective or perceptive survey data was matched with measured, objective course performance scores, and performance data from about 150 more students using the CBT/CBA software was added. The combination of the two collection approaches allowed us to search for relationships among the subjective data, among the objective data, and between the subjective and objective data. Moreover, the measured data gave us measured learning outcomes that could be used as dependent variables during data analysis. The survey instrument consisted of 36 questions on the respondent's demographics, computing experiences and skills, and opinions about the CBT/CBA software and its use (see Appendix 1 for a list of the survey questions). Some were open-ended, some checked perceptions against known data, and most asked for subjective answers on a Likert scale. The survey was offered online for respondent convenience (i.e., respondents did not have to come to class and could take it anywhere with Internet access, at any time of day) and for automated survey management and data compilation. Students volunteered to take the online survey but were given token credit toward their final course grade for doing so. The survey was available for two weeks, but students were not allowed to take it more than once. In opening the survey, students used a personal, unique, non-descriptive campus ID code, which could then be correlated with their course performance scores. When the survey was closed, the data was downloaded in spreadsheet format for input to statistical analysis programs.

The measured data consisted of performance scores on CBA exams and traditional paper exams. The CBA exams tested student skills performing specific tasks using Microsoft Office 2003 applications. The software presented a screen exactly duplicating the Office application with a canned document and asked students to perform an operation (e.g., globally replace all instances of "bought" with "purchased"). To reduce cheating, each CBA exam had a time limit to minimize the use of notes or textbooks, and the tasks were presented in random order to each student. Most importantly, the CBA software did not score solely the task result; it scored the steps taken to achieve the result and the order in which they were done. In other words, a student had to perform the right task steps in the right order; it was very difficult to "experiment" toward the task result. Realizing that some steps may be done in error (e.g., mouse-clicking twice instead of once), each student was given two opportunities to correctly perform each task. Each task was scored as right or wrong, and the results were automatically scored for reporting. There were six CBA exams: a comprehensive pre-test on the entire course, four exams on specific Office applications, and one comprehensive post-test. Each exam was open online for one 15-hour window to minimize conflicts with other courses, outside work, etc. Students could take the CBA exams in the school laboratories, or on their own or anyone else's computer provided the CBT/CBA software was loaded on the machine. The traditional paper exams were administered in class using paper exam booklets and optically readable scan sheets that were automatically scored, analyzed, and reported to the professor. Each paper exam tested a student's knowledge about Office 2003 applications; the combination of the paper exams and the computer-based exams measured student knowledge and performance, respectively. There were two paper-based exams, one on each half of the course.
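The scoring rules described above (right steps in the right order, at most two attempts, each task scored right or wrong) can be sketched as follows. This is a hedged illustration of the logic as described, not the actual CBA software's implementation; the function and task names are assumptions.

```python
# Sketch of the described CBA scoring rules (illustrative, not the real
# software): a task counts as right only if, within at most two attempts,
# the student performed the expected steps in the expected order.
def score_task(attempts, expected_steps):
    """attempts: list of step sequences the student tried, in order."""
    for attempt in attempts[:2]:          # only the first two tries count
        if attempt == expected_steps:     # right steps, in the right order
            return 1                      # task scored as right
    return 0                              # otherwise scored as wrong

def score_exam(student_attempts, answer_key):
    """Sum of right/wrong task scores, compiled automatically for reporting."""
    return sum(
        score_task(student_attempts.get(task, []), steps)
        for task, steps in answer_key.items()
    )
```

For example, a hypothetical "replace text" task whose second attempt matches the expected step sequence would score 1, while a match on a third attempt would still score 0 because only two opportunities are given.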
All students used the same CBT/CBA software linked to the same course textbook. Office applications were covered in class using traditional lectures supplemented by a large-screen projection system for lecture slides and a podium computer used to demonstrate application tasks on the large-screen display. Six course sections were small (<35 students), and four were large (>100 students). All sections were taught by two professors: one had five small sections one semester and then three large sections the following semester; the other had one large and one small section in the same semester. Both professors followed identical syllabi over both semesters; each taught the same topics in the same sequence using the same textbook, lecture slides, and exams. The students were university undergraduates of a wide range of ages and of all academic years from all schools across campus. It was a required course for some students, but not for all.


5. Methodology
The initial step in data analysis was to pair CBA performance scores with survey results. The data and the matches were not totally complete: not all students took all the CBA and paper exams, not all students took the survey, a few students gave random identification codes when taking the survey (and therefore could not be matched), and some simply skipped to the last survey question. But most analyses did not require a complete set of all variables from all students, especially with almost 500 students in the entire sample set (n=489). There were 226 students with complete, matched surveys. The following table shows the demographics of the students who completed the survey and were matched with performance scores.

Student Demographics (Matched Students)

Gender: Male: 145, Female: 81
Age: mean 19.91, range 18-32
Academic Year: Freshman: 49, Sophomore: 119, Junior: 37, Senior: 21
Grade Point Average: mean 2.95, range 1.50-4.00
Internet Connection Type: Dial-up: 7, Cable/DSL: 140, T1 or better: 58, Don't know: 21
Years of Experience on the Internet: mean 8.27, range 3-11
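The pairing step described above, joining survey responses to performance scores via each student's anonymous campus ID, with unmatched or garbled IDs dropping out, can be sketched roughly as below. The record layout is an assumption for illustration; the study used spreadsheet exports and statistical packages.

```python
# Minimal sketch (assumed record layout) of pairing survey responses with
# performance scores by anonymous campus ID, keeping only matched records.
def match_records(surveys, scores):
    """surveys, scores: lists of dicts, each carrying a 'campus_id' key.
    Returns merged records for IDs present in both data sets."""
    scores_by_id = {rec["campus_id"]: rec for rec in scores}
    matched = []
    for survey in surveys:
        perf = scores_by_id.get(survey["campus_id"])
        if perf is not None:              # random/garbled IDs drop out here
            matched.append({**survey, **perf})
    return matched
```

With this kind of inner join, a sample of 489 students can shrink to the 226 complete, matched records reported above without any manual reconciliation.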


The first set of exploratory data analyses used multivariate analysis on all the variables to find potential relationships that might warrant further analysis. The second set of exploratory data analyses used the general linear model. The univariate analysis was made with the difference between the CBA comprehensive pre-test and post-test (variable A1A6) as the dependent variable. The initial analysis included all the independent variables, to look for potential correlations and identify variables deserving closer examination. The next analysis focused on the independent variables class section, class size, instructor, written exam scores, SAT scores, GPA, the demographic data, and computing skills and experience. Then various groups and combinations of the independent variables were analyzed to focus on specific factors relative to the dependent variable, as suggested by previous literature and our research model. The independent variables were also examined with the paper exam scores as individual dependent variables.
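As one concrete illustration of the univariate step above, regressing the pre-/post-test difference (A1A6) on a single candidate predictor amounts to a simple least-squares fit. The sketch below is plain Python with toy data; the study itself used standard statistical analysis programs, and the variable names are ours.

```python
# Sketch of a univariate analysis: the dependent variable is the difference
# between the comprehensive CBA post-test and pre-test (A1A6), regressed on
# one candidate independent variable at a time. Toy data, illustrative names.
def simple_ols(x, y):
    """Return (slope, intercept) minimizing squared error of y ~ slope*x + b."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# A1A6 = post-test minus pre-test for each matched student (toy values):
pre = [40, 55, 60, 30]
post = [70, 80, 85, 50]
a1a6 = [p2 - p1 for p1, p2 in zip(pre, post)]
gpa = [3.5, 2.9, 3.8, 2.1]          # one candidate predictor
slope, intercept = simple_ols(gpa, a1a6)
```

Repeating the fit over each independent variable in turn, and then over groups of variables with a general linear model, reproduces the analysis sequence described in the text.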


6. Results and Implications


Some of the key results are shown in the tables below. Given the initial exploratory nature of the data analysis, not all the statistically significant correlative results are shown; only correlations with 1-tailed significance of .005 or less, i.e., the very strongest, are listed. Table 2 shows the correlations among the pre-test/post-test variable A1A6 and some student characteristics. The regression model had a significance of .003.

Table 2. Significant Variable Correlation Coefficients, First Group
(columns: A1A6, Sec, Inst, GPA, A1, Q3, Q4, Q7, Q9, Q10, Q11, Q12, Q13)
Sec: .238
Inst: .162
GPA: .256
Q3: .239
Q7: -.250
Q11: -.326, .222, .348, .586
Q12: -.347, .296, -.249, .398, .410, .772
Q13: -.291, .292, -.286, .418, .273, .554, .717
Q14: -.283, -.276, .194, -.309, .441, .467, .540, .677

Note: A1A6: difference between pre-test and post-test; Sec: class section; Inst: instructor; GPA: university cumulative grade point average; A1: the comprehensive CBA pre-test; Q3: gender; Q4: age; Q7: Internet connection type; Q9: level of Internet experience; Q10-Q14: computer skill levels I-V.

Significant Variable Correlation Coefficients, Second Group
(columns: Q18, Q19, Q21, Q22, Q23, Q24, Q25, Q26, Q27, Q28, Q32, Q33, Q34)
Q23: .245, .245
Q24: .253
Q25: .255, .243, .287, .276
Q26: .270
Q27: .286, .306
Q28: .231, .267
Q32: .226, .227
Q33: .250, .252, .350, .258
Q34: .226
Q35: .287

Note: Q18: CBT/CBA software is easy to use; Q19: CBT software helps prepare for CBA; Q21: CBT software reduced time to learn Office; Q22: CBT improved ability to use Office; Q23: using technical support often; Q24: technical support is timely and effective; Q25: technical support is accessible and knowledgeable; Q26: work hard in course; Q27: certain can master the course; Q28: receiving help during CBA; Q32: CBT prepares me well for CBA; Q33: use CBT often; Q34: often discuss course with friends and family; Q35: often receive emotional support from others.

CBA performance improvement (A1A6) is correlated with class section and instructor. This suggests that even though students performed computer-based skills training and took computer-based skills assessments, the aggregate group of students and the individual instructor teaching them with traditional lectures still had a significant effect on their computer-based assessments. Follow-on research comparing this data with students who relied solely on CBT and CBA, without any classroom lecture sessions, might clarify the importance of combining traditional lectures with CBT/CBA. Further research could also investigate whether the section and instructor benefits are due to cognitive, social, procedural, or explanatory factors (Haggerty, Schneberger, and Carr, 2001).

We found that the higher the cumulative GPA, the higher the CBA improvement (A1A6). This implies that high-performing students in general also perform well in CBA, not just in classroom environments, and suggests that at least some of the beneficial learning skills used in traditional classroom settings can be used beneficially in computer-based settings. We also found that the higher the cumulative GPA, the less students initially knew about computers and the Internet. The implications could be numerous and beg further study.
Is academic performance degraded by high levels of student attention to computers? Are computer activities such as gaming and Internet surfing supplanting attention to coursework? Do less computer-savvy students realize their shortfall and work harder on CBT and CBA as a result? Are high-performing students less inclined toward or interested in computers, and vice versa? One implication for CBT and CBA, however, is that high-performing students may need special attention in terms of basic computer skills, but their ultimate



CBA performance will not suffer as a result of low computer skills (see paragraph 2 immediately above). Females had higher GPAs than males, and males were initially more computer savvy. This follows on the heels of paragraph 3 immediately above. The more important implication for CBA, however, is in the links with paragraphs 2 and 3 above: gender may be a factor in how much a student improves, but not directly because of gender

per se, but because of the gender imbalance in initial computer knowledge.

The higher the initial student computer skills, the higher the initial skills assessment scores. While this may seem inescapable, the two skill sets are not necessarily identical: one involves skills in basic computer operations, while the other concerns specific Microsoft Office application skills. This positive correlation implies that measuring one may be a useful indicator of the other, especially given the widespread use of Microsoft Office applications.

The higher the initial student computer skills, the less students improved overall in computer-based assessments. While this may seem trivial, since students who start out with higher skills assessment scores have less room to improve, the data also showed that students with higher initial computer skills did not score commensurately higher in the post-test computer-based assessment. This also begs further investigation. Do computer-savvy students ease off in their efforts during the course because they do not feel they need to study and practice, while students with less computer knowledge work harder to compensate for their sense of computer inadequacy?

The more accessible technical support was during computer-based training and assessment, the easier students found CBA to use, the more they took advantage of computer-based training, and the greater their perceived benefit from CBA. The happier a student was about technical support, the greater the student's belief in being able to master the CBT/CBA course. These relationships imply that technical support plays an important role in successful CBT and CBA. Computer-based training and assessments require well-functioning computers and software. When there are problems, especially for students with lower initial computer knowledge and skills, technical support can make or break a CBT or CBA session. These relationships suggest that good technical support may be essential to good CBT and CBA.
The higher the perceived benefits of CBT and CBA, the more often they were used. This may also appear insignificant, but the implication for CBT/CBA is that the more the potential benefits are explained and understood, the more they can be realized through increased utilization.

The more other students and friends helped, the less time it took to learn CBA and the CBA material. This suggests that social networking is important to CBA, just as the class section and instructor are, and implies that face-to-face interaction with others can improve individual interaction with computer-based training and assessment software. While isolated CBT and CBA may be the most convenient for students, it may not be optimal for learning efficiency.

The higher students' perceptions of how hard they worked, the more they used CB training, and the more they discussed the course with friends and family. Causal relationships are not readily evident here and suggest further areas of study. But these correlations again suggest that hard work and networking with friends and family may have a role in CBT/CBA effects. If so, this suggests the importance of a holistic approach to CBT/CBA courses, well beyond just the underlying technology.
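The screening rule used throughout these results, reporting only correlations whose 1-tailed significance is .005 or less, can be sketched as follows. The p-value here uses a normal approximation to the t distribution, which is reasonable for samples in the hundreds as in this study; a statistics package would use the exact t distribution, and the function names are our own.

```python
import math

# Sketch of the significance screen: Pearson r plus an approximate one-tailed
# p-value, used to keep only the strongest correlations (p <= .005).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def one_tailed_p(r, n):
    """Approximate one-tailed p for correlation r on n paired observations.
    Uses the normal tail for the t statistic (good for large n)."""
    t = abs(r) * math.sqrt((n - 2) / (1 - r * r))
    return 0.5 * math.erfc(t / math.sqrt(2))

def significant(r, n, alpha=0.005):
    return one_tailed_p(r, n) <= alpha
```

For example, with the 226 matched students, even the smallest coefficient reported in Table 2 (e.g., .238 for class section) passes this screen comfortably, while a weak correlation such as .05 would not.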


Appendix 1. Survey Questions

Demographics
1. English is my primary language (1=Yes, 2=No)
2. My SAM username is ____________
3. My gender (1=Male, 2=Female)
4. My age is __________
5. My academic year (1=Freshman, 2=Sophomore, 3=Junior, 4=Senior)
6. My current GPA: __________
7. My Internet connection type (1=Dial-up, 2=Cable/DSL, 3=T1 or better, 4=Don't know)
8. Number of years using the Internet (<1, 1, 2, 3, 4, 5, 6, 7, 8, 9, >9)
9. Experience with Internet (1=No, 2=Little, 3=Some, 4=Much, 5=Extensive)

Computing Skills
10. Basic skills like typing a document, etc. (1=strongly disagree, 2=slightly disagree, 3=indifferent, 4=slightly agree, 5=strongly agree)
11. Install programs, etc. (same 5pt. scale)
12. Set up virus checkers, etc. (same 5pt. scale)
13. Install networks, etc. (same 5pt. scale)
14. Install new hardware (same 5pt. scale)

SAM Expertise
15. Learning SAM was easy (same 5pt. scale)
16. Navigating and accomplishing SAM tasks is easy (same 5pt. scale)
17. What SAM tells me is clear and understandable (same 5pt. scale)
18. Overall, I find SAM easy to use (same 5pt. scale)
19. SAM helps me prepare for assessments (same 5pt. scale)
20. SAM easily trains me on MS Office basic functions (same 5pt. scale)
21. SAM decreased the time to learn MS Office functions (same 5pt. scale)
22. SAM improved my ability to use MS Office (same 5pt. scale)
23. I use SAM technical support often (same 5pt. scale)
24. SAM tech support helps me well and timely with SAM problems (same 5pt. scale)
25. SAM tech support is very accessible and knowledgeable (same 5pt. scale)
28. I receive help from other students while doing assessments (same 5pt. scale)
29. I receive help from SAM/IT tech support while doing assessments (same 5pt. scale)
30. I receive help from my instructor while doing assessments (same 5pt. scale)
31. SAM training reflects what is covered in assessments (same 5pt. scale)
32. SAM training prepares me well for assessments (same 5pt. scale)
33. I use SAM training often (same 5pt. scale)
36. I do assessments (1=on my own computer, 2=in the lab, 3=at a friend's house, 4=elsewhere)

Self-Efficacy
26. I work very hard and persistently in CIS1025 (same 5pt. scale)
27. I am certain I can master the skills in CIS1025 (same 5pt. scale)
34. I often discuss CIS1025 content with friends/family/etc. (same 5pt. scale)
35. I often receive general emotional support from others (same 5pt. scale)




7. Conclusions
Organizations that use computer-based training and assessment tools can potentially reap significant rewards in improved employee knowledge and skill levels. As stated earlier, this is the initial, exploratory stage in a series of studies on the efficacy of computer-based training and assessment. The findings in this study may be limited to some degree to students at one school in one American region, although the mix of students at this one university appears good for generalization. The motivational factors of students, however, may not correspond with corporate employee motivations; corporate CBT and CBA users need to be studied as well. Given the convenience of CBT, CBA, and online surveys, which are available on any computer with the right software and Internet access, the students who took them were not all in controlled physical environments. It cannot be said with certainty that the person taking the CBT, CBA, or survey was the actual person supposed to be doing the work, or did not receive undue help from others while online. But assuming most students would not engage in this deception, we believe the large number of students involved mitigates this effect in the results. There appear to be many opportunities to extend this preliminary research: looking at all correlations with 1-tailed significance less than .05 rather than .005, refining the variables, adding new variables, exploring the cumulative effects of the variables in the proposed research model in Figure 1, and expanding the data analysis efforts. Searching for causal relationships in the model may be particularly beneficial to CBT/CBA developers, educators, and users. The role technical support plays may be of particular interest, including its educational value alongside textbooks, lectures, and computer-based training.


References
Amoroso, D. (2004), Use of Online Assessment Tools to Enhance Student Performance in Large Classes. Proceedings of ISECON 2004, 21: 8.
Arias, J. J. and Walker, D. M. (2004), Additional Evidence on the Relationship between Class Size and Student Performance. Journal of Economic Education 35(4): 311-326.
Benbunan-Fich, R. and Hiltz, S. (2002), Correlates of Effectiveness of Learning Networks. Proceedings of the 35th HICSS.
Bocij, P. and Greasley, A. (1999), Can computer-based testing achieve quality and efficiency in assessment? International Journal of Educational Technology, Vol. 1, No. 1.
Booms, B. H. and Kaltreier, D. L. (1974), Computer-Aided Instruction for Large Elementary Courses. Economic Education 64(2): 408-413.
Bostrom, R., et al. (1990), The Importance of Learning Style in End-User Training. MIS Quarterly, March 1990, pp. 101-120.
Cameron, B. (2003), The effectiveness of simulation in a hybrid and online networking course. TechTrends, Vol. 47, No. 5, pp. 18-21.
Caywood, K. and Duckett, J. (2003), Online vs. on-campus learning in teacher education. Teacher Education and Special Education, Vol. 26, No. 2, pp. 98-105.
Christopher, D. (2002), Interactive Large Lecture Classes and the Dynamics of Teacher/Student Interaction. Journal of Instruction Delivery Systems 17(1): 13-18.
Compeau, D. and Higgins, C. (1995), Computer Self-Efficacy: Development of a Measure and Initial Test. MIS Quarterly, Vol. 19, No. 2, pp. 189-211.
Durfee, A., Schneberger, S., and Amoroso, D. (2005), Computer-Based Assessments of Student Performance in Hybrid Classes: Does Class Size Really Matter? Americas Conference on Information Systems (AMCIS) 2005.
Driscoll, M. (2001), Building Better E-Assessments. ASTD's Source for E-Learning: Learning Circuits, http://www.learningcircuits.org/2001/jun2001/driscoll.html.
Ebert-May, D., Batzli, J., and Lim, H. (2003), Disciplinary research strategies for assessment of learning. Bioscience, Vol. 53, No. 12, pp. 1221-1228.
Haggerty, N., Schneberger, S., and Carr, P. (2001), Exploring Media Influences on Individual Learning: Implications for Organizational Learning. Proceedings of the 22nd Annual International Conference on Information Systems (ICIS), New Orleans, LA, December 16-19, 2001.

Hill, M. (1998), Class Size and Student Performance in Introductory Accounting Courses: Further Evidence. Issues in Accounting Education 13(1): 47-64.
Hogan, D. and Kwiatkowski, R. (1998), Emotional Aspects of Large Group Teaching. Human Relations 51(11): 1403-1417.
Khirallah, D. (2000), A new way to learn? InformationWeek, May 22, 2000.
Kozma, R. (2003), Technology and Classroom Practice: An International Study. Journal of Research on Technology in Education, Vol. 31, No. 1, p. 13.
Leidner, D. and Jarvenpaa, S. (1995), The Use of Information Technology to Enhance Management School Education: A Theoretical View. MIS Quarterly, pp. 265-292.
Leidner, D. and Jarvenpaa, S. (1993), The Information Age Confronts Education: Case Studies on Electronic Classrooms. Information Systems Research 4(1): 24-54.
Mason, P., Problems in the handling of large sections in accounting by the lecture method. The Accounting Review: 179-182.
McCray, C. (2000), The hybrid course: merging on-line instruction and the traditional classroom. Information Technology and Management, Vol. 1, No. 4, pp. 307-327.
Northcote, M. (2002), Online assessment: friend, foe, or fix? British Journal of Educational Technology, Vol. 33, No. 5, pp. 623-625.
Noyes, J., Garland, K., and Robbins, L. (2004), Paper-based versus computer-based assessment: is workload another test mode effect? British Journal of Educational Technology, Vol. 35, No. 1, pp. 111-113.
Ricketts, C. and Wilks, S. J. (2002), Improving Student Performance Through Computer-based Assessment: Insights From Recent Research. Assessment & Evaluation in Higher Education, Vol. 27, No. 5, pp. 476-479.
Riffell, S. K. and Sibley, D. H. (2003), Learning Online: Student Perceptions of a Hybrid Learning Format. Journal of College Science Teaching, Vol. 32, No. 6.
Siegfried, J. and Kennedy, P. (1995), Does Pedagogy Vary with Class Size in Introductory Economics? AEA Papers and Proceedings 85(2): 347-351.
Strother, J. (2002), An Assessment of the Effectiveness of e-Learning in Corporate Training Programs. International Review of Research in Open and Distance Learning, Vol. 3, No. 1, April 2002.
Tuckman, B. W. (2002), Evaluating ADAPT: a hybrid instructional model combining Web-based and classroom components. Computers & Education, Vol. 39, No. 3, pp. 261-269.
Van Blerkom, M. L. (1992), Class attendance in undergraduate courses. Journal of Psychology, 126, 487-494.

Willett, H. (2002), Not one or the other but both: hybrid course delivery using WebCT. The Electronic Library, Vol. 20, No. 5, pp. 413-419.
Yi, M. and Davis, F. (2003), Developing and Validating an Observational Learning Model of Computer Software Training and Skill Acquisition. Information Systems Research, Vol. 14, No. 2, June 2003, pp. 146-169.

