
Computers & Education 81 (2015) 102–112


A problem solving oriented intelligent tutoring system to improve students' acquisition of basic computer skills

Dongqing Wang a, Hou Han a, Zehui Zhan b, Jun Xu a, *, Quanbo Liu a, Guangjie Ren a
a College of Educational Information Technology, South China Normal University, Guangzhou 510631, China
b Center of Educational Information Technology, South China Normal University, Guangzhou 510631, China

Article info

Article history: Received 29 January 2014; Received in revised form 30 June 2014; Accepted 4 October 2014; Available online 15 October 2014

Keywords: Intelligent tutoring systems; Interactive learning environments; Applications in subject areas; Improving classroom teaching

Abstract

Personalization and intelligent tutoring are two key factors in research on learning environments. An intelligent tutoring system (ITS), which can imitate human teachers' actions to implement one-to-one personalized teaching to some extent, is an effective tool for training problem solving ability. This research first discusses the concepts and methods of designing a problem solving oriented ITS, and then develops iTutor based on an extended ITS model. Finally, the research adopts a quasi-experimental design to investigate the effectiveness of iTutor in skills acquisition. The results indicate that students in the iTutor group achieve better learning effectiveness than those in the control group; in particular, iTutor is found to be effective in improving the learning effectiveness of students with low-level prior knowledge.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

The Information and Communication Technology (ICT) course in Chinese universities aims at developing students' comprehensive ability in key computer applications and promoting their positive attitudes, creative thinking and operational skills. However, with large classes, lengthy pieces of work and practical constraints such as time and workload, providing effective feedback and meeting the individual needs of students is difficult for teachers (Buchanan, 2000; Wang, 2007). Bloom (1984) showed that the average student under private tutoring performed about two standard deviations above students taught with the traditional didactic approach, and that 98% of students could learn better under a private tutor. An intelligent tutoring system (ITS) can provide one-to-one individualized instruction by simulating the activities of human teachers. In our opinion, a teacher usually has to complete the following activities in the teaching process: (1) explain the core knowledge of a problem; (2) show how to solve the problem with specific knowledge; (3) provide solutions and worked examples of a problem; (4) give targeted feedback to students as they try to solve the problem; (5) recommend related activities based on students' cognitive state. The student model is the core element of an ITS; based on it, an ITS is able to select the most suitable teaching strategies, provide related examples according to the needs of students, and replace human teachers to some extent (Shi, Rodriguez, Shang, & Chen, 2002).
Currently, research on ITS support for problem solving and for learning by doing is far from sufficient. Interactive problem solving environments are still rare, especially general methods for constructing them, and interaction models need further investigation. The acquisition of basic computer skills differs from the learning of theoretical knowledge in that it cannot be obtained directly from others through passive or rote learning. Therefore, we must change traditional teaching methods and build an interactive problem solving environment that supports learning by doing, providing worked examples and personalized feedback.

* Corresponding author. Tel.: +86 13929521905; fax: +86 20 85214991.


E-mail addresses: wangdq.scnu@gmail.com (D. Wang), xujscnu@163.com (J. Xu).

http://dx.doi.org/10.1016/j.compedu.2014.10.003
0360-1315/© 2014 Elsevier Ltd. All rights reserved.

2. Related works

2.1. Problem solving and skills acquisition

Skill, as an advanced cognitive ability, can be understood as the ability to use concepts and rules to solve problems. It is difficult to develop through traditional teaching methods such as lectures and knowledge presentation (Hwang, Kuo, Chen, & Ho, 2014). Learners must practice and reinforce the process continuously to master a task. In teaching ICT, researchers gradually became aware of the importance of training operational skills and developed a variety of teaching-aid systems and simulation tools, such as RCOS (Chernich, Jamieson, & Jones, 1996) and SOsim (Maia, 2003), in order to promote students' understanding of abstract concepts in computer courses and correct students' misconceptions. Some simulation teaching systems, such as MINIX (Herder, Bos, Gras, Homburg, & Tanenbaum, 2006) and Nachos (Christopher, Procter, & Anderson, 1993), filtered out the complexity of real-life situations so that students could understand the most basic concepts and steps in a relatively simple context (Buendia & Cano, 2006). Web-based learning platforms, such as WebCT and BlackBoard, were also used to assist instruction, providing a wide range of learning resources. Based on these platforms and resources, students could learn the contents of each module on demand, watch video lessons, and review missed content. To some extent this supports resource-based learning and achieves better learning effects, but it still cannot support skill acquisition effectively.

2.2. Interactive feedback

Feedback is crucial in the process of problem solving. It is the return of information about the learning process according to particular predefined objectives (Gagne, 1985). Learning is promoted when students are guided in their problem solving by appropriate feedback and coaching (Merrill, 2002). Timely feedback and direct error-analytic guidance can help learners tackle the problem (Anderson, Corbett, Koedinger, & Pelletier, 1995), get to know the quality of their work (Moore & Kearsley, 1996), and see the current state of their skills and the gap between the current state and the desired state (Butler & Winne, 1995); on this basis learners can reflect on and adjust their ways of learning so as to achieve effective learning. The feedback for a learner consists not only of adaptive information about his errors and performance, but also of adaptive hints for the improvement of his solution (Lütticke, 2004). Well-structured instructional feedback, together with annotations added to worked examples, can promote effective learning (Lee & Hutchison, 1998).
Providing feedback and guidance for each step in the process of problem solving is significantly better and more interactive than giving worked examples only (Ashton, Beevers, Korabinski, & Youngson, 2006; Corbalan, Paas, & Cuypers, 2010). Chi, Siler, Jeong, Yamauchi, and Hausmann (2001) found that students who engaged in a more interactive style of human tutoring were able to transfer their knowledge better than students in a didactic style of tutoring. Results that support greater interaction have also been found in studies of intelligent tutoring systems (Person, 2003; VanLehn et al., 2005).

2.3. Prior computer experiences

Learners' prior knowledge is believed to be one of the most important factors affecting learning effectiveness (Dochy, 1994; Hailikari, Nevgi, & Lindblom-Ylänne, 2007). Dochy (1994) argued that domain-specific prior knowledge impacts learners' achievement, and prior knowledge facilitates skill acquisition (Posner & McLeod, 1982). Prior computer experience is an important predictor of performance on subsequent computer-based tasks (Kuo & Wu, 2013; Park, 2001). Furthermore, Charness, Kelley, Bosman, and Mottram (2001) found that breadth of experience with computer software was a strong positive predictor of learning a word-processing application.
Dochy, De Rijdt, and Dyck (2002) and Hailikari et al. (2007) argued that prior knowledge interacts with different phases of information processing. Learners lacking appropriate prior knowledge will have trouble learning new information and constructing new understandings (Ausubel, 2000). Therefore, prior knowledge can influence learners' achievement (Dochy, 1996; Hailikari et al., 2007; Tobias, 1994). Prior knowledge is also an important variable related to e-Learning effectiveness: learners with different levels of prior knowledge benefit differently from a given e-Learning environment (Smits, Boon, Sluijsmans, & Van Gog, 2008). Mitchell, Chen, and Macredie (2005) argued that learners with different levels of prior knowledge have different perceptions of the features of an e-Learning environment, which in turn affects their e-Learning effectiveness. Learners with a poor level of prior knowledge need much more guidance (Mayer, 2002). Worked examples (Clarke, Ayres, & Sweller, 2005) together with interactive feedback provide effective learning support for students at different levels.

3. iTutor: a problem solving oriented ITS

An effective way to acquire basic computer skills is to observe worked examples and then solve problems in context. This concept consists of two aspects: learning from examples and learning by doing. We designed and developed the iTutor system, a problem solving oriented ITS. It has two advantages: (1) it extends the traditional ITS model, emphasizing tracking of the problem solving process and evaluation of students' skill levels; (2) it builds a highly interactive problem solving environment in which students can learn basic computer skills by solving practical problems.

3.1. Extending the model of ITS

There are three parts in the traditional framework of ITS: the domain model, the learner model and the teaching model. The domain model represents the domain knowledge; the learner model is used to predict students' performance; the teaching model describes the teaching process for students. In order to enhance ITS support for problem solving, the traditional model needs to be extended (Akhras & Self, 2002). This paper presents an extended ITS for the design of iTutor, which extends the domain model to a problem solving situation model, the student model to an interaction and process model, and the teaching model to a feedback model, as shown in Fig. 1.

Fig. 1. The framework of extended ITS.

3.1.1. Problem solving situation model


The problem solving situation model, a description of the context and activities, is the activation phase designed into the learning environment for problem solving. The domain model is a modeling of knowledge, but for advanced cognitive abilities such as skills, learning objectives cannot be achieved only by representing knowledge. In fact, skill is the ability to use concepts and rules to solve problems, that is, the ability to apply declarative and procedural knowledge timely and accurately in order to solve problems that dynamically contain a number of activities and objectives. Problem solving skills need to be acquired from experience; advanced skills can be decomposed into a number of independent basic skills, and skill training can be seen as the mastery of these basic skills, that is, practice in the contexts where certain skills are needed. The constructivist view also points out that the learning of knowledge or skills cannot be separated from behavior: the learner's activities, together with his experience in context, constitute an integral part of the learner's experience. Context is also known as situation, and the importance of situation is reflected in research on problem-based learning environments. Barron et al. (1992) put forward the view of anchored situational teaching, which introduces problem solving situations into the teaching process with the help of multimedia technology, leading students to find problems and to solve them by means of a variety of materials and tools. Therefore, knowledge representation and modeling in ITS have gradually moved towards the problem solving situation model.

3.1.2. Interaction and process model


The interaction and process models are the core elements of the interactive problem solving environment. They are still based on a learner model, but place more emphasis on modeling the tracking of the process and the evaluation of problem solving skills.
The nature of learning is interactive. Learning by doing is an effective way of cultivating problem solving ability: it makes it possible for learners to follow the approach of real-world problem solving, carry out active learning and construct their own knowledge system, a process which is highly interactive. Greeno and van de Sande (2007) argued that conceptual understanding and conceptual growth are achievements of interaction. Therefore, in order to better understand the process of learning, we must comprehend the interaction between the context of the learning environment and the learner's cognitive structure. Based on this view, evaluating the process of knowledge construction is more important than evaluating the results, so the evaluation of learning focuses on process rather than results. The modeling of the learner's cognitive status should be targeted by analyzing the characteristics of interaction as they vary over time; combined with the sequences of interaction, this reflects the process of significant knowledge construction. The modeling of interaction and process is not divorced from the learner's cognitive state, because the occurrence of interaction depends on the individual's cognitive level. Therefore, the new interaction model is mainly constituted by three factors (the learner's cognitive structure, activities and context), and the process model extends the interaction model in time, as shown in Fig. 2.

3.1.3. Feedback model


The feedback model supports the serialization of situations and activities through the display of a variety of examples and instances. The traditional teaching model in ITS is actually based on a predefined domain structure and teaching strategies, showing the learner a sequence of teaching content based on an assessment of his cognitive state. But the validity of this teaching model is based on an assumption

Fig. 2. Interaction and process model.



that the process of teaching can in some way be forecast, which means drawing up the teaching program from a prior perception of the domain structure and the teaching sequence.
In the process of problem solving, learners are often confronted with changes to their previous knowledge of a field. They have to revisit prior knowledge repeatedly to gain a better understanding in new situations, which cannot be achieved through predefined teaching programs. The purpose of building a problem solving environment in an ITS is therefore not to determine a teaching sequence, but to provide an interaction space for students to solve the problem, including activities, context, worked examples and so on.

3.2. Designing the interactive problem solving environment

From the perspective of learning environment design, an ITS can build a problem solving environment by simulating human teachers' teaching activities. The learners interact with the environment, adjusting their learning behavior and activities in the process of problem solving until the learning goal is completed. Merrill (2002) split the process of problem solving into four distinct phases: activation of prior knowledge, demonstration of skills, application of skills, and integration of these skills into real-world activities. Based on these four phases, we designed the interactive problem solving environment; the specific content is described below.

3.2.1. Activation: create problem situation


The activation phase guides learners to recall, link or apply knowledge based on previous experience. Creating real-life situations for problem solving is one means of activating previous experience. iTutor creates problem solving environments for students to learn basic computer skills, including a real environment and a simulation environment. Most IT skills content, such as word processing, spreadsheets, presentation graphics and webpage making, is implemented in the real environment. The real problem solving environment is conducive to knowledge transfer: students assimilate and accommodate new knowledge and skills on the basis of their existing cognitive structure. However, in some cases a real environment poses many problems, bringing large security risks, increasing the difficulty of accessing information and raising the cost of implementation, so some skills training should be carried out in the simulation environment, for example setting the control panel in the Windows operating system: had the student operated improperly, the system might have malfunctioned or even crashed. The simulation environment is more suitable for this type of content.

3.2.2. Demonstration: learning from examples


The demonstration phase provides learners with problem solving information to help them understand the issues. Worked examples are an effective way to present information (Chi et al., 2001; Person, 2003) and can help novices acquire schemas (Sweller & Cooper, 1985). Kalyuga, Chandler, Tuovinen, and Sweller (2001) presented results showing that students who use worked examples show more understanding in problem solving practice. Van Merriënboer (1990) recommended that the first problem in a learning sequence should be a worked example that shows students the type of whole task they will learn to complete; this approach is called learning from examples. Learning from examples is considered an effective way of learning skills and is commonly used. iTutor presents the operation steps of the learning content; by observing them, students can find similar examples, sum up the rules of problem solving and then try to solve the problem. These examples can be used as resources in the ITS supply model, supporting the generation of the interaction sequence in the process model.

3.2.3. Application: learning by doing


The application phase is the core of designing a problem-oriented learning environment: the learner uses knowledge and skills to enhance the ability of learning. According to the theory of problem solving, problem solving starts from the initial state of the problem and searches the problem space until it approximates the target state (Xu & Liu, 2001). If learners deviate from the target state in the process of problem solving, the system diagnoses the process and provides learning results and feedback to help them re-understand the problem, adjust their solutions in a timely way and try to solve the problem again. This process is, in effect, learning by doing.
The biggest advantage of iTutor is that it supports learning by doing, so that students can attempt practical skills tasks, acquiring knowledge and developing skills in the process of problem solving. The construction of the interactive problem solving environment is the core of iTutor: through diagnosis and evaluation of the operation process and its results, the cause of an error can be pointed out, and learning advice and demonstrations of the related operations can be proposed. In a sense, iTutor implements one-to-one teaching. For learners it is equivalent to having a patient teacher nearby, which greatly reduces the instructor's experiment burden and helps students achieve better learning.
The difficulty of designing an interactive problem solving environment lies in providing real-time diagnostic evaluation and personalized learning support; the core technologies are the learner model and the interaction model. A cognitive model based on assessment of the skill acquisition process is a new kind of student model. This student model is built in three steps: measuring the task performance of students; capturing the contextual features when the measurement occurs; and evaluating the level of skill acquisition. It is the basis of the interaction and process models in iTutor, aiming to provide personalized learning support services for learners in the next step of learning activities.
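The three steps above can be sketched in code. The following is a minimal illustration only, not iTutor's actual implementation: the class names, fields and the simple recency-weighting scheme are assumptions made here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One measurement: performance on a task plus the context it occurred in."""
    skill: str       # e.g. "word.format_paragraph" (hypothetical skill id)
    score: float     # task performance, normalized to 0.0-1.0
    context: dict    # step 2: contextual features (task id, attempt number, ...)

@dataclass
class StudentModel:
    history: list = field(default_factory=list)

    def record(self, obs: Observation) -> None:
        # Steps 1 and 2: store the performance measurement with its context.
        self.history.append(obs)

    def skill_level(self, skill: str) -> float:
        # Step 3: evaluate the level of skill acquisition from the history.
        scores = [o.score for o in self.history if o.skill == skill]
        if not scores:
            return 0.0
        # Weight recent attempts more heavily (illustrative choice only).
        weights = range(1, len(scores) + 1)
        return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

model = StudentModel()
model.record(Observation("word.format_paragraph", 0.4, {"attempt": 1}))
model.record(Observation("word.format_paragraph", 0.8, {"attempt": 2}))
level = model.skill_level("word.format_paragraph")  # recency-weighted estimate
```

A model of this shape gives the process model a per-skill estimate to drive the next learning activity, which is the role the paper assigns to the student model.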

3.2.4. Integration: solve the problem


In the integration phase, the learner integrates the new knowledge or skills into real life and uses them to solve more complex problems. How to use the knowledge or skills to solve the problem is not told by teachers directly but chosen by students when they try to solve it in the real or simulated environment. While students attempt the problem, the interactive environment can provide a variety of interactive support, especially personalized learning support; real-time evaluation is the key to achieving interaction. The interactive problem solving environment is able to track the learner's operations, give real-time evaluation or diagnosis, and provide feedback. Providing timely feedback through dynamic tracking and diagnostic evaluation helps to stimulate and maintain students' motivation. Therefore, the interactive or task environment is the most important factor in solving the problem and in determining how the problem is solved.
Based on the framework of the extended ITS and the principles of interactive problem-solving environment design, we present the architecture of iTutor, as shown in Fig. 3.
The main architecture of iTutor is based on a client-server configuration. In this design, the user or client communicates with the server using a Web browser. The architecture has five important components, among which interactive process management and learner profile analysis are the core of the system. Through interactive assessment activities, the system records students' interaction data and tracks the whole process of interaction. Combined with the results of learner profile analysis in problem solving, the system evaluates learners' ability using computer assisted assessment techniques. According to the default rules defined in the analysis and decision module, the system provides teaching content and teaching methods for teachers, so as to give students personalized feedback for solving the problem.
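The "default rules" of the analysis and decision module can be pictured as a mapping from an evaluated skill level to a feedback action. The sketch below is a toy illustration under our own assumptions; the thresholds and action names are invented here and are not iTutor's actual rules.

```python
def select_feedback(skill_level: float) -> str:
    """Pick a personalized feedback action from a learner's evaluated
    skill level (0.0-1.0). Thresholds are illustrative assumptions."""
    if skill_level < 0.4:
        return "show_worked_example"    # novice: demonstrate the full solution
    if skill_level < 0.8:
        return "give_step_hint"         # partial mastery: hint at the failed step
    return "recommend_harder_task"      # mastery: advance to a more complex task

# Example: a learner evaluated at 0.55 receives a targeted step hint.
action = select_feedback(0.55)
```

In a real decision module such rules would also consult the interaction history and context captured by the student model, but the level-to-action mapping conveys the basic mechanism.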

3.3. User interface and application mode

There are two modules in iTutor that provide the worked examples and the real practicing tasks, namely learning from examples and learning by doing. In the learning from examples module, learners can choose worked examples by navigating the knowledge structure, as shown in Fig. 4.
In the learning by doing module, there are three ways for learners to choose practicing tasks: skills training, where tasks are listed by chapter; skills self-test, where tasks are presented as test papers generated automatically from knowledge points selected by the learner; and simulation papers, where tasks are also presented as test papers but with a relatively complete structure, including multiple-choice questions, true-or-false questions and operating questions. All tasks can be edited by teachers.
Fig. 5 shows the interface for using Word to typeset an essay, which contains two parts: the real problem situation and the interactive control panel.
When learners submit their solution to the current task in the interactive control panel, iTutor diagnoses and evaluates the completion of the problem automatically and then provides an evaluation report, as shown in Fig. 6. The report points out whether the learner solved the problem correctly and where he failed to do so, and provides worked examples presenting the operation steps.
To achieve competence in solving problems, learners undergo the following stages. First, they come to understand and learn the solutions to the problems by observing the demonstration; second, they examine, modify and improve their solutions by establishing connections between the solutions and the previous tasks of the problem; finally, they try to complete the task in the interactive problem-solving environment again. The whole process may be repeated several times, with the system keeping track of the operations, evaluating learners' skills and providing targeted feedback. In addition, thanks to the system's openness and scalability, learners can log in to iTutor through the Internet to access more skills-training tasks with the help of personalized learning support, such as online guidance and online evaluation.

Fig. 3. The architecture of iTutor.



Fig. 4. Interface of learning from examples in iTutor.

The self-regulated learning process of students with iTutor is shown in Fig. 7.


In order to promote students' self-regulated learning, iTutor on the one hand provides a large number of example resources, and on the other hand creates an interactive problem solving environment for learning by doing. Based on the diagnosis and real-time evaluation of the operation process and its results, iTutor points out the cause of an error and proposes learning suggestions and operation demonstrations; this continues throughout the problem solving process until students reach the goal. The process of observing, imitating and returning to try the problem again in a problem situation helps students transfer problem solving skills.

4. Methods

4.1. Participants

137 freshmen from four regular classes at South China Normal University and one teacher participated in the research. The teacher taught the Information and Communication Technology (ICT) course and was experienced with both iTutor and traditional web-based instruction. Most of the students had used computers before, but their skill levels varied. The four classes were randomly assigned to two groups, an experimental group and a control group. The teacher, the learning materials and the practicing time in the computer classroom at school were the same for the two groups, but the teaching methods were different: the experimental group practiced skills with iTutor, while the control group did not use iTutor but could access the same materials organized in the form of folders.

4.2. Instruments

4.2.1. Learning materials


The learning materials cover the basic skills of word processing, spreadsheets, presentation graphics and webpage making, together with some theoretical knowledge in the ICT course. The teacher taught the basic theoretical knowledge and the operating skills in lectures and assigned practicing tasks, such as preparing a resume, to be done in the practicing time. Additional materials were presented in iTutor and in the form of folders to provide the experimental group and the control group, respectively, with more learning content.

Fig. 5. Interface of learning by doing in iTutor.



Fig. 6. Assessment report in iTutor.

4.2.2. Prior knowledge assessment and summative assessment


The prior knowledge assessment tests students' level of computer operation skills. It contains 4 items (practicing tasks) worth 40 points in total. A two-way chart was developed to ensure that the design of these items was comprehensive and reasonable. The summative assessment is designed on the basis of the e-learning materials; items included in iTutor or in the folders do not appear in it. The pre-test of the summative assessment, representing students' entry level of IT skills, was the same as the prior knowledge assessment. At the end of the experiment, the post-test scores were taken to represent the learning outcome.
Both assessments were administered on iTAS, an intelligent assessment system designed to organize and administer the exams, whose core is based on iTutor. The specific management process is not discussed in detail here.

4.3. Research design and procedures

The research adopted a quasi-experimental design, dividing the four participating classes into two groups. The skills of the two groups did not differ significantly in the prior knowledge assessment (F(1, 135) = 1.111, p > 0.05). Over the six weeks, the teacher taught the basic theoretical knowledge and the operating skills in lectures and assigned practicing tasks, such as preparing a resume, to be done in the practicing time. Each week the learners needed to participate in a class lecture (100 min) and had one chance to practice in the computer classroom at school (120 min).
The aims, research design and teaching methods were first introduced to the participating teacher. Then all the students took the prior knowledge assessment. After six weeks of the experimental treatment, all the students took the post-test of the summative assessment.

4.4. Data collection and analysis

The quantitative data collected include the scores of the prior knowledge assessment and the post-test scores of the summative assessment. For reasons such as machine failure or time conflicts, five students across the experimental and control groups did not finish the summative assessment, leaving the last two or three questions blank. We eliminated these extreme values to avoid their influence on the subsequent analysis, so we collected 132 students' scores on the prior knowledge assessment and the post-test of the summative assessment.
All the data were analyzed with SPSS 17.0. Three types of data analysis were performed. First, all students were divided into high-, middle- and low-level prior knowledge groups according to their prior knowledge assessment scores: the high-level group comprised students with scores in the upper 33% of all scores, while the middle- and low-level groups represented the middle and lower thirds of scores respectively. Then a two-way ANOVA, taking the post-test scores of the summative assessment as the dependent variable and the teaching method and level of prior knowledge as fixed factors, was used to test the relationships between the post-test scores and the two factors.
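The tertile split described above can be sketched as follows. The scores are invented for illustration, and ties at the cut points are ignored for simplicity, since the paper does not state how ties were handled.

```python
def tertile_groups(scores):
    """Split scores into high/middle/low thirds, as in the study's grouping.
    Assumes no ties at the cut points (an assumption made here, not stated
    in the paper)."""
    order = sorted(scores, reverse=True)
    n = len(scores)
    high_cut = order[n // 3 - 1]      # lowest score still in the top third
    low_cut = order[2 * n // 3 - 1]   # lowest score still in the middle third
    groups = {"high": [], "middle": [], "low": []}
    for s in scores:
        if s >= high_cut:
            groups["high"].append(s)
        elif s >= low_cut:
            groups["middle"].append(s)
        else:
            groups["low"].append(s)
    return groups

scores = [35, 30, 28, 25, 22, 20, 18, 15, 12]  # invented pre-test scores
g = tertile_groups(scores)
```

With 132 real scores the same procedure yields groups close to the reported sizes (e.g. n = 46 for the high-level group in Table 1).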
Next, a one-way ANOVA, taking the post-test scores of the summative assessment as the dependent variable and the level of prior knowledge as the fixed factor (three levels), was used to test the relationship between the post-test scores and the prior knowledge level within each of the two teaching methods. The Least Significant Difference (LSD) post hoc test was also used to compare the learning effectiveness of students with different levels of prior knowledge in the iTutor group and in the control group.
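To make the mechanics of the one-way ANOVA concrete, the F statistic can be computed by hand: between-group variance over within-group variance, each divided by its degrees of freedom. The study used SPSS; the pure-Python sketch below, with invented scores, is only to illustrate what the test computes.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of score groups.
    F = (SS_between / (k - 1)) / (SS_within / (n - k))."""
    all_scores = [x for grp in groups for x in grp]
    n, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n
    means = [sum(grp) / len(grp) for grp in groups]
    # Variation of group means around the grand mean, weighted by group size.
    ss_between = sum(len(grp) * (m - grand_mean) ** 2
                     for grp, m in zip(groups, means))
    # Variation of individual scores around their own group mean.
    ss_within = sum((x - m) ** 2
                    for grp, m in zip(groups, means) for x in grp)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented post-test scores for three prior-knowledge levels.
f = one_way_anova_f([[80, 85, 90], [70, 75, 80], [60, 65, 70]])
```

A large F (compared against the F(k-1, n-k) distribution) indicates that the group means differ more than within-group noise would explain, which is what the significance tests in Section 5 report.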
Further, this research also used one-way ANOVA, taking the post-test scores of the summative assessment as the dependent variable,
and the different types of teaching methods as the fixed factor (two levels), to test the relationship between the teaching methods
factor and the post-test scores of the summative assessment of students with low-level, middle-level and high-level prior
knowledge. The LSD PostHoc test was also used to compare the learning effectiveness of students with low-level prior knowledge and
middle-level prior knowledge across the two different types of teaching methods. Moreover, the learning effectiveness of students with
high-level prior knowledge and that of students with middle-level prior knowledge under the two different types of teaching methods were also
compared.

Fig. 7. Self-regulated learning supported by iTutor.

5. Results

5.1. The inuence of different levels of prior knowledge and different types of teaching methods on student learning effectiveness

Firstly, all students were divided into three groups according to their scores on the prior knowledge assessment; please see Table 1.
Before the two-way ANOVA, the homogeneity of variance assumption was tested (F(5, 126) = 1.403, p > 0.05). The result indicated that the
homogeneity assumption was not violated. For the results of the two-way ANOVA, please see Table 2.
Table 2 shows that both the different types of teaching methods factor (F(1, 126) = 28.844, p < 0.01) and the different levels of prior knowledge
factor (F(2, 126) = 10.895, p < 0.01) have significant impacts on the post-test scores of the summative assessment. The results of the LSD
PostHoc test (Table 2) show that student learning effectiveness in the iTutor group is significantly better than in the control group (p < 0.01).

Table 1
Descriptive statistics for different prior knowledge groups.

Group^a                                          Mean scores of prior knowledge assessment   Std.
High-level prior knowledge group (n = 46)        34.02                                       0.643
Middle-level prior knowledge group (n = 43)      32.62                                       0.669
Low-level prior knowledge group (n = 43)         29.66                                       0.698

^a Because some students score the same, the three groups may not have the same number of people.

Table 2
Two-way ANOVA on different types of teaching methods and different levels of prior knowledge (n = 132).

Source                                     SS         df    MS        F Value    PostHoc^a
Different types of teaching methods (A)    548.319    1     548.319   28.844**   iTutor group > control group**
Different levels of prior knowledge (B)    414.227    2     207.114   10.895**   High-level prior knowledge > low-level prior knowledge**
                                                                                 Middle-level prior knowledge > low-level prior knowledge**
A × B                                      256.626    2     128.313   6.750**
Error                                      2395.261   126   19.010
Corrected total                            3377.295   131

**p < 0.01.
^a Adjustment for multiple comparisons: Least Significant Difference (equivalent to no adjustment).
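As a quick arithmetic check, each F value in Table 2 is the effect mean square divided by the error mean square, so the reported statistics can be reproduced directly from the table's sums of squares and degrees of freedom:

```python
# Reproduce the F values in Table 2 from the reported SS and df.
mse = 2395.261 / 126            # error mean square, about 19.010
f_methods = 548.319 / mse       # different types of teaching methods (A)
f_prior = 207.114 / mse         # different levels of prior knowledge (B)
f_interaction = 128.313 / mse   # A x B
print(round(f_methods, 3), round(f_prior, 3), round(f_interaction, 3))
```

This reproduces 28.844 and 10.895, and gives approximately 6.750 for the interaction term, matching the interaction F value used in the text of Section 5.1.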

This finding can be explained with reference to Merrill (2002), who pointed out that a problem solving oriented learning environment is
the most effective. Further, the results of the LSD PostHoc test also indicate that students with high-level and middle-level prior knowledge
have significantly better learning effectiveness than those with low-level prior knowledge (p < 0.01). As Spyridakis and Isakson (1991)
pointed out, prior knowledge can affect how learners associate new knowledge with what they already know.
In addition, Table 2 also shows that there is a significant interaction effect between the different types of teaching methods factor and
the different levels of prior knowledge factor (F(2, 126) = 6.750, p < 0.01). Therefore, one-way ANOVA was used for further analysis, as
discussed below.

5.2. Learning effectiveness of students with different levels of prior knowledge in different types of teaching methods

Before the one-way ANOVA, the homogeneity of variance assumption was tested (iTutor group: F(2, 72) = 3.013, p > 0.05; control group:
F(2, 54) = 0.350, p > 0.05). The results indicated that neither homogeneity assumption was violated. For the results of the one-way
ANOVA, please see Table 3.
With regard to the iTutor group, Table 3 shows that the different levels of prior knowledge factor has no significant impact on the post-
test scores of the summative assessment (F(2, 72) = 0.320, p > 0.05), meaning that in the iTutor group, students' level of prior knowledge is not
significantly related to their learning effectiveness. In the N-WBT group, however, the different levels of prior knowledge factor has a
significant impact on the post-test scores of the summative assessment (F(2, 54) = 14.586, p < 0.01), meaning that in the control group,
students' level of prior knowledge has a significant impact on their learning effectiveness. Furthermore, the results of the LSD PostHoc test
(Table 3) show that students with middle- and high-level prior knowledge have significantly better learning effectiveness than students
with low-level prior knowledge in the control group. However, there is no significant difference between the learning effectiveness of
students with middle-level and students with high-level prior knowledge (p > 0.05).
In addition, one-way ANOVA was also conducted to examine the learning effectiveness of students with different levels of prior
knowledge across the two different types of teaching methods. Before the one-way ANOVA, the homogeneity of variance assumption (high-
level prior knowledge group: F(1, 44) = 1.531, p > 0.05; middle-level prior knowledge group: F(1, 41) = 0.756, p > 0.05; low-level prior knowledge
group: F(1, 41) = 0.228, p > 0.05) was tested. The results indicated that none of the homogeneity assumptions was violated. For the results of the
one-way ANOVA, please see Table 4.
Table 4 shows that the different types of teaching methods factor has a significant impact on the post-test scores of the summative
assessment (middle-level prior knowledge group: F(1, 41) = 5.050, p < 0.05; low-level prior knowledge group: F(1, 41) = 33.830, p < 0.01). The
high-level prior knowledge students in the experimental group also performed better, but the difference was not statistically significant.
During learning, students may encounter different forms of cognitive issues (Bangert-Drowns, Kulik, Kulik, & Morgan, 1991). Detailed
feedback can promote deeper conceptual understanding and help students apply rules to a more complex task context (Winne, 1989).
Based on Tables 3 and 4, compared with materials stored in folders, iTutor enables learners with any level of prior knowledge
to experience more effective learning. Further, iTutor facilitates greater learning for learners with low-level prior knowledge,
raising their learning effectiveness to match that of learners with better prior knowledge. Learners with low-level prior knowledge
need more guidance and assistance, and in the iTutor group they received more feedback. This feedback
played the role of a teacher, guiding and instructing learners with low-level prior knowledge step by step. Hence, learners with different
levels of prior knowledge experience statistically equivalent learning effectiveness in the experimental group. There is no such
design in the control group, so learners with low-level prior knowledge perform significantly worse there.
Fig. 8 presents the results in an intuitive way.

Table 3
One-way ANOVA on two groups by different levels of prior knowledge.

Group^a                  Variable                 Level                          Mean (Std. Error)   F Value    PostHoc^b
iTutor group (n = 75)    Different levels of      High-level prior knowledge     34.696 (0.909)      0.320
                         prior knowledge          Middle-level prior knowledge   34.167 (0.890)
                                                  Low-level prior knowledge      33.714 (0.824)
Control group (n = 57)   Different levels of      High-level prior knowledge     33.348 (0.909)      14.568**   High-level prior knowledge > low-level prior knowledge**
                         prior knowledge          Middle-level prior knowledge   31.158 (1.000)                 Middle-level prior knowledge > low-level prior knowledge**
                                                  Low-level prior knowledge      25.600 (1.126)

**p < 0.01.
^a Because students in the experimental group (iTutor group) and the control group came from natural classes, the two groups may not have the same number of people. In the analysis we used the unweighted means.
^b Adjustment for multiple comparisons: Least Significant Difference (equivalent to no adjustment).

Table 4
One-way ANOVA on low-level, middle-level and high-level prior knowledge groups by different types of teaching methods.

Group                                   Variable              Level           Mean (Std. Error)   F Value    PostHoc^a
High-level prior knowledge (n = 46)     Different types of    iTutor group    34.696 (0.909)      1.099
                                        teaching methods      Control group   33.348 (0.909)
Middle-level prior knowledge (n = 43)   Different types of    iTutor group    34.167 (0.890)      5.050*     iTutor group > control group*
                                        teaching methods      Control group   31.158 (1.000)
Low-level prior knowledge (n = 43)      Different types of    iTutor group    33.714 (0.824)      33.830**   iTutor group > control group**
                                        teaching methods      Control group   25.600 (1.126)

*p < 0.05.
**p < 0.01.
^a Adjustment for multiple comparisons: Least Significant Difference (equivalent to no adjustment).

Fig. 8. The learning effectiveness of the two groups.

6. Discussion and conclusions

Mimicking human teachers to implement one-to-one personalized teaching is a hot but difficult topic in research on learning
environment design. Extending the traditional architecture of ITS and exploring new methods of modeling students' learning
processes and performance are two key issues, whose solution will contribute to the development of e-learning. In this paper, the
authors extended the traditional model of ITS, applying the concept of problem-oriented learning environment design to the ITS
architecture.
Then, iTutor was developed based on the extended ITS architecture to construct a real problem solving situation for students to practice
basic computer skills. It poses complex, real-world problems and provides just-in-time, personalized feedback and on-demand advice,
such as worked examples, to support students in solving the problems. Students are allowed to work at their own pace. iTutor can partially
replace the experimental work of the instructor, which has particular application value for distance education students and self-learners,
because they can get a real sense of individualized guidance. On the other hand, the application of iTutor frees teachers from the arduous
labor of guiding experiments, so that they can put their time and energy into more creative work.
ITS can respond to students' cognition (Del Solato & Du Boulay, 1995), and cognitive abilities are related to students' success in problem
solving (Yen, Rebok, Gallo, Jones, & Tennstedt, 2011). This paper was insufficient to verify iTutor's role in promoting students' abilities of
problem solving and innovation. So far, iTutor has mainly been used in the training and application of basic computer skills. The next step in
our research will be to apply iTutor on a larger scale to analyze whether it helps to promote students' advanced cognitive abilities, using the
large amount of data collected. Besides, the proposed system and method can be further extended to competence training courses, such as
computer program design, driver training, and physics or chemistry experiments. Related research is being carried out, as shown in Liang,
Liu, Xu, and Wang (2009).

Acknowledgments

This research has been partially funded by the Chinese National Education Examinations Authority Planning Project 2009KS2002 and the
Natural Science Foundation of China #61305144. The authors would like to thank all the students who participated in the evaluation studies,
as well as the rest of the iTutor research team, for their efforts and contributions to the ideas in this article.

References

Akhras, F. N., & Self, J. A. (2002). Beyond intelligent tutoring systems: situations, interactions, processes and affordances. Instructional Science, 30(1), 1–30.
Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: lessons learned. The Journal of the Learning Sciences, 4(2), 167–207.
Ashton, H. S., Beevers, C. E., Korabinski, A. A., & Youngson, M. A. (2006). Incorporating partial credit in computer-aided assessment of mathematics in secondary education. British Journal of Educational Technology, 37(1), 93–119.
Ausubel, D. P. (2000). The acquisition and retention of knowledge: A cognitive view. Springer.
Bangert-Drowns, R. L., Kulik, C.-L. C., Kulik, J. A., & Morgan, M. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61(2), 213–238.
Barron, L., Campbell, J., Bransford, O., Ferron, O., Goin, B., & Goldman, E. (1992). The Jasper project: an exploration of issues in learning and instructional design. Educational Technology Research and Development, 40(1), 65–80.
Bloom, B. S. (1984). The 2 sigma problem: the search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16.
Buchanan, T. (2000). The efficacy of a World-Wide Web mediated formative assessment. Journal of Computer Assisted Learning, 16(3), 193–200.
Buendia, F., & Cano, J. (2006). WebgeneOS: a generative and web-based learning architecture to teach operating systems in undergraduate courses. IEEE Transactions on Education, 49(4), 464–473.
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: a theoretical synthesis. Review of Educational Research, 65(3), 245–281.
Charness, N., Kelley, C. L., Bosman, E. A., & Mottram, M. (2001). Word-processing training and retraining: effects of adult age, experience, and interface. Psychology and Aging, 16(1), 110.
Chernich, R., Jamieson, B., & Jones, D. (1996). RCOS: yet another teaching operating system. In Proceedings of the 1st Australasian conference on computer science education (pp. 216–222). ACM.
Chi, M. T., Siler, S. A., Jeong, H., Yamauchi, T., & Hausmann, R. G. (2001). Learning from human tutoring. Cognitive Science, 25(4), 471–533.
Christopher, W. A., Procter, S. J., & Anderson, T. E. (1993). The Nachos instructional operating system. In Proceedings of the USENIX Winter 1993 conference (pp. 4–4). USENIX Association.
Clarke, T., Ayres, P., & Sweller, J. (2005). The impact of sequencing and prior knowledge on learning mathematics through spreadsheet applications. Educational Technology Research and Development, 53(3), 15–24.
Corbalan, G., Paas, F., & Cuypers, H. (2010). Computer-based feedback in linear algebra: effects on transfer performance and motivation. Computers & Education, 55(2), 692–703.
Del Solato, T., & Du Boulay, B. (1995). Implementation of motivational tactics in tutoring systems. Journal of Artificial Intelligence in Education, 6, 337–378.
Dochy, F. (1994). Prior knowledge and learning. In International encyclopedia of education (pp. 4698–4702).
Dochy, F. (1996). Assessment of domain-specific and domain-transcending prior knowledge: entry assessment and the use of profile analysis. In Alternatives in assessment of achievements, learning processes and prior knowledge (pp. 227–264). Springer.
Dochy, F., De Rijdt, C., & Dyck, W. (2002). Cognitive prerequisites and learning: how far have we progressed since Bloom? Implications for educational practice and teaching. Active Learning in Higher Education, 3(3), 265–284.
Gagne, R. M. (1985). The conditions of learning and theory of instruction. CBS College Publishing.
Greeno, J. G., & van de Sande, C. (2007). Perspectival understanding of conceptions and conceptual growth in interaction. Educational Psychologist, 42(1), 9–23.
Hailikari, T., Nevgi, A., & Lindblom-Ylänne, S. (2007). Exploring alternative ways of assessing prior knowledge, its components and their relation to student achievement: a mathematics based case study. Studies in Educational Evaluation, 33(3), 320–337.
Herder, J. N., Bos, H., Gras, B., Homburg, P., & Tanenbaum, A. S. (2006). MINIX 3: a highly reliable, self-repairing operating system. ACM SIGOPS Operating Systems Review, 40(3), 80–89.
Hwang, G.-J., Kuo, F.-R., Chen, N.-S., & Ho, H.-J. (2014). Effects of an integrated concept mapping and web-based problem-solving approach on students' learning achievements, perceptions and cognitive loads. Computers & Education, 71, 77–86.
Kalyuga, S., Chandler, P., Tuovinen, J., & Sweller, J. (2001). When problem solving is superior to studying worked examples. Journal of Educational Psychology, 93(3), 579–588.
Kuo, C.-Y., & Wu, H.-K. (2013). Toward an integrated model for designing assessment systems: an analysis of the current status of computer-based assessments in science. Computers & Education, 68, 388–403.
Lee, A. Y., & Hutchison, L. (1998). Improving learning from examples through reflection. Journal of Experimental Psychology: Applied, 4(3), 187.
Liang, Y., Liu, Q., Xu, J., & Wang, D. (2009). The recent development of automated programming assessment. In Computational intelligence and software engineering, 2009. CiSE 2009. International conference on (pp. 1–5). IEEE.
Lütticke, R. (2004). Problem solving with adaptive feedback. In Adaptive hypermedia and adaptive web-based systems (pp. 417–420). Springer.
Maia, L. (2003). SOsim: Simulator for operating systems education.
Mayer, R. E. (2002). Multimedia learning. Psychology of Learning and Motivation, 41, 85–139.
Merrill, M. D. (2002). First principles of instruction. Educational Technology Research and Development, 50(3), 43–59.
Mitchell, T. J., Chen, S. Y., & Macredie, R. D. (2005). Hypermedia learning and prior knowledge: domain expertise vs. system expertise. Journal of Computer Assisted Learning, 21(1), 53–64.
Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. Belmont, CA: Wadsworth.
Park, R. (2001). Examining age differences in performance of a complex information search and retrieval task. Psychology and Aging, 16(4), 564–579.
Person, N. K. (2003). AutoTutor improves deep learning of computer literacy: is it the dialog or the talking head? In Artificial intelligence in education: Shaping the future of learning through intelligent technologies (Vol. 97, p. 47).
Posner, M. I., & McLeod, P. (1982). Information processing models: in search of elementary operations. Annual Review of Psychology, 33(1), 477–514.
Shi, H., Rodriguez, O., Shang, Y., & Chen, S. (2002). Integrating adaptive and intelligent techniques into a web-based environment for active learning. Intelligent Systems: Technology and Applications, 4, 229–260.
Smits, M. H., Boon, J., Sluijsmans, D. M., & Van Gog, T. (2008). Content and timing of feedback in a web-based learning environment: effects on learning as a function of prior knowledge. Interactive Learning Environments, 16(2), 183–193.
Spyridakis, J. H., & Isakson, C. S. (1991). Hypertext: a new tool and its effect on audience comprehension. In Professional Communication Conference (IPCC), IEEE International (Vol. 1, pp. 37–44).
Sweller, J., & Cooper, G. A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 2(1), 59–89.
Tobias, S. (1994). Interest, prior knowledge, and learning. Review of Educational Research, 64(1), 37–54.
VanLehn, K., Graesser, A., Jackson, G. T., Jordan, P., Olney, A., & Rose, C. (2005). When is reading just as effective as one-on-one interactive human tutoring? In Proceedings of the 27th annual meeting of the cognitive science society (pp. 2259–2264).
Van Merriënboer, J. J. G. (1990). Strategies for programming instruction in high school: program completion vs. program generation. Journal of Educational Computing Research, 6(3), 265–285.
Wang, T.-H. (2007). What strategies are effective for formative assessment in an e-learning environment? Journal of Computer Assisted Learning, 23(3), 171–186.
Winne, P. H. (1989). Theories of instruction and of intelligence for designing artificially intelligent tutoring systems. Educational Psychologist, 24(3), 229–259.
Xu, J., & Liu, Q. (2001). IT skills automated testing and assessment: Theory, technologies and assessment. Science Press.
Yen, Y.-C., Rebok, G. W., Gallo, J. J., Jones, R. N., & Tennstedt, S. L. (2011). Depressive symptoms impair everyday problem-solving ability through cognitive abilities in late life. The American Journal of Geriatric Psychiatry: Official Journal of the American Association for Geriatric Psychiatry, 19(2), 142.
