Article history: Received 29 January 2014; received in revised form 30 June 2014; accepted 4 October 2014; available online 15 October 2014.

Keywords: Intelligent tutoring systems; Interactive learning environments; Applications in subject areas; Improving classroom teaching

Abstract: Personalization and intelligent tutoring are two key factors in research on learning environments. An intelligent tutoring system (ITS), which can imitate human teachers' actions to implement one-to-one personalized teaching to some extent, is an effective tool for training problem-solving ability. This research first discusses the concepts and methods of designing a problem-solving-oriented ITS, and then develops iTutor based on an extended ITS model. Finally, the research adopts a quasi-experimental design to investigate the effectiveness of iTutor in skills acquisition. The results indicate that students in the iTutor group experience better learning effectiveness than those in the control group; in particular, iTutor is found to be effective in improving the learning effectiveness of students with low-level prior knowledge.

© 2014 Elsevier Ltd. All rights reserved.
1. Introduction
The Information and Communication Technology (ICT) course in Chinese higher education aims at developing students' comprehensive ability in key computer applications and promoting their positive attitudes, creative thinking and operational skills. However, with a large number of students per class, lengthy pieces of work, and practical constraints such as time and workload, it is difficult for teachers to provide effective feedback and meet the individual needs of students (Buchanan, 2000; Wang, 2007). Bloom (1984) showed that the average student under private tutoring scored about two standard deviations above students taught with a traditional didactic approach, and that 98% of students could learn better under a private tutor. An intelligent tutoring system (ITS) can provide one-to-one individualized instruction by simulating the activities of human teachers. In our opinion, a teacher usually has to complete the following activities in the teaching process: (1) explain the core knowledge of a problem; (2) show how to solve the problem with specific knowledge; (3) provide solutions and worked examples of a problem; (4) give targeted feedback to students as they try to solve the problem; (5) recommend related activities based on students' cognitive state. The student model is the core element of an ITS; based on it, an ITS is able to select the most suitable teaching strategies, provide related examples according to the needs of students, and replace human teachers to some extent (Shi, Rodriguez, Shang, & Chen, 2002).
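As a minimal sketch of the idea that a student model drives strategy selection, the mastery-to-activity mapping below is our own illustration (the function name, threshold values and activity labels are hypothetical, not taken from the paper):

```python
# Hypothetical sketch: map each skill's estimated mastery (0..1) to one of
# the teaching activities listed above. Thresholds are illustrative only.
def recommend_activity(mastery: dict[str, float], threshold: float = 0.6) -> dict[str, str]:
    """Low mastery -> explanation plus worked example; medium -> guided
    practice with targeted feedback; high -> recommend extension activity."""
    plan = {}
    for skill, score in mastery.items():
        if score < threshold:
            plan[skill] = "worked example + explanation"
        elif score < 0.85:
            plan[skill] = "guided practice with feedback"
        else:
            plan[skill] = "extension activity"
    return plan
```

A real ITS would of course estimate mastery from observed behavior rather than take it as given; the point here is only the rule-based mapping from the student model to a teaching strategy.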
Currently, research on ITS falls short in its support for problem solving and for learning by doing. Interactive problem-solving environments are still rare, especially general methods for constructing them, and interaction models need further investigation. Acquiring basic computer skills differs from learning theoretical knowledge: skills cannot be obtained directly from others through passive or rote learning. Therefore, we must change traditional teaching methods and build an interactive problem-solving environment that supports learning by doing, providing worked examples and personalized feedback.
http://dx.doi.org/10.1016/j.compedu.2014.10.003
D. Wang et al. / Computers & Education 81 (2015) 102–112
2. Related works
Skill, as an advanced cognitive ability, can be understood as the ability to use concepts and rules to solve problems. It is difficult to develop through traditional teaching methods such as lectures and knowledge presentation (Hwang, Kuo, Chen, & Ho, 2014); the learner must practice and reinforce the process continuously to complete the task. In teaching ICT, researchers gradually became aware of the importance of training operational skills and developed a variety of teaching aid systems and simulation tools, such as RCOS (Chernich, Jamieson, & Jones, 1996) and SOsim (Maia, 2003), in order to promote students' understanding of abstract concepts in computer courses and correct their misconceptions. Some simulation-based teaching systems, such as MINIX (Herder, Bos, Gras, Homburg, & Tanenbaum, 2006) and Nachos (Christopher, Procter, & Anderson, 1993), filtered out the complexity of real-life situations so that students could understand the most basic concepts and steps in a relatively simple context (Buendia & Cano, 2006). Web-based learning platforms, such as WebCT and Blackboard, were also used to assist instruction, providing a wide range of learning resources. Based on such platforms and resources, students were able to learn the contents of each module on demand, watching video lessons and reviewing missed contents. To some extent this supports resource-based learning and achieves a better learning effect, but it still cannot support skill acquisition in an effective way.
Feedback is crucial in the process of problem solving. It is the return of information about the learning process according to particular predefined objectives (Gagne, 1985). Learning is promoted when students are guided in their problem solving by appropriate feedback and coaching (Merrill, 2002). Timely feedback and direct error-analytic guidance can help learners tackle the problem (Anderson, Corbett, Koedinger, & Pelletier, 1995), get to know the quality of their work (Moore & Kearsley, 1996), and see the current state of their skills and the gap between the current state and the desired state (Butler & Winne, 1995); based on this, learners can reflect on and adjust their ways of learning so as to achieve effective learning. The feedback for a learner consists not only of adaptive information about his errors and performance, but also of adaptive hints for improving his solution (Lütticke, 2004). Well-structured instructional feedback, together with annotations added to worked examples, can promote effective learning (Lee & Hutchison, 1998).
Providing feedback and guidance for each step of the problem in the process of problem solving is significantly better and more interactive than giving worked examples only (Ashton, Beevers, Korabinski, & Youngson, 2006; Corbalan, Paas, & Cuypers, 2010). Chi, Siler, Jeong, Yamauchi, and Hausmann (2001) found that students who engaged in a more interactive style of human tutoring were able to transfer their knowledge better than students in a didactic style of tutoring. Results that support greater interaction have also been found in studies of intelligent tutoring systems (Person, 2003; VanLehn et al., 2005).
Learners' prior knowledge is believed to be one of the most important factors affecting learning effectiveness (Dochy, 1994; Hailikari, Nevgi, & Lindblom-Ylänne, 2007). Dochy (1994) argued that domain-specific prior knowledge impacts learners' achievement, and prior knowledge facilitates skill acquisition (Posner & McLeod, 1982). Prior computer experience has been found to be an important predictor of performance on subsequent computer-based tasks (Kuo & Wu, 2013; Park, 2001). Furthermore, Charness, Kelley, Bosman, and Mottram (2001) found that breadth of experience with computer software was a strong positive predictor of learning a word-processing application.
Dochy, De Rijdt, and Dyck (2002) and Hailikari et al. (2007) argued that prior knowledge interacts with different phases of information processing. Learners lacking appropriate prior knowledge will have trouble learning new information and constructing new understandings (Ausubel, 2000). Therefore, prior knowledge can influence learners' achievement (Dochy, 1996; Hailikari et al., 2007; Tobias, 1994). Prior knowledge is also an important variable related to e-Learning effectiveness: learners with different levels of prior knowledge benefit differently from a given e-Learning environment (Smits, Boon, Sluijsmans, & Van Gog, 2008). Mitchell, Chen, and Macredie (2005) argued that learners with different levels of prior knowledge had different perceptions about the features of the e-Learning environment, which in turn affected their e-Learning effectiveness. Learners with a poor level of prior knowledge need much more guidance (Mayer, 2002). Worked examples (Clarke, Ayres, & Sweller, 2005), together with interactive feedback, provide effective learning support for students at different levels.
An effective way to acquire basic computer skills is to observe worked examples and then solve problems in context. This concept consists of two aspects: learning from examples and learning by doing. We designed and developed the iTutor system, a problem-solving-oriented ITS. It has two advantages: (1) it extends the traditional ITS model, with an emphasis on tracking the process of problem solving and evaluating students' skill level; (2) it builds a highly interactive problem-solving environment in which students can learn basic computer skills by solving practical problems.
The traditional ITS framework has three parts: the domain model, the learner model and the teaching model. The domain model represents the domain knowledge; the learner model is used to predict students' performance; the teaching model describes the teaching process for students. In order to enhance ITS support for problem solving, the traditional ITS model needs to be extended (Akhras & Self, 2002). This paper presents an extended ITS for the design of iTutor, extending the domain model to a problem-solving situation model, the student model to an interaction-and-process model, and the teaching model to a feedback model, as shown in Fig. 1.
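To make the three extensions concrete, the skeleton below is our own illustration of the extended model (all class and method names are hypothetical; the paper does not publish iTutor's code):

```python
# Illustrative sketch of the extended ITS model: domain -> problem-solving
# situation, student -> interaction and process, teaching -> feedback.
from dataclasses import dataclass, field

@dataclass
class SituationModel:            # extends the domain model
    task: str
    context: str
    worked_example: str

@dataclass
class InteractionProcessModel:   # extends the student model
    steps: list = field(default_factory=list)   # (action, correct) pairs

    def record(self, action: str, correct: bool) -> None:
        self.steps.append((action, correct))

@dataclass
class FeedbackModel:             # extends the teaching model
    def feedback(self, process: InteractionProcessModel) -> str:
        """Targeted feedback derived from the recorded solving process."""
        errors = [action for action, ok in process.steps if not ok]
        return "correct so far" if not errors else f"review steps: {errors}"
```

The key design point is that feedback is computed from the recorded interaction process rather than from a fixed teaching sequence, which is the shift the extended model describes.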
that the process of teaching is in some way forecast, which means drawing up the teaching program according to a prior perception of the knowledge structure and the teaching sequence.
In the process of problem solving, learners are often confronted with changes to their previous knowledge of a field. They have to revisit that prior knowledge repeatedly to gain a better understanding of it in new situations, which cannot be achieved through predefined teaching programs. When building a problem-solving environment in an ITS, the aim is not to determine an explicit teaching sequence, but to provide an interaction space in which students solve the problem, including activities, context, worked examples and so on.
From the perspective of learning environment design, an ITS can build a problem-solving environment by simulating human teachers' teaching activities. The learners interact with the environment, adjusting their learning behavior and activities in the process of problem solving until the learning goal is achieved. Merrill (2002) split the process of problem solving into four distinct phases: activation of prior knowledge, demonstration of skills, application of skills, and integration of these skills into real-world activities. Based on these four phases, we designed the interactive problem-solving environment; the specific content is described below.
motivation. Therefore, the interactive environment, or task environment, is the most important factor in solving the problem and in determining how to solve it.
Based on the framework of the extended ITS and the principles of interactive problem-solving environment design, we present the architecture of iTutor, as shown in Fig. 3.
The main architecture of iTutor is based on a client–server configuration, in which the user communicates with the server through a Web browser. The architecture has five important components, among which interactive process management and learner profile analysis are the core of the system. Through interactive assessment activities, the system records students' interaction data and tracks the whole process of interaction. Combined with the results of learner profile analysis in problem solving, the system can evaluate learners' ability using computer-assisted assessment techniques. According to the rules defined in the analysis and decision module, the system can provide teaching content and teaching methods, so as to give students personalized feedback for solving the problem.
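As a simplified illustration of the interactive-process-management and learner-profile-analysis components (our own sketch; the class and field names are assumptions, not iTutor's actual API), every submitted step can be logged and then reduced to a per-knowledge-point ability estimate:

```python
# Sketch: log each (knowledge_point, correct) interaction, then derive a
# simple learner profile as the fraction of correct attempts per point.
from collections import defaultdict

class InteractionTracker:
    def __init__(self) -> None:
        self.log = []                       # list of (knowledge_point, correct)

    def record(self, knowledge_point: str, correct: bool) -> None:
        self.log.append((knowledge_point, correct))

    def profile(self) -> dict[str, float]:
        """Learner profile analysis: accuracy per knowledge point."""
        totals, hits = defaultdict(int), defaultdict(int)
        for kp, ok in self.log:
            totals[kp] += 1
            hits[kp] += ok
        return {kp: hits[kp] / totals[kp] for kp in totals}
```

The analysis-and-decision module described above would consume such a profile to choose content and feedback; a production system would use a richer skill model than raw accuracy.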
There are two modules in iTutor that provide the worked examples and the real practicing tasks, namely learning from examples and learning by doing. In the learning-from-examples module, learners can choose worked examples by navigating the knowledge structure, as shown in Fig. 4.
In the learning-by-doing module, learners can choose practicing tasks in three ways: skills training, where tasks are listed by chapter; skills self-test, where tasks are presented as test papers generated automatically from the knowledge points selected by the learner; and simulation papers, where tasks are also presented as test papers, but with a relatively complete structure, including multiple-choice, true-or-false and operating questions. All the tasks can be edited by the teachers.
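The "skills self-test" mode described above amounts to filtering a task bank by the learner's selected knowledge points. A minimal sketch of that generation step (the data shape and function name are our own assumptions):

```python
# Hypothetical sketch: build a test paper by drawing up to `per_point` tasks
# from the task bank for each knowledge point the learner selected.
def generate_paper(task_bank: list[dict], selected_points: list[str],
                   per_point: int = 2) -> list[dict]:
    paper = []
    for point in selected_points:
        matching = [t for t in task_bank if t["point"] == point]
        paper.extend(matching[:per_point])   # cap tasks per knowledge point
    return paper
```

Simulation papers would be generated similarly but against a fixed paper structure (multiple-choice, true-or-false and operating sections) rather than a free selection of points.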
Fig. 5 shows the interface for typesetting an essay in Word, which contains two parts: the real problem situation and the interactive control panel.
When learners submit their solution to the current task in the interactive control panel, iTutor automatically diagnoses and evaluates the completion of the problem and provides an evaluation report, as shown in Fig. 6. The report points out whether the learner solved the problem correctly and where they failed, and provides worked examples presenting the steps of the operation.
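One simple way to realize such automatic diagnosis is to express each task step as a named check against the state of the submitted document; the report then lists which steps passed and which failed. The sketch below is our own illustration (the check names and document fields are hypothetical), not the authors' implementation:

```python
# Sketch: evaluate a submission (a dict of document properties) against a
# set of named predicates; failed checks point the learner to the worked
# example for those steps.
def evaluate(submission: dict, checks: dict) -> dict:
    report = {"passed": [], "failed": []}
    for name, predicate in checks.items():
        (report["passed"] if predicate(submission) else report["failed"]).append(name)
    return report

# Hypothetical checks for a Word typesetting task.
checks = {
    "title is bold": lambda d: d.get("title_bold", False),
    "font size is 12pt": lambda d: d.get("font_size") == 12,
}
```

In practice the submission state would be extracted from the actual document (e.g. via an Office automation API) rather than passed in as a dict.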
To achieve competence in solving problems, learners go through the following stages. First, they come to understand and learn the solutions to the problems by observing the demonstration; second, they examine, modify and improve the solutions by relating them to the previous tasks of the problem; finally, they try to complete the task again in the interactive problem-solving environment. The whole process may be repeated several times, with the system keeping track of the operations, evaluating learners' skills and providing targeted feedback. In addition, thanks to the system's openness and scalability, learners can log in to iTutor over the Internet to access more skills-training tasks with personalized learning support, such as online guiding and online evaluation.
4. Methods
4.1. Participants
137 freshmen from four normal classes at South China Normal University and one teacher participated in the research. The teacher taught the Information and Communication Technology (ICT) course and was experienced with both iTutor and traditional web-based instruction. Most of the students had used a computer before, but their skill levels varied. The four classes were randomly assigned to two groups, an experimental group and a control group. The teacher, the learning materials and the practicing time in the computer classroom at school were the same for the two groups, but the teaching methods were different: the experimental group practiced the skills with iTutor, while the control group did not use iTutor but could access the same materials organized in folders.
4.2. Instruments
The research adopted a quasi-experimental design, dividing the four participating classes into two groups. The skills of the two groups were not significantly different (F(1, 135) = 1.111, p > 0.05) in the prior knowledge assessment. Over the six weeks, the teacher taught the basic theoretical knowledge and the operating skills in lectures and assigned practicing tasks, such as producing a résumé, to be done in the practicing time. Each week the learners attended a class lecture (100 min) and had one chance to practice in the computer classroom at school (120 min).
The aims, research design and teaching methods were first introduced to the participating teacher. Then all the students took the prior knowledge assessment. After six weeks of experimental control, all the students took the post-test of the summative assessment at the end of the experiment.
The quantitative data collected include the scores of the prior knowledge assessment and the post-test scores of the summative assessment. For reasons such as machine failure or time conflicts, five students across the experimental and control groups did not finish the summative assessment, leaving the last two or three questions blank. We eliminated these extreme values to avoid their influence on the subsequent analysis, so we collected 132 students' scores on the prior knowledge assessment and the post-test of the summative assessment.
All the data were analyzed with SPSS 17.0, and three types of analysis were performed. First, all students were divided into high-, middle- and low-level prior knowledge groups according to their scores on the prior knowledge assessment: the high-level group comprised students with scores in the upper 33% of all scores, while the middle-level and low-level groups represented the middle and lower thirds respectively. A two-way ANOVA, taking the post-test scores of the summative assessment as the dependent variable and the type of teaching method and the level of prior knowledge as the fixed factors, was then used to test the relationships between the post-test scores and these two factors.
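The tertile grouping step described above can be sketched as follows (scores are made up for illustration; the original analysis was run in SPSS, and an equivalent two-way ANOVA could be fitted with, e.g., statsmodels' `anova_lm`):

```python
# Split prior-knowledge scores into low/middle/high groups at the tertiles,
# mirroring the upper/middle/lower 33% grouping used in the study.
def tertile_groups(scores: list[float]) -> dict[str, list[float]]:
    ranked = sorted(scores)
    n = len(ranked)
    cut1, cut2 = ranked[n // 3], ranked[2 * n // 3]   # tertile boundaries
    groups = {"low": [], "middle": [], "high": []}
    for s in scores:
        if s < cut1:
            groups["low"].append(s)
        elif s < cut2:
            groups["middle"].append(s)
        else:
            groups["high"].append(s)
    return groups
```

With ties at a boundary the groups can be slightly unequal, which matches the study's group sizes (46/43/43 implied by the reported degrees of freedom) rather than an exact three-way split.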
Next, a one-way ANOVA, taking the post-test scores of the summative assessment as the dependent variable and the level of prior knowledge as the fixed factor (three levels), was used to test the relationship between the post-test scores and the prior knowledge factor within each of the two teaching methods. The Least Significant Difference (LSD) post hoc test was also used to compare the learning effectiveness of students with different levels of prior knowledge in the iTutor group and in the control group.
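For readers unfamiliar with the procedure, the one-way ANOVA F statistic and the LSD post hoc statistic can be computed by hand as below (a self-contained sketch with made-up data; the study itself used SPSS, and p-values would come from the F and t distributions):

```python
# One-way ANOVA: F = (between-group mean square) / (within-group mean square).
# LSD post hoc: pairwise t using the pooled within-group mean square (MSW).
import math

def one_way_anova(groups: list[list[float]]):
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n_total
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    msw = ssw / (n_total - k)
    f = (ssb / (k - 1)) / msw
    return f, msw, n_total - k        # F statistic, MSW, error df

def lsd_t(gi: list[float], gj: list[float], msw: float) -> float:
    """LSD statistic for one pair of groups, evaluated against t(error df)."""
    mi, mj = sum(gi) / len(gi), sum(gj) / len(gj)
    return (mi - mj) / math.sqrt(msw * (1 / len(gi) + 1 / len(gj)))
```

Because LSD uses the pooled MSW and applies no multiplicity correction, it is the most liberal of the common post hoc tests, which fits the exploratory pairwise comparisons reported here.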
Further, this research also used a one-way ANOVA, taking the post-test scores of the summative assessment as the dependent variable and the type of teaching method as the fixed factor (two levels), to test the relationship between the teaching method factor and the post-test scores of students with low-, middle- and high-level prior knowledge. The LSD post hoc test was also used to compare the learning effectiveness of students with low-level and middle-level prior knowledge across the two teaching methods; the learning effectiveness of students with high-level and middle-level prior knowledge was compared across the two teaching methods in the same way.
5. Results
5.1. The influence of different levels of prior knowledge and different types of teaching methods on student learning effectiveness
First, all students were divided into three groups according to their scores on the prior knowledge assessment (see Table 1).
Before the two-way ANOVA, the homogeneity of variance assumption was tested (F(5, 126) = 1.403, p > 0.05); the result indicated that the assumption was not violated. The results of the two-way ANOVA are shown in Table 2.
Table 2 shows that both the teaching method factor (F(1, 131) = 28.844, p < 0.01) and the prior knowledge factor (F(2, 131) = 10.895, p < 0.01) have significant impacts on the post-test scores of the summative assessment. The results of the LSD post hoc test (Table 2) show that student learning effectiveness in the iTutor group is significantly better than in the control group (p < 0.01).
Table 1
Descriptive statistics for the different prior knowledge groups.
Table 2
Two-way ANOVA on different types of teaching methods and different levels of prior knowledge (n = 132).
This finding can be explained with reference to Merrill (2002), who pointed out that a problem-solving-oriented learning environment is the most effective. Further, the results of the LSD post hoc test also indicate that students with high-level and middle-level prior knowledge have significantly better learning effectiveness than those with low-level prior knowledge (p < 0.01). As Spyridakis and Isakson (1991) pointed out, prior knowledge can affect how learners associate new knowledge with what they already know.
In addition, Table 2 also shows a significant interaction effect between the teaching method factor and the prior knowledge factor (F(2, 126) = 6.750, p < 0.01). Therefore, one-way ANOVA was used for further analysis, as discussed below.
5.2. Learning effectiveness of students with different levels of prior knowledge in different types of teaching methods
Before the one-way ANOVA, the homogeneity of variance assumption was tested (iTutor group: F(2, 72) = 3.013, p > 0.05; control group: F(2, 54) = 0.350, p > 0.05); the results indicated that the assumption was not violated in either group. The results of the one-way ANOVA are shown in Table 3.
With regard to the iTutor group, Table 3 shows that the prior knowledge factor has no significant impact on the post-test scores of the summative assessment (F(2, 72) = 0.320, p > 0.05), meaning that in the iTutor group, students' level of prior knowledge is not significantly related to their learning effectiveness. In the control group, by contrast, the prior knowledge factor has a significant impact on the post-test scores (F(2, 54) = 14.586, p < 0.01), meaning that students' level of prior knowledge significantly affects their learning effectiveness. Furthermore, the results of the LSD post hoc test (Table 3) show that, in the control group, students with middle- and high-level prior knowledge have significantly better learning effectiveness than students with low-level prior knowledge, whereas there is no significant difference between the middle-level and high-level students (p > 0.05).
In addition, a one-way ANOVA was also conducted to understand the learning effectiveness of students with each level of prior knowledge across the two teaching methods. Before the ANOVA, the homogeneity of variance assumption was tested (high-level group: F(1, 44) = 1.531, p > 0.05; middle-level group: F(1, 41) = 0.756, p > 0.05; low-level group: F(1, 41) = 0.228, p > 0.05); the results indicated that the assumption was not violated in any group. The results of the one-way ANOVA are shown in Table 4.
Table 4 shows that the teaching method factor has a significant impact on the post-test scores of the summative assessment (middle-level group: F(1, 41) = 5.050, p < 0.05; low-level group: F(1, 41) = 33.830, p < 0.01). The high-level prior knowledge students in the experimental group performed better, but the difference was not statistically significant.
In the process of learning, students may encounter different forms of cognitive issues (Bangert-Drowns, Kulik, Kulik, & Morgan, 1991), and detailed feedback can promote deeper conceptual understanding and help students apply rules to more complex task contexts (Winne, 1989).
Based on Tables 3 and 4, compared with materials stored in folders, iTutor enables learners at any level of prior knowledge to experience more effective learning. Further, iTutor facilitates greater learning for learners with low-level prior knowledge, raising their learning effectiveness to match that of learners with better prior knowledge. Learners with low-level prior knowledge need more guidance and assistance, and in the iTutor group they received more feedback; this feedback played the role of a teacher, guiding and instructing them step by step. Hence, learners with different levels of prior knowledge experience statistically equivalent learning effectiveness in the experimental group. There is no such design in the control group, so learners with low-level prior knowledge perform significantly worse there. Fig. 8 presents these results in an intuitive way.
Table 3
One-way ANOVA on the two groups by different levels of prior knowledge.
Table 4
One-way ANOVA on the low-level, middle-level and high-level prior knowledge groups by different types of teaching methods.
Mimicking human teachers to implement one-to-one personalized teaching to a certain extent is a popular but difficult topic in research on learning environment design. Extending the traditional ITS architecture and exploring new methods of modeling students' learning processes and performance are two key issues for e-Learning, and solving them will contribute to its wider adoption. In this paper, the authors extended the traditional ITS model, applying the concept of problem-oriented learning environment design to the ITS architecture.
iTutor was then developed on the extended ITS architecture to construct a real problem-solving situation in which students practice basic computer skills. It poses complex, real-world problems and provides just-in-time personalized feedback and on-demand advice, such as worked examples, to support students in solving the problems, and students are allowed to work at their own pace. iTutor can partially replace the practical guidance of the instructor, which has particular value for distance education students and self-learners, because they can get a real sense of individualized guidance. Moreover, applying iTutor frees teachers from the arduous labor of guiding practical work, allowing them to put their time and energy into more creative work.
An ITS can respond to students' cognition (Del Solato & Du Boulay, 1995), and cognitive abilities are related to students' success in problem solving (Yen, Rebok, Gallo, Jones, & Tennstedt, 2011). This paper was not sufficient to verify iTutor's role in promoting students' problem-solving ability and innovation. So far, iTutor has mainly been used in the training and application of basic computer skills. The next step in our research will be to apply iTutor on a larger scale and analyze, with the large amount of data collected, whether it helps to promote students' advanced cognitive abilities. In addition, the proposed system and method can be extended to other competence-training courses, such as computer program design, driver training, and physics or chemistry experiments. Related research is being carried out (Liang, Liu, Xu, & Wang, 2009).
Acknowledgments
This research has been partially funded by the Chinese National Education Examinations Authority Planning Project 2009KS2002 and the Natural Science Foundation of China #61305144. The authors would like to thank all the students who participated in the evaluation studies, as well as the rest of the iTutor research team, for their efforts and contributions to the ideas in this article.
References
Akhras, F. N., & Self, J. A. (2002). Beyond intelligent tutoring systems: situations, interactions, processes and affordances. Instructional Science, 30(1), 1–30.
Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: lessons learned. The Journal of the Learning Sciences, 4(2), 167–207.
Ashton, H. S., Beevers, C. E., Korabinski, A. A., & Youngson, M. A. (2006). Incorporating partial credit in computer-aided assessment of mathematics in secondary education. British Journal of Educational Technology, 37(1), 93–119.
Ausubel, D. P. (2000). The acquisition and retention of knowledge: A cognitive view. Springer.
Bangert-Drowns, R. L., Kulik, C.-L. C., Kulik, J. A., & Morgan, M. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61(2), 213–238.
Barron, L., Campbell, J., Bransford, O., Ferron, O., Goin, B., & Goldman, E. (1992). The Jasper project: an exploration of issues in learning and instructional design. Educational
Technology Research and Development, 40(1), 65e80.
Bloom, B. S. (1984). The 2 sigma problem: the search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4e16.
Buchanan, T. (2000). The efcacy of a World-Wide Web mediated formative assessment. Journal of Computer Assisted Learning, 16(3), 193e200.
Buendia, F., & Cano, J. (2006). WebgeneOS: a generative and web-based learning architecture to teach operating systems in undergraduate courses. IEEE Transactions on
Education, 49(4), 464e473.
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: a theoretical synthesis. Review of Educational Research, 65(3), 245e281.
Charness, N., Kelley, C. L., Bosman, E. A., & Mottram, M. (2001). Word-processing training and retraining: effects of adult age, experience, and interface. Psychology and Aging,
16(1), 110.
Chernich, R., Jamieson, B., & Jones, D. (1996). RCOS: yet another teaching operating system. In Proceedings of the 1st Australasian conference on computer science education (pp.
216e222). ACM.
Chi, M. T., Siler, S. A., Jeong, H., Yamauchi, T., & Hausmann, R. G. (2001). Learning from human tutoring. Cognitive Science, 25(4), 471e533.
Christopher, W. A., Procter, S. J., & Anderson, T. E. (1993). The Nachos instructional operating system. In Proceedings of the USENIX Winter 1993 Conference Proceedings on
USENIX Winter 1993 Conference Proceedings. USENIX Association. pp. 4e4.
Clarke, T., Ayres, P., & Sweller, J. (2005). The impact of sequencing and prior knowledge on learning mathematics through spreadsheet applications. Educational Technology
Research and Development, 53(3), 15e24.
Corbalan, G., Paas, F., & Cuypers, H. (2010). Computer-based feedback in linear algebra: effects on transfer performance and motivation. Computers & Education, 55(2),
692e703.
Del Solato, T., & Du Boulay, B. (1995). Implementation of motivational tactics in tutoring systems. Journal of Articial Intelligence in Education, 6, 337e378.
Dochy, F. (1994). Prior knowledge and learning. In International encyclopedia of education (pp. 4698e4702).
Dochy, F. (1996). Assessment of domain-specic and domain-transcending prior knowledge: entry assessment and the use of prole analysis. In Alternatives in assessment of
achievements, learning processes and prior knowledge (pp. 227e264). Springer.
Dochy, F., De Rijdt, C., & Dyck, W. (2002). Cognitive prerequisites and learning how far have we progressed since bloom? Implications for educational practice and teaching.
Active Learning in Higher Education, 3(3), 265e284.
Gagne, R. M. (1985). The conditions of learning and theory of instruction. CBS College Publishing.
Greeno, J. G., & van de Sande, C. (2007). Perspectival understanding of conceptions and conceptual growth in interaction. Educational Psychologist, 42(1), 9e23.
Hailikari, T., Nevgi, A., & Lindblom-Yl anne, S. (2007). Exploring alternative ways of assessing prior knowledge, its components and their relation to student achievement: a
mathematics based case study. Studies in Educational Evaluation, 33(3), 320e337.
Herder, J. N., Bos, H., Gras, B., Homburg, P., & Tanenbaum, A. S. (2006). MINIX 3: a highly reliable, self-repairing operating system. ACM SIGOPS Operating Systems Review, 40(3), 80–89.
Hwang, G.-J., Kuo, F.-R., Chen, N.-S., & Ho, H.-J. (2014). Effects of an integrated concept mapping and web-based problem-solving approach on students' learning achievements, perceptions and cognitive loads. Computers & Education, 71, 77–86.
Kalyuga, S., Chandler, P., Tuovinen, J., & Sweller, J. (2001). When problem solving is superior to studying worked examples. Journal of Educational Psychology, 93(3), 579–588.
Kuo, C.-Y., & Wu, H.-K. (October 2013). Toward an integrated model for designing assessment systems: an analysis of the current status of computer-based assessments in science. Computers & Education, 68, 388–403.
Lee, A. Y., & Hutchison, L. (1998). Improving learning from examples through reflection. Journal of Experimental Psychology: Applied, 4(3), 187.
Liang, Y., Liu, Q., Xu, J., & Wang, D. (2009). The recent development of automated programming assessment. In Computational intelligence and software engineering, 2009. CiSE 2009. International Conference on (pp. 1–5). IEEE.
Lütticke, R. (2004). Problem solving with adaptive feedback. In Adaptive hypermedia and adaptive web-based systems (pp. 417–420). Springer.
Maia, L. (2003). SOsim: Simulator for operating systems education.
Mayer, R. E. (2002). Multimedia learning. Psychology of Learning and Motivation, 41, 85–139.
Merrill, M. D. (2002). First principles of instruction. Educational Technology Research and Development, 50(3), 43–59.
Mitchell, T. J., Chen, S. Y., & Macredie, R. D. (2005). Hypermedia learning and prior knowledge: domain expertise vs. system expertise. Journal of Computer Assisted Learning, 21(1), 53–64.
Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. Belmont, CA: Wadsworth.
Park, R. (2001). Examining age differences in performance of a complex information search and retrieval task. Psychology and Aging, 16(4), 564–579.
Person, N. K. (2003). AutoTutor improves deep learning of computer literacy: is it the dialog or the talking head? In Artificial intelligence in education: Shaping the future of learning through intelligent technologies (Vol. 97, p. 47).
Posner, M. I., & McLeod, P. (1982). Information processing models—in search of elementary operations. Annual Review of Psychology, 33(1), 477–514.
Shi, H., Rodriguez, O., Shang, Y., & Chen, S. (2002). Integrating adaptive and intelligent techniques into a web-based environment for active learning. Intelligent Systems: Technology and Applications, 4, 229–260.
Smits, M. H., Boon, J., Sluijsmans, D. M., & Van Gog, T. (2008). Content and timing of feedback in a web-based learning environment: effects on learning as a function of prior knowledge. Interactive Learning Environments, 16(2), 183–193.
Spyridakis, J. H., & Isakson, C. S. (1991). Hypertext: a new tool and its effect on audience comprehension. In Professional Communication Conference (IPCC), IEEE International (Vol. 1, pp. 37–44).
Sweller, J., & Cooper, G. A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 2(1), 59–89.
Tobias, S. (1994). Interest, prior knowledge, and learning. Review of Educational Research, 64(1), 37–54.
VanLehn, K., Graesser, A., Jackson, G. T., Jordan, P., Olney, A., & Rose, C. (2005). When is reading just as effective as one-on-one interactive human tutoring. In Proceedings of the 27th annual meeting of the cognitive science society (pp. 2259–2264).
Van Merriënboer, J. J. G. (1990). Strategies for programming instruction in high school: program completion vs. program generation. Journal of Educational Computing Research, 6(3), 265–285.
Wang, T.-H. (2007). What strategies are effective for formative assessment in an e-learning environment? Journal of Computer Assisted Learning, 23(3), 171–186.
Winne, P. H. (1989). Theories of instruction and of intelligence for designing artificially intelligent tutoring systems. Educational Psychologist, 24(3), 229–259.
Xu, J., & Liu, Q. (2001). IT skills automated testing and assessment: Theory, technologies and assessment. Science Press.
Yen, Y.-C., Rebok, G. W., Gallo, J. J., Jones, R. N., & Tennstedt, S. L. (2011). Depressive symptoms impair everyday problem-solving ability through cognitive abilities in late life. The American Journal of Geriatric Psychiatry: Official Journal of the American Association for Geriatric Psychiatry, 19(2), 142.