
Critical review

The present study compared gains in students' critical thinking measured with general and subject-specific questions. In agreement with McMillan's (1987) review and with more recent studies by Williams (2003) and Williams et al. (2004), larger pretest-posttest gains were found on tests containing questions related to the course in which the tests were given (e.g., introductory psychology) than on tests with questions focused on general topics. Perhaps one reason for this finding is that, although students are likely to be aware of the topics in general measures, they are not required as often to engage in critical thinking about these everyday topics as they are when studying a particular subject (e.g., psychology). In other words, it may be that as instructors teach the skills that foster critical thinking, much of the focus falls within a subject-specific context.

In addition, the length of time between the pretest and posttest was only about 40 min, and the subjects were first-year students. Therefore, the opportunity to engage in a variety of learning experiences over a longer time span, such as 3-4 years, ought to contribute to gains in critical thinking skills that may be more detectable with a general measure.

In addition to being significant in both statistical and practical terms, this finding is noteworthy for another reason. As with intelligence, it is not unreasonable to think that an appreciable gain in critical thinking skills would occur after exposure to a variety of courses and experiences over an extended period. Therefore, given that one could expect only a small effect under the restricted conditions of this experiment, namely minimal exposure to the independent variable in such a short time frame, these results are encouraging. Arguably, such minimal exposure would lead to a narrower range in the degree to which subjects had experienced the treatment, which would tend to attenuate its relationship with scores on the dependent variable.

A related limitation concerns the short duration of the intervention: it becomes more difficult to determine the degree to which the students' pretest and posttest scores actually reflect a level of critical thinking that was influenced by the treatment. Alternatively, given that the entire experiment (i.e., pretest, intervention, and posttest) was completed in a single, brief session, it is possible that these findings are a function of other influences, such as the students' orientation to what was required on the assessments. In future research, it will be helpful to reduce the effects of these confounding influences by assessing these skills more comprehensively (i.e., with varied measures) and over a longer period.

An ongoing concern in controlled experiments such as this is the degree to which students put forth a genuine effort to complete the tasks involved. The students who took part in this study did so to help fulfill the research participation requirement in the introductory psychology course. This requirement is based solely on the number of research studies in which the student participates and has nothing to do with the quality of the participation. As an incentive to make the students in this study try their best in completing the critical thinking tests and the review questions, each student was promised a lottery ticket if his or her scores were above a hypothetical standard on all three tasks (i.e., pretest, review questions, posttest). The effectiveness of this incentive is questionable for two reasons.
First, although both groups showed a pretest-posttest improvement on the course-specific psychology subtest, it was somewhat surprising that there was not a larger improvement, given that the critical thinking tests referred to a relatively short passage that students had read through for about 45 min. Second, the lower scores obtained on the review questions by students in the higher-order condition (4.35 out of 8, on average) may suggest that these students were not engaged in higher-order thinking as much as was expected. One way to address the concern regarding students' motivation to try their best on each task in the experiment might be to provide a more valuable incentive. At the post-secondary level, one of the clearest, most immediate incentives is grades. Therefore, one possibility would be to include the experimental tasks as a small part of a course, with a corresponding weight in the final grade. To deal with the ethical issues involved, such an experiment would require a within-subjects design, such that each student receives exactly the same materials from which he or she will be assigned a grade. The review questions could consist of both lower-order questions that pertain to one half of the passage (e.g., based on one chapter) and higher-order questions that pertain to the rest.

The findings in this study suggest that gains in students' critical thinking skills are more clearly detected with items focusing on specific course content than with items on general issues assumed to be familiar to a student in any discipline. Concerning research that examines the link between institutional and instructional processes and student outcomes, two implications are worth noting. First, Pascarella and Terenzini (2005) conclude in their extensive review that, controlling for incoming ability and maturational effects, most studies found a significant gain in critical thinking from the freshman to the senior year; however, the formats used to assess critical thinking in these studies include general measures, subject-specific measures, and self-ratings. Thus, it is possible that some of the studies considered in their review that used general measures of critical thinking may have underestimated the extent of student gains. Using subject-specific measures of critical thinking may help colleges better assess gains in students' critical thinking skills, both within departments and across entire institutions, based on measures obtained from several departments. From another perspective, Halpern (2001) suggests that because critical thinking ought to be a skill that students can continue to use indefinitely after graduation, and therefore should be transferable to novel contexts outside of a particular course, gains over a longer enrollment (e.g., from first year to fourth year) might be more appropriately assessed with a general measure. Conversely, shorter-term gains (e.g., over one semester) might be more detectable with subject-specific measures.

The second implication has to do with identifying valid institutional processes (e.g., campus tutoring programs) and instructional processes (e.g., higher-order questioning in class) in terms of their relation to intended outcomes, including gains in students' critical thinking skills. Pascarella and Terenzini (2005) point out that while significant gains in critical thinking have been found as a result of attending college, we are only beginning to learn what actually contributes to this improvement. A more sensitive, course-based measure of critical thinking could help better determine the effectiveness of various educational processes. This would provide better empirical evidence concerning the factors that contribute to student learning and development, which can be used to make more informed and justified choices both at the institutional level and within a particular course.
