
Methods

Introduction

The problem the researcher identified was that students were not internalizing chemistry material in a way that allowed them to access it as the course progressed through the school year. This manifested in students forgetting simple processes that were integral to their comprehension of new material. Additionally, students became mired in the complexity of new tasks and lacked a fundamental entry point into new concepts owing to insufficient mastery of previous topics.

This descriptive quantitative action research study examined the relationship between attainment of chemistry concepts and several different pedagogical approaches. That is, the researcher studied the impact that argumentation training, Socratic dialogue, and engagement with a modeling curriculum had upon comprehension in a high school chemistry classroom. The study was designed to measure student growth against several baseline measurements as a result of these pedagogical approaches.

Participants

The study was conducted in a largely rural school in northeastern Wisconsin, with the student population for the study being a representative sample of the school's student body. The study location had 804 students enrolled in the 2015-2016 school year. Seventy-one students self-enrolled in the author's chemistry course, and all 71 were included in the study. Participants ranged in age from 15 to 17. Of that total population, 12.9% were students with disabilities and 42.3% were economically disadvantaged. The gender breakdown for the course was half male and half female. The ethnic composition of the school was as follows: 74.9% Caucasian, 16.7% American Indian, 0.5% Asian, 1.0% Black, 4.5% Hispanic, and 2.5% two or more races. Students with special needs could be broken down into the following categories: 0.7% were labeled autistic, 2.5% had an emotional behavioral disability (EBD), 0.2% had a hearing impairment, 1.1% were intellectually disabled, 2.5% were labeled other health impaired (OHI), and 5.6% were learning disabled (WISEdash, 2016).

Procedure

Student participants were scored using the Chemical Concepts Inventory in early September to establish how they responded to model-based thinking. The test was repeated at the conclusion of the study in late December, and participant scores were analyzed for differences in understanding. Additionally, during the course of the study, students were asked to respond to several writing prompts dealing with modeling activities, as well as meta-cognitive thinking about those activities. These writings were scored multiple times using the same rubrics, again looking for improvement in student outcomes.

Participants were also asked to clarify their reasoning in both written and verbal communication. Students were given explicit instruction in making sound logical arguments (claim, evidence, and reasoning structures), and their progress was monitored as they wrote lab reports so that improvement could be tracked. To deliver the study's treatment, the author modified his teaching practices, remaining mindful of the approaches highlighted in the study and incorporating them at specified intervals.

Research Design

The author designed this quantitative, quasi-experimental study to determine the impact of certain teaching methodologies on the achievement of his chemistry students. Specifically, the author wanted to know the impact these techniques would have on students in his own classroom. The research tools utilized during the course of this study, as well as the frequency and timing of their use, are described in the following paragraphs.

To test how well students gained proficiency in the modeling process, the author administered a pretest in September of 2016. At the conclusion of the study, a posttest was administered in January of 2017. The tool used for this comparison was the Chemical Concepts Inventory exam (Appendix A) (Mulford & Robinson, 2002), which has a focus on modeling proficiency embedded within it.

Students performed several classroom activities to demonstrate change in modeling ability (an example of which is given in Appendix B), each of which was scored with the same rubric (Appendix C). Students were assessed with like activities on four separate occasions from October 2016 to January 2017. The rubric used to assess student growth in this area was created to be applicable across all of these tasks, so that growth could be accurately mapped. Students were also asked to respond to modeling question prompts (Appendix D) as exit tickets four times over the same time frame, and these responses were assessed via the same rubric (Appendix E) each time to show student growth.

In an effort to assess student and teacher growth in the use of Talk Moves, colleagues were asked to take field notes on four occasions. Two of these note-taking sessions focused on student usage of Talk Moves (Appendix F), counting the number of times students used Talk Moves early in the time frame of the study (September of 2016) and once again at a later time (January of 2017). The author also asked a colleague to count how many times he utilized Talk Moves in conversations with students, again assessed twice, in September of 2016 and January of 2017 (Appendix G). Participants were asked to provide feedback on the implementation of Talk Moves in the classroom through an online survey platform (Appendix H) at the conclusion of the study in February of 2017.

To measure student use of claim, evidence, and reasoning structures in their arguments, the author began by focusing on laboratory reports. The author's departmental colleagues have used a common lab report form (Appendix I) for some time, and the author separately recorded performance on the relevant portions of the report rubric (Appendix J). The author performed this calculation for every laboratory activity completed throughout the course of the study, September 2016 to February of 2017.

As part of a classroom formative assessment strategy, the author assigned writing prompts (Appendix K) five times from September 2016 to January of 2017. These student writings were assessed using a rubric (Appendix L) that asked students to construct reasoned responses with supported positions. Students were given multiple opportunities to demonstrate proficiency. During the course of the study, growth was measured through monitoring averaged values of student achievement.

Data Analysis

All data were collated to gauge changes in student comprehension in the chemistry classroom. Basic descriptive statistics were applied to each measure and reported to identify trends among various groups of students, in an effort to determine whether the study parameters were an effective treatment. The tools used to form conclusions were administered multiple times over the course of the study, and repeated use showed a consistent grouping of student responses, supporting the reliability of the measurements. Student participants showed an increased level of attainment on these measures, and the spread of scores about the mean (the standard deviation) remained within acceptable parameters.
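The descriptive analysis outlined above can be sketched as follows. The scores below are hypothetical, not data from the study; they serve only to illustrate the computation of means, standard deviations, and pretest-to-posttest gains on an instrument such as the Chemical Concepts Inventory.

```python
from statistics import mean, stdev

# Hypothetical inventory scores for five students
# (illustrative only; not data from the study).
pretest = [8, 10, 7, 12, 9]
posttest = [13, 14, 11, 16, 12]

# Per-student gain: posttest score minus pretest score.
gains = [post - pre for pre, post in zip(pretest, posttest)]

print(f"Pretest mean:  {mean(pretest):.1f} (SD {stdev(pretest):.2f})")
print(f"Posttest mean: {mean(posttest):.1f} (SD {stdev(posttest):.2f})")
print(f"Mean gain:     {mean(gains):.1f}")
```

In practice the same summary would be computed for each rubric-scored measure at each administration, so that averaged values could be compared across the study's time frame.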

The data tools utilized in this study were well placed to triangulate students' abilities and to measure growth in the three areas of focus. The author designed the assessments so that each research question had a minimum of three measurements, ensuring that students were given multiple opportunities to demonstrate growth. Additionally, these three measurements differed significantly from one another, so that students had multiple modalities through which to demonstrate acquisition of new skills.
