
This site is mainly about your own individual practice as a teacher, and as such it tries to take into account your particular circumstances, such as the students you teach (assumed largely to be over school-age), your subject, and your setting (school, college, university, work-based or informal adult education). It recognises that it is difficult and even unreasonable to generalise, but we ought to set alongside this the results of very generalised research in the form of meta-analyses.

Meta-analysis is more commonly found in medicine and epidemiology than in education, and it has its limitations, but it can also make very strong points. It is simply the technique of searching for all the existing research reports on a particular issue and combining them to get an overall result (there is a small illustrative sketch of such pooling at the end of this introduction). A moment's thought, particularly if you know anything about research methods, will tell you that this is fraught with problems. Has the issue been defined in exactly the same way by all researchers? If not, how do you adjust the results? If the research is on the interaction between two variables (say, the use of ICT with disaffected learners), what category do you put it in? Do you rate the validity and reliability of the findings, or just assume that if it has been published, it must be right? What about all the unpublished research which never made it into print because it questioned the conventional wisdom of the day? And what about the research which produced negative results which no one ever bothered to publish, despite the inestimable value of knowing what is not the case as much as knowing what is? And so on. However, its proponents argue that many of these problems cancel each other out when you take a large enough research base, and that others can be mitigated by the choice of the meta-assessment tool.

The most prominent meta-meta-analyst in education is probably John Hattie, whose work draws on "a total of about 800 meta-analyses, which encompassed 52,637 studies, and provided 146,142 effect sizes [...] these studies are based on many millions of students" (Hattie, 2009: 15). Note, however, that the evidence is collected across all phases of education (primary and secondary, and some post-compulsory) but is dominated by children in the school sectors. Some of the issues therefore pose more questions than answers for those of us more interested in the post-16 sectors. In the discussion of low "developmental effects" below, for example, they are attributed simply to a child growing up over the course of a year. Clearly that accounts for far more change between the ages of 6 and 7 than between 16 and 17, or 26 and 27.
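Here is that illustrative sketch. It shows one standard way of pooling effect sizes from several studies, inverse-variance (fixed-effect) weighting, in which more precise studies count for more. The study values are invented, and this is not a claim about Hattie's own aggregation procedure; it is only meant to make "combining them to get an overall result" concrete.

    # Minimal sketch of a fixed-effect meta-analysis: each study contributes an
    # effect size d_i with a variance v_i; the pooled estimate is the
    # inverse-variance weighted mean. All values below are invented.

    studies = [
        # (effect size d, variance of that estimate)
        (0.55, 0.04),
        (0.30, 0.02),
        (0.72, 0.09),
    ]

    weights = [1.0 / v for _, v in studies]      # more precise studies get more weight
    pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5      # standard error of the pooled estimate

    print(f"pooled effect size d = {pooled_d:.2f} (SE = {pooled_se:.2f})")
    # pooled effect size d = 0.43 (SE = 0.11) for these invented numbers

In practice meta-analysts also check how consistent the studies are with one another and may use more elaborate (random-effects) models, but this weighted-mean idea is the core of "combining" studies.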

Hattie's common denominator


In common with standard meta-analysis practice, Hattie's bottom line is the "effect size". An effect size of "1" indicates that a particular approach to teaching or technique advanced the learning of the students in the study by one standard deviation above the mean. OK, that's rather technical:

    An effect-size of d=1.0 indicates an increase of one standard deviation... A one standard deviation increase is typically associated with advancing children's achievement by two to three years*, improving the rate of learning by 50%, or a correlation between some variable (e.g., amount of homework) and achievement of approximately r=0.50. When implementing a new program, an effect-size of 1.0 would mean that, on average, students receiving that treatment would exceed 84% of students not receiving that treatment. Cohen (1988) argued that an effect size of d=1.0 should be regarded as a large, blatantly obvious, and grossly perceptible difference [such as] the difference between a person at 5'3" (160 cm) and 6'0" (183 cm), which would be a difference visible to the naked eye. (Hattie, 2009: 7-8; my emphasis)

Notes: effect size is commonly expressed as d; correlation is commonly expressed as r.
* In 1999 (p. 4), Hattie only claimed achievement was advanced by one year. I have no idea why he changed his mind.

So an effect size of "1" is very good indeed, and correspondingly rare; the chart below shows that only about 75 individual studies reached that level. Hattie's more recent table of the later, expanded set of meta-analyses themselves shows, by my counting, only 21 meta-analyses with mean effect sizes of over 1 (out of 800+) (see Hattie, 2009: fig. 2.2, p. 16).
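To make the arithmetic behind those claims concrete, here is a minimal sketch in Python with invented scores (none of this is Hattie's data or code). Cohen's d is the difference between group means divided by their pooled standard deviation; and, assuming roughly normal distributions, d = 1.0 puts the average treated student at about the 84th percentile of the untreated group, because the standard normal cumulative distribution function evaluated at 1 is about 0.84.

    from statistics import mean, stdev
    from math import erf, sqrt

    def cohens_d(treatment, control):
        """Difference in means divided by the pooled (sample) standard deviation."""
        n1, n2 = len(treatment), len(control)
        s1, s2 = stdev(treatment), stdev(control)
        pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        return (mean(treatment) - mean(control)) / pooled_sd

    def proportion_exceeded(d):
        """Proportion of the comparison group that the average treated student
        exceeds, assuming normal distributions: the standard normal CDF at d."""
        return 0.5 * (1 + erf(d / sqrt(2)))

    # Invented test scores, purely for illustration
    treated = [68, 74, 71, 80, 77]
    control = [65, 70, 68, 75, 72]
    print(f"d = {cohens_d(treated, control):.2f}")   # about 0.93 for these invented numbers
    print(f"{proportion_exceeded(1.0):.2f}")         # 0.84: exceeds roughly 84% of the comparison group
    print(f"{proportion_exceeded(0.4):.2f}")         # 0.66: an effect of 0.4 is far less dramatic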

[Chart: distribution of effect sizes across individual studies; based on Hattie, 2003]

This is more or less a normal distribution curve. (The mean effect size is 0.4. Sorry about the alignment of the labels; blame the spreadsheet's charting macro.)

That mean of 0.4 is what Hattie calls the "hinge point". He uses a "barometer" chart or gauge on which a needle can be superimposed in the appropriate position (diagram based on Hattie, 2009: fig. 2.4, p. 19 et passim).

Reverse effects (below 0.0) are self-explanatory.

Developmental effects (0.0 to 0.15) represent the improvement a child may be expected to show in a year simply through growing up, without any schooling. (These levels are determined with reference to countries with little or no schooling.)

Teacher effects: "Teachers typically can attain d=0.20 to d=0.40 growth per year, and this can be considered average" (p. 17), but subject to a lot of variation.

Desired effects are those above d=0.40 which are attributable to the specific interventions or methods being researched. (There is a small illustrative sketch of these bands below.)

Much less than that deserves less effort, and is marginal. On the other hand, sometimes simple interventions, such as advance organisers, pay off because, although not terrifically effective, the return on a very small investment is substantial.

"Problem-solving teaching" has an effect size of 0.61 (2009 figures), and comes fairly naturally in most disciplines. But "problem-based learning" overall had d=0.15, and developing it requires a very substantial investment in time and resources. So that's a non-starter, isn't it? Not necessarily; like a potent drug, it needs to be correctly prescribed. For acquisition of basic knowledge it actually had a negative effect, but for consolidation and application, and work at the level of principles and skills, it could go up to d=0.75. Not much use in primary schools, but a different matter on professional courses at university (which is where it is generally found). (Hattie, 2009: 210-212)

Given that we can't do everything, where should we concentrate our efforts? The majority of innovations and methods "work", according to the meta-analysis (bearing in mind the point above, that unless substantial funding and contractual obligations to publish are involved, most researchers are not inclined to publish negative findings).
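Here is that small sketch of the bands. It is purely my own illustration, not Hattie's code, and it treats everything between 0.15 and the 0.40 hinge point as the teacher-effects band, which is a simplification of the figures quoted above.

    def barometer_zone(d):
        """Classify an effect size into the bands of Hattie's "barometer",
        as described above (band boundaries are my reading of Hattie, 2009)."""
        if d < 0.0:
            return "reverse effect"
        if d < 0.15:
            return "developmental effect (expected from maturation alone)"
        if d < 0.40:
            return "teacher effect (typical year-on-year growth)"
        return "desired effect (above the d = 0.40 hinge point)"

    print(barometer_zone(0.61))   # problem-solving teaching -> desired effect
    print(barometer_zone(0.15))   # problem-based learning overall -> teacher-effect band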

But which work really well, and which have such a marginal effect that it is not worth the bother? That is the critical question. Here is part of the answer! (Follow links for comment; and please note that the comments are my spin on the topics, in relation to post-compulsory education, not Hattie's.)

Influence                                  Effect Size   Source of Influence
Feedback                                      1.13       Teacher
Students' prior cognitive ability             1.04       Student
Instructional quality                         1.00       Teacher
Direct instruction                             .82       Teacher
Remediation/feedback                           .65       Teacher
Students' disposition to learn                 .61       Student
Class environment                              .56       Teacher
Challenge of Goals                             .52       Teacher
Peer tutoring                                  .50       Teacher
Mastery learning                               .50       Teacher
Homework                                       .43       Teacher
Teacher Style                                  .42       Teacher
Questioning                                    .41       Teacher
Peer effects                                   .38       Peers
Advance organisers                             .37       Teacher
Simulation & games                             .34       Teacher
Computer-assisted instruction                  .31       Teacher
Testing                                        .30       Teacher
Instructional media                            .30       Teacher
Affective attributes of students               .24       Student
Physical attributes of students                .21       Student
Programmed instruction                         .18       Teacher
Audio-visual aids                              .16       Teacher
Individualisation                              .14       Teacher
Finances/money                                 .12       School
Behavioural objectives                         .12       Teacher
Team teaching                                  .06       Teacher
Physical attributes (e.g., class size)        -.05       School

The table above is edited from Hattie, 2003. Note that I'm not using the 2009 figures, because there is now so much information that it is very difficult to digest in a form like this, and the broad categories have now been sub-divided. Do read the primary source [insofar as a meta-analysis can be a primary source].
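If you want to play with those figures yourself, here is a tiny sketch that splits a handful of rows from the 2003 table above at the d = 0.40 hinge point; the rows are copied from the table purely for illustration.

    # A few rows from the table above (influence, effect size), for illustration only.
    influences = [
        ("Feedback", 1.13),
        ("Direct instruction", 0.82),
        ("Homework", 0.43),
        ("Advance organisers", 0.37),
        ("Computer-assisted instruction", 0.31),
        ("Team teaching", 0.06),
        ("Physical attributes (e.g., class size)", -0.05),
    ]

    HINGE = 0.40  # Hattie's hinge point

    above = [name for name, d in influences if d >= HINGE]
    below = [name for name, d in influences if d < HINGE]

    print("At or above the hinge point:", ", ".join(above))
    print("Below the hinge point:", ", ".join(below))

Crude as it is, this is essentially the judgement the hinge point is meant to support: concentrate effort where the likely return clears d = 0.40, unless the cost of an intervention is so low (advance organisers, again) that a modest effect is still worth having.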
