
Cambridge Journal of Education, Vol. 35, No. 3, November 2005, pp. 383–405

How can research inform ideas of good practice in teaching? The contributions of some official initiatives in the UK
Sally Brown*
University of Stirling, UK

*Institute of Education, University of Stirling, Stirling FK9 4LA, UK. Email: s.a.brown@stir.ac.uk

There is evidence of progress over the last decade in the quest for research to inform ideas of good
practice in teaching. This function of research has achieved increased status and funding, and
attention has focused on issues such as how teachers learn and evidence-based practice. This has
been supported by several ‘official’ initiatives, three of which are given special attention in this
paper. Teacher/researcher networking has contributed to greater involvement of teachers in
research and discredited simple-minded ideas of ‘delivery’ models of evidence to be embedded in
practice. However, placing research entirely in teachers’ hands has revealed concerns about
research quality. Different forms of collaborative relationships between researchers and teachers,
and among projects within a programme, have increased understanding of the conditions under
which research can impact on practice. What and how to disseminate evidence to wider audiences
remains a challenge.

Introduction
As Reese (1999) has commented, the problem of how ideas shape action is an old
one in social and intellectual history, but historians have written remarkably little
about whether educational research has had any real impact on practice.
Furthermore, traditional modes of dissemination, where news of ideas or findings
was supposedly transferred from researchers to teachers and thence on to other
teachers (cascade, linear or centre-periphery models), have largely been discredited
as naïve, simplistic and ineffective (e.g. Brown & McIntyre, 1993; Hargreaves,
1999). It is, perhaps, remarkable how little understanding we have of the processes
by which research findings can be transformed into the everyday knowledge that is
needed by users (Taylor, 2002). Yet the research–practice relationship continues to
demand our attention. So how has the work in this area, at the end of the twentieth
and start of the twenty-first century, taken forward the question of whether

educational research can and should inform the practice of teaching? To what extent
have official educational initiatives explored, encouraged and provided research
opportunities that have led to greater understanding of such influences?
This article offers an overview of such questions from my perspective as someone
who has not, at least in recent years, been actively working on these areas of
educational research, but has been heavily involved in the appraisal of relevant
research proposals and outputs. Such appraisal has arisen partly from my
involvement with a number of formal assessment agencies, including membership
during the 1990s of the Economic and Social Research Council’s Boards for
Research Grants and Postgraduate Training. However, experience with two official
initiatives has been particularly relevant to the issue of the research–practice
relationship. First, there has been the UK higher education Research Assessment
Exercise where I chaired the education panel for 2001 and was a member of the
1992 and 1996 panels. Secondly, there is the Teaching and Learning Research
Programme where I was Vice-Chair of the Steering Committee for five years to
2004. This paper draws heavily on this experience of the operation and outcomes of
these initiatives as it examines the debates around, and changes in, priorities for
educational research and its relationship to practice. A third area of official
initiatives, in which grants have been provided for teachers’ own research, is also
considered, but this is one where I have had no personal experience and so rely on
the literature.

Changing emphases in priorities for educational research


Over the last two decades, educational research in the United Kingdom has been
subject to an accountability process far more searching than in previous years.
Initially, the most obvious form that this took was the introduction by the Higher
Education Funding Councils, in the late 1980s, of the Research Assessment Exercise
(RAE) with its focus on judgments about the quality of research based on criteria for
excellence. Traditionally, the research community was trusted to identify and
operate these criteria. Even though there were patent and substantial disputes about
philosophy and methodology within that community’s boundaries, there was a broad
measure of agreement about the general principles of judging quality.
Since the mid-1990s, however, the pattern of public demand for what is to count
as quality educational research has changed. Much more emphasis has been placed
on what is seen by some as the need for a direct and instrumental relationship
between educational research and practice, but more generally as the importance of
promoting greater understanding of practical matters through applied and practice-
based research. Furlong and Oancea (2005) have provided examples of the
developments that have illustrated this new emphasis, and pointed to the
requirement for more conceptual clarity about what is meant by practice-based
research and how its quality may be assessed. However, even if meanings still lack
some clarity, there is a clear and widespread challenge to any notion that there will
always be distinct separation between the communities of research and of practice.
And the idea, articulated by policy-makers as well as researchers, that practice should always be evidence-based or evidence-informed has fuelled the changes in
thinking. As John Nisbet has described it,
research has become part of every professional role today, and in education one task of
professional development is to weave a research element into the expertise of teachers,
leading them to adopt at a personal level the self-questioning approach which leads to
reflection and understanding, and from there into action. (Nisbet, 2005, p. 43)

The case for change was nowhere put more vehemently than in an annual lecture
to the Teacher Training Agency by David Hargreaves (1996) where he argued that if
teaching were to become a research-based profession, then it would be both more
effective and more satisfying. He urged researchers to engage in partnership with
practitioners to establish networks that would collaborate on strategic research plans
and priorities. He also called for the establishment of a National Educational
Research Forum (for England) to shape the agenda and policy for research and
offered a model for evidence-based practice from the field of medicine. The Forum
was subsequently established, but the medical model has been the focus of some
dissent (for example, Hammersley, 1997; Hargreaves, 1997). Hargreaves' commentary was closely followed by other adverse criticisms of the quality and relevance of
much current educational research (Hillage et al., 1998; Tooley & Darby, 1998).
These other criticisms generated a challenging debate, including suggestions that they were themselves based on poor quality research. However, significant action for change
has also emerged from a much more serious consideration of the research–practice
interface.
The intention of this article is not to look comprehensively at the broad debate,
but to consider some of the arguments that have been articulated and the progress
that has been made in promoting high quality research that is relevant to the
improvement of practice, developing partnerships and networks among researchers
and practitioners, and exploring how practice can take account of evidence. Initially,
it introduces some commentary on two large-scale official initiatives: higher
education’s Research Assessment Exercise (RAE) and the Teaching and Learning
Research Programme (TLRP), both of which have provided frameworks that are
relevant to the linking of research to the development of teachers’ professional
knowledge.

The Research Assessment Exercise


The RAE has made a powerful policy impact since the late 1980s. This exercise has
been undertaken every few years by the Funding Councils to assess the quality of
research in higher education. It has been seen as having had a profound impact, for
both good and ill, on institutions’ research plans and strategies. Over time, the RAE
has put increasing emphasis on the recognition of applied research. This has suited
education rather better than some other fields and has recognised the importance of
establishing relationships between research and issues of public interest, especially
good practice in teaching.

However, coping with both the linkage between research and teaching and the
need to sustain traditional criteria for quality research has engendered a considerable
challenge for those who undertake the assessments. For the 2001 assessments, the
education panel made judgments about the quality of research on the basis of
features such as originality, contribution to the advancement of knowledge,
methodological strength, scholarly rigour and relevance for other researchers, policy
makers and practitioners (UK Funding Councils, 1999, p. 301). Decisions about
whether research outputs were of international excellence sought evidence of
substantial knowledge of developments in theory and practice, significant empirical
findings, conceptual contributions, innovative methodologies or techniques,
theoretical developments or contributions to innovative developments in policy
and practice (UK Funding Councils, 1999, p. 303). There was considerable scope
here for practice-related research, though many reading these lists of criteria may not
have seen this as a priority for the judgments of high quality that were to be
undertaken.
It was made clear that research output published in professional as well as
academic outlets would be valued (UK Funding Councils, 1999, p. 304). However,
one of the problems to be faced was, and still is, the difficulty that ensues when
material for teachers is published in a way that is acceptable to them, but lacks the
research detail necessary to identify it as quality work. To deal with this, submissions
were asked to indicate the relationship of teacher-targeted material to the underlying
research and to explain how the quality of that research might be assessed. In
addition, extra information for each piece of research output was requested to
indicate the field of enquiry, the prime audience and the educational significance.
Provision of this extra information was a source of irritation to many of those
preparing submissions, illustrating the additional demands on researchers if they are
to demonstrate convincingly the quality of research output that is practitioner-
oriented. Nevertheless, it made the panel’s assessment of research output much
more straightforward and allowed a clearer sense of the nature of research
environments that were practice-based. Given these efforts to accommodate and
value research relating directly to practice, the criticisms by Roberts (UK Funding
Councils, 2003) of the 2001 RAE for not recognising the importance of knowledge
transfer and collaborative research would seem harsh judgments to apply to
education.
For the 2008 RAE it is clear that the UK Funding Bodies (2005a) recognise the
need to include in the panels practitioners and other users of research (as they did in
2001), and to ensure that the criteria against which practice-based research will be
judged are appropriate. The sub-panel for education has argued in its draft criteria
(UK Funding Councils, 2005b, UOA 45 paragraphs 18–29) that there is no set of
clear and distinguishable categories into which educational research can be classified
and that all types of output, including applied and practice-based research, will be
judged on the basis of the criteria of rigour, significance and originality. In describing
what is to count as applied or practice-based research, the sub-panel has adopted the
broad and inclusive definition put forward by Furlong and Oancea (2005) of

an area situated between the academia-led theoretical pursuits and research informed
practice, and consisting of a multitude of models of research explicitly conducted in,
with, and/or for practice. (Furlong & Oancea, 2005, p. 9)

Once again, the sub-panel calls for extra information on each research output, but
significantly more guidance and exemplification than in 2001 is provided on how
this might be structured, especially for practice-based research. It remains to be seen
whether this approach will further facilitate the assessments to be made by the hard-
pressed sub-panel members, who have to read a great deal of material and make
extensive judgments very quickly indeed.
There still remain challenges and a deal of hard work for researchers in judging the
quality of, and making the case for, their own applied or practice-based research in
their RAE 2008 submissions. Furlong and Oancea (2005) have suggested
dimensions of practice-based research quality covering methodological and scientific
robustness, instrumental value for practice, development of practitioners’ tacit
knowledge and critical approach to practice, and an economic element to ensure
money for research is well spent. They have commented on the need for research
that is trustworthy, makes a genuine contribution to knowledge, offers what is
salient, timely and accessible for practitioners, and develops practical wisdom
through collaborative, reflective and critical partnerships. Perhaps this somewhat
complicated framework may be helpful in constructing submissions and a useful
spur to what continues to be a rather limited debate on how to justify the quality of
practice-based research. However, as the authors make clear, they do not provide
any specific indicators or thresholds of quality.
There is still a question about the extent to which the priorities have moved, if at
all, towards research that could inform ideas for good practice in teaching. A
preliminary analysis by Oancea (2004a, 2004b) of the research output submitted to
the 2001 RAE education panel has indicated that departments with lower ratings for
their research quality tended to direct their research more towards teachers, students
and school administrators, and target less of it to researchers and policy-makers.
Furthermore, she has suggested these departments might be seen as placing more
emphasis on research into teaching and learning and issues specific to educational
settings than do departments undertaking research that is rated more highly:
departments rated 3a and under potentially foster a research culture that favours
considerations of use and strong links with the teaching profession (with a view to
development). (Oancea, 2004b, p. 5)

Although this finding might have some face validity, the differences were relatively
small and it would be risky to claim that the research in higher-rated departments
did not also seek strong links with teachers. It seems plausible to suggest that the
highly-rated departments might have had more publications of all kinds to offer, but
felt more secure in submitting those appearing in eminent journals that are mainly of
interest to researchers rather than practitioners. At present, the evidence base is
clearly not strong enough to conclude that practice-based research is, in general, of a
lesser quality than other forms of educational research. What can be said is that there
have been many submissions that have appeared to find it difficult to articulate
effectively the link between practical commentaries for teachers and the high quality
research from which those commentaries have arisen. In the next RAE (2008), it
may be possible to estimate better how, and how much, high quality research output
informs practice.
It is clear that the intention of the RAE is no longer to accept any kind of barrier
that deters the sector from undertaking research that is deliberately designed to
inform practice. How that will be achieved for educational research is, in part, a
responsibility of the new sub-panel, but it is also dependent on those submitting
their research materials and profiles.

Teaching and Learning Research Programme


The very significant Teaching and Learning Research Programme (TLRP) has been
funded by the Higher Education Funding Council for England and government
departments of education in all four of the UK countries. It is
The largest co-ordinated programme of educational research ever funded in the United
Kingdom. It has a budget of £28m and is managed by the Economic and Social
Research Council. It involves over 350 researchers and around fifty projects, networks,
fellowships and seminar series which investigate teaching and learning in all sectors of
education from early years to adult and workplace learning … It aims to combine
research of the highest quality with research that has high relevance to the concerns of
practitioners and policy makers… [It] aspires to inform educational practice and
provision. (James, M., 2005, p. 3)

One way of looking at the aims of TLRP is to see it as an intention to conduct research that has the potential to improve outcomes for learners in a very wide range
of UK contexts. The range of outcomes is much greater than the readily measurable
attainments and skills that have been emphasised by many bureaucratic and
performance-based models of learning. It includes the acquisition or development of
more complex attributes: understandings, knowledge, attitudes, values and
identities. The programme is underpinned by an assumption that, despite some
arguments in the literature to the contrary, it is appropriate for research to explore
the relationships between pedagogical approaches and learning improvements. It
acknowledges, however, that this cannot take a simple-minded ‘what works’
approach, but must rather be seen as the pursuit of much greater understanding of
how learning comes about and how it is linked to teaching approaches. There is,
therefore, a concern with the exploration of what may count as ‘improving’ or even
‘good’ practice, but there are no assumptions about uncovering some holy grail of
‘best’ practice—a phrase relentlessly and misguidedly beloved of politicians. A
further crucial commitment is to the communication of ideas and findings of the
research to all those concerned with effective teaching, not just to researchers, and to
involving users of research in the planning and implementation of the research itself.
So to what extent has this been achieved and what has it told us about the
relationship between research and practice?
A preliminary response to this forms part of the rest of this article, but TLRP will
continue for at least another three years until 2008. We are, therefore, a long way
from being in a position to evaluate the impact on practice of the programme as a whole. However, individual projects have started to publish and otherwise
disseminate their findings. They are producing articles, books and newsletters as
well as engaging directly with policy-makers and practitioners. Collections of papers
focusing on issues that span the programme are starting to appear in journals, and
bulletins designed for schools and for researchers have been produced by the
Programme Director’s team. Full details of all that is available are to be found on the
TLRP website (www.tlrp.org) and through the links it provides to the projects’
websites.
It is important to see TLRP as a programme and not just a set of projects. The
intention is that a significant amount of the output will look across projects in
support of more general understandings of teaching and learning. For example, one
aspect of the outcomes of the research, which is centrally important for looking at the
relationship between research and practice, concerns teacher learning. Mary James,
Deputy Director of TLRP, has edited a special issue of Research Papers in Education
(20[2], June 2005) on teacher learning with articles from five of the programme’s
projects. She argues that the improvement of pupil learning often implies that there
will be changes in classroom practice, and such changes only occur where teachers
have learned. An assumption that underpins the programme is the importance it
places on the actual engagement of researchers and practitioners to encourage this
teacher learning.

Partnerships and networks of researchers and practitioners


In recent years, many (for example, Hargreaves, 1999) have asserted the importance
of networks that include researchers and practitioners if educational research and the
creation of professional knowledge are to be reinforced effectively. An important
feature of TLRP has been the inclusion of practitioners and policy-makers, as well as
academics, on its Steering Committee. This committee has had the responsibility of
commissioning and evaluating the research projects’ work. The practitioners have
come from schools, further education and work-place learning as well as higher
education. This has provided a spread of expertise able to judge not only the
potential of the research designs to provide trustworthy research findings, but also
the chances that these findings would have relevance for policy and practice.
However, the potential quality of the research has been the first requirement; no
matter how relevant the findings are seen to be to policy or practice, the basis has to
be sound research. The selection of referees for assessing proposals has also
reflected, as ESRC have generally done for several years, a mix of academic
researchers, practitioners and policy-makers. And a characteristic of the commissioning, especially for projects in the first phase of the programme, was the
requirement that networks of researchers and practitioners should have a prominent
role in projects’ activities.
The ‘mix’ of users and researchers has also been reflected in thematic groups that
have been formed to look at cross-programme themes. As an example of the
research–practice link, one of these has involved the analysis of learning outcomes
and their role in the programme. It has published some of its findings in a special
issue of The Curriculum Journal (2005, 16[1]). This journal
is written for those professionals in the education services wishing to influence future
directions in education for the better. The Journal is designed to integrate academic and
practitioner-based writing and scholarship. (Inside cover of the Journal.)

The collection has six papers written by researchers looking across the programme
and covering research that is based in schools, further education and higher
education, and three articles responding from the perspectives of ‘users’ of research
in schools, further education and professional and workplace learning.
Among other things, the researcher-authored papers could be seen as fulfilling
four roles for practitioners. First, they have communicated the breadth of learning outcomes in the programme; secondly, they have recognised the frustrations that have been brought about by policies that have focused on very limited measurable outcomes; thirdly, they have attempted to broaden the concept of learning outcomes by consideration of metaphors; and fourthly, they have addressed the question of the circumstances under which practitioners are likely to be confident that the research has something to say that makes it worth them changing their practice. They have
also identified problems that are still to be resolved for both researchers and
teachers. A particular concern expressed here was the lack of sophisticated tools that
could provide evidence of the effectiveness of new pedagogies.
The papers comprising responses from the users of research have illustrated some
aspects of progress that TLRP has made in the engagement of practitioners with
research, but they also offer warnings about the distance that has yet to be travelled.
They document the influence of the learning environment (characteristics of the
learners or the learning context), difficulties of conceptualising more complex types
of learning (attitudes, dispositions, values, identities) and problems of finding
appropriate and convincing measures of assessment. For example, Threadgold
(2005, p. 97), from a school perspective, acknowledged that research findings have a
significant impact in circumstances where the practitioner has been actively involved
in the planning and/or process of research, but she warned of the challenge of
reaching the wider professional audience with all their expectations of easy solutions
to practical problems. On the theme of what research is likely to be taken account of
by practitioners, she argued that
To engage practitioners on this broader front, however, may not be straightforward
because [school] teachers are traditionally drawn to learn from evidence found in
contexts with which they are familiar. It is likely therefore that teachers’ most immediate
engagement will be with those individual projects that are domain or subject specific.
(p. 98)

Entwistle (2005), in the context of higher education, also concluded


Effective teaching is likely to be achieved by helping teachers to understand how to
interpret research findings within their own context and circumstances, and so how to
identify the strongest influences on their own students. (p. 81)

In addition to relevance-to-context, there is the matter of what many regard as the narrow, nationally determined meanings of 'learning'. School teachers' impatience
with imposed attainment targets, further education’s concern with the equating of
learning outcomes with vocational qualifications, and higher education’s staff
frustrations with the ‘learning outcomes’ model that their quality assurance system
obliges them to conform to in articulating what they are trying to achieve with their
students, are concerns that researchers and practitioners have in common
(Entwistle, 2005, p. 72; James, D., 2005, p. 86; Threadgold, 2005, p. 98). The
importance of being able to pursue much broader forms of learning, beyond what is
easily measurable, is evident.

Broadening teachers’ concepts of learning


A particular priority for TLRP has been the question of how to extend ideas about
what is to count as learning. One approach to this has been a consideration of some
of the metaphors that underpin the ways in which learning is construed. In
particular, use has been made of three metaphors: of acquisition and participation
(Sfard, 1998), and of construction (Hager, quoted in Hodkinson & Hodkinson,
2005). Learning as acquisition is pervasive in education. It refers to the
accumulation of knowledge by the individual and, as Sfard has argued,
the idea of learning as gaining possession over some commodity has persisted in a wide
spectrum of frameworks, from moderate to radical constructivism and then to
interactionism and socio-cultural theories. (Sfard, 1998, p. 5)

In contrast, the participation metaphor implies that the ‘permanence of having gives
way to the constant flux of doing’ and ‘suggests that the learner should be viewed as a
person interested in participation in certain kinds of activities rather than in
accumulating private possessions’ (Sfard, 1998, p. 6). The focus is on learning to
become a member of some community or group and on sharing its language, culture
and patterns of activity. Detailed considerations of the participation and acquisition
metaphors in the context of TLRP are to be found in Edwards (2005) and
McGuinness (2005).
The construction metaphor is illustrated in the context of research on the
development of teachers’ learning in TLRP. It is suggested that in this context
learning is essentially a matter of construction … concerned with changing the learner—
constructing a developing and hopefully improving teacher through engagement with
the process of learning. (Hodkinson & Hodkinson, 2005, p. 113)

These authors conclude that in relation to teachers’ learning,


combining the perspectives of learning as social and workplace participation and those of
learning as personal construction … points to more effective ways of understanding and
improving that learning’ (p. 114, my emphasis).

To what extent is this a realistic aspiration? One of the features of the pressures
from policy initiatives and directives in schools is the emphasis, for both pupils and
teachers, on learning as acquisition. This emphasis is also reflected in TLRP
research projects within the school sector of education where pupil outcomes
construed as attainments, understandings and concepts have dominated. Outcomes
associated with the participation metaphor (social practices, dispositions, membership, access and inclusion) have been more characteristic of research on the post-
compulsory period of education (James, M. & Brown, 2005, p. 17). It is not that
policy-makers have been unaware of the importance of other aspects of learning, but
their powerful priority for accountability in terms of measurable learning outcomes
has made them, and teachers, reluctant to consider how thinking about teaching and
learning might be more broadly conceived.
What the programme tries to do is to find ways of engaging and broadening
practitioners’ thinking about learning, whether through the support of metaphors or
otherwise. Threadgold (2005, p. 97), from her school perspective, has welcomed
information of this kind that has broadened out the ways that the complexities of
learning can be construed, but she sees the debate about these frameworks among
teachers as something that is still to come.
There is an important question about whether the settings in which teachers work
provide encouragement for such debate and so promote teachers’ own learning.
Traditional environments may not bode well, especially in secondary schools where
boundaries between subjects and schools have discouraged broad collaboration,
time for reflection has been very limited and compliance with government or school
priorities for measurable pupil learning outcomes has taken precedence over other
enterprises. However, one advantage of having a programme of research is the
opportunity that TLRP provides for cross-fertilisation of ideas. A useful concept,
developed in a project on work-based learning of apprentices in different parts of the
steel industry (Fuller & Unwin, 2003, 2004), concerns ‘expansive’ and ‘restrictive’
learning environments. An expansive environment, unlike the traditional restrictive
kind described above, is one that provides and supports many diverse opportunities to learn, work collaboratively with mutual trust, undergo personal
development and engage in extended debate and discussion without demands to
conform to official orthodoxies. General ideas about how this might be achieved
for teacher learning have been explored by Hodkinson and Hodkinson (2005,
pp. 124–128), but they warn of the need for changes in the ways teachers work,
the attention given to workplace inequalities and the consideration of how
different teachers are differentially affected by, and respond to, new ideas and
practices.
There are some TLRP projects where the involvement of teachers in the research
could be seen as providing an expansive environment and as illustrating the cautions
identified above. For example, McIntyre et al. (2005) have described an aspect of
their networked research on pupil voice in three secondary schools. Their research
question was:
if teachers are given access to in-depth evidence on their pupils’ thoughts about what
they, the teachers, do that does and does not help the pupils’ learning, and on their
pupils’ suggestions about what they might do differently, how will the teachers respond?
(McIntyre et al., 2005, p. 151)

The researchers provided a significant system of support and encouragement for the teachers to develop their own thinking in this area and, in terms of the
metaphors, this could be seen as an opportunity for learning as participation and as
construction. A particularly interesting feature of this work was a key role for the
researcher as
a research assistant for the teachers, gathering for them detailed pupil ideas about more
and less helpful learning opportunities in specific lessons, and about more helpful
alternatives. (McIntyre et al., 2005, p. 151)

One aspect of the pupils’ learning in this context was the preparation to be citizens
in a democratic society, and the researchers acknowledged how the government’s
priority for citizenship education had provided an important stimulus for their
support system (p. 160). They recognised that the teachers were more likely to
innovate where there were support structures and, as might be added, an expansive
environment. However, they also were conscious of the need for teachers to be
committed to the innovation and to choose whether they spent their time on this or
on complying with the other demands made upon them. The level of that
commitment was likely to be different for different teachers.
In this example, the researchers started from their own premises that pupils have a
right to be consulted and that if teachers respond to what they have to say then
teaching, learning and motivation will be enhanced. Neither the policy directives nor
the teachers’ existing practices placed emphasis on these matters. The research
demonstrated, however, that it was possible to give the teachers experiences that
persuaded them that pupils had sensible things to say about the teaching and
learning and could offer constructive ideas about how things might be different. This
is not to say, however, that there was a general change in teachers’ classroom
practice in response to pupils’ ideas. At one level the teachers, unsurprisingly, opted
for ideas that affirmed or extended their current practices. The immediate impact of
the research seemed to be greatest on the practice of the least experienced teachers
who were open to new ideas. Over time, however, the reality of the powerful
National Curriculum and assessment demands appeared to shift the pupil-
responsive approaches of these teachers into more peripheral aspects of the
teaching. More experienced and successful teachers took longer to become
enthusiastic, but once they themselves had developed the approach, and recognised
its value, there was evidence that they were able to build pupil consultation into
their everyday classroom practice. Finally there were those teachers who had
problems with pupil consultation, either because they did not have the confidence
that pupils had significant ideas to offer or because they expected too much from the
approach.
The cautions identified by the Hodkinsons, about the difficulties of setting
up effective expansive environments, were reinforced by these experiences.
Nevertheless, with some caveats, this project illustrates rather well the possibilities
and problems of engaging teachers in approaches to their work of a kind that they
would not have thought to develop themselves, but where there is research evidence
for the value of such approaches.

The opportunities to broaden practitioners' understandings of what is meant by learning are important, but how has our understanding of their readiness and ability
to make use of evidence moved on in recent years?

Evidence-based practice
It seems obvious to argue that practice should rest on a sound base of evidence,
especially that coming from relevant and sound research. However, the aspiration to
provide systematic, cumulative, unequivocal evidence to inform practice has proved
anything but straightforward, not least because the relationship between research
evidence and the improvement of practice is highly complex. From a position that
the task of evidence-based education is (a) to utilise evidence from the literature and
research and determine its relevance to the educational needs of the particular
environment and (b) to establish sound evidence where existing evidence is lacking
or of questionable quality, Davies (1999) concluded that:
Evidence-based education … is not a panacea, a quick fix, cookbook practice or the
provider of ready made solutions to the demands of modern education. It is a set of
principles and practices which can alter the way people think about education, the way
they go about educational policy and practice and the basis on which they make
professional judgments and deploy their expertise. (p. 118)

At a practice level, as Sandison (2003) has argued, the question is not ‘what
works’ because this is too narrow a focus to inform practice or connect with teachers’
decision-making. The question is rather what is an appropriate course of action in
particular circumstances. In general, there seems to be considerable agreement with
Helen Simons’ call to research to ‘strive to produce the best contextually sensitive
evidence we can … but let us not overpromise’ (Simons, 2003, p. 309). So, while it is
clear that evidence-based practice cannot resolve all the problems of guaranteeing
standards in education, we can still be hopeful that evidence, where it is of high
quality and responsive to the contexts and purposes of practitioners, will have a
critical part to play in the improvement of practice. It is perhaps significant that in
recent years the term ‘evidence-informed’ has often replaced ‘evidence-based’
(Hargreaves, 1999; Sebba, 1999) acknowledging the fact that professional practice
both generates and uses evidence, and accepting the central role of the user of the
research in determining how use is made of its findings.
An example illustrating the conditions under which such use is made was provided
by the evaluation of a national programme in England that put teachers at the heart
of a search for evidence on which to base improvements in their practice (Simons
et al., 2003). The authors commented on teachers’ preference for collaborative
activities, including both teachers and researchers, and the ways in which this built
confidence, research involvement and an understanding of the complexities of the
research–practice relationship. They concluded:
First, that teachers need to interpret and reinterpret what evidence means for them in
the precise situation in which they are teaching. Second, that presentation of evidence
needs to remain closely connected to the situation in which it arose, not abstracted from
it. Third, the collective interpretation and analysis of data by peers seems to act as a
validity filter for acceptance in practice. (p. 361)

They also reminded us (pp. 359–360) that what turns experience in a specific
context into evidence is its communicability and applicability or relevance to other
contexts. However, evidence does not come (except in very rare circumstances) as
freestanding bits of information that are generalisable to all other settings. While
some kind of generalisation has to lie at the heart of evidence-based practice, it has to
be a ‘situated generalisation’ where teachers interpret and adapt the evidence
according to the ways in which they see how their own contexts are similar or
different from the original.
There remains a question about the conditions under which practitioners are
prepared to give their attention to, and learn from, Davies’ evidence-informed
principles. There are indications, especially from the action research literature, that
teachers who have been involved in research, usually small scale and often in higher
degree work, are more likely to have access to, and make use of, other research. This
was briefly explored in one TLRP project (Ratcliffe et al., 2005) that set out to
develop a better understanding of science teachers’ recognition and use of research
findings in the course of their normal practice. Part of this study compared those
teachers who had had first hand experience of a research culture with those who had
not. It concluded that this experience seemed to enable them ‘to view practice
through an evidence-informed lens, bringing their understanding to bear if their
professional context allows’ (p. 183).
The science teacher study also found that use of research evidence in the
classroom was dependent on the extent to which research findings were translated
into tangible and useful outcomes, such as curriculum or assessment materials, and
on the presence of a professional culture that encouraged both consideration of
research and changes in practice. The researchers concluded that, in order to impact
on practice, research findings would have to be convincing, resonate with teachers’
own experiences, be translated into practical classroom strategies and be widely
disseminated through teachers’ professional networks. They acknowledged, how-
ever, that some of the teachers still had unrealistic expectations that the research
would give incontrovertible ‘what works’ information.
Even where general relevance to the task of teaching is achieved, there is the
question of the level of teachers’ familiarity with the particular ideas that are the
focus of the research evidence, how these ideas relate to the ways in which they
already make sense of their own classroom work and how this influences their
responses. Another TLRP study, on transforming research on morphology into teacher practice (Hurry et al., 2005), faced a tougher challenge than the science education research. In this case, the teachers did not
have explicit knowledge about the role of morphemes in reading and spelling. The
researchers’ task included persuading teachers to develop knowledge in this area,
convincing them that it was valuable to teach pupils about morphology, helping
them understand how to do it, providing the necessary resources and ensuring that a
development of this kind was officially sanctioned.

The researchers saw the two driving forces underpinning the research as the power
of an idea and the transformation of that idea into practice, but they did not
prioritise any exploration of how the teachers were already construing their own
classroom activity. Their findings indicated the difficulty of changing ‘practices
which are already governed by previous behaviour and a range of hierarchical
structures’ and the ease of underestimating ‘how much effort it takes teachers to
make this transformation’ into practice (p. 203). Despite recording some encoura-
ging signs of improvements in children’s spelling, through the use of a traditional
experiment-and-control approach, one year later the researchers were disappointed
to find that the classroom strategies they had sought to instil had by then largely
fallen out of use. This confirmed the difficulty of embedding basic new ideas into
teachers’ practice, even when the research evidence is sound and apparently
convincing.
It would be a serious omission not to mention here the large body of school effectiveness research (SER), which has enjoyed a high profile, especially during its 1990s heyday, and has aimed to provide an evidence base of, for example, 'known-to-be-valid knowledge' as a foundation for teaching (Reynolds, 1995, p. 59). In an earlier paper with colleagues and from a Scottish education
perspective (Brown et al., 1995), I commented on the clear achievement of school
effectiveness research in capturing the attention of policy-makers and engaging them
in a dialogue about the meaning of research findings and their application to policy
and practice. What has been less clear, however, is the extent to which policy-makers
have been enabled to understand the complexities and limitations of the field in ways
that will divert them from the seductive and naïve expectation that knowledge can
be ‘delivered’ to teachers for use in the classroom. The sense of a top-down
management process, where generalisable research findings are transformed
into central development planning and so to classroom teaching, is still a pervasive
aspiration. The problem is that the chances of success of this strategy are profoundly
limited because it takes so little account of how the teachers, who must implement the changes, and so any improvements, make sense of their own classroom work.
One of the disjunctions of the literature of both school effectiveness and school
improvement has been, on the one hand, the apparent acceptance that the crucial
decision-making is in the classroom while, on the other hand, the research focus that
has invariably been placed on the school. Furthermore, in generating frameworks for
the collection of data, it has been the researchers’ constructs that have been used and
these have rested on the hunches that they, or policy-makers, have about what are
the important variables for promoting effectiveness. This is not to assert that the
hunches have been irrational or implausible, but any attempt to innovate in
classrooms has to start from where teachers are and how they construe their own
teaching, their pupils and what they are trying to achieve in their context (Brown &
McIntyre, 1993, pp. 15–16). If the intention is to introduce change, what is
important is to take account of how the teachers make sense of their educational
world and to address the variables that are most salient in their thinking.

Given these features of SER, it is not surprising how rarely positive references to
its literature have been made by researchers involved in work with teachers on
classroom teaching and learning, or how scant have been the examples where the
evidence it has collected has had an impact on classroom practice. More generally,
Clark (2005) has mounted an attack on SER from the perspective of a philosopher
and, in so doing, has delivered a stinging commentary on the idea of evidence-based
education.
The proposal to run an educational system as a scientific, research-driven, ‘evidence-
based’, industrial project, compiling ‘methods that work’, with education as the output
is a morally objectionable muddle. (p. 299)

Teachers’ own research


Hargreaves (1996) was not the first to propose an increased and much more
profound involvement of teachers in research. One notable forerunner was Stephen
Corey who suggested more than 50 years ago in his book Action research to improve
school practices
that teachers, supervisors, and administrators would make better decisions and engage
in more effective practices if they, too, were able and willing to conduct research as a
basis for those decisions and practices. The process by which practitioners attempt to
study their problems scientifically in order to guide, correct, and evaluate their
decisions and actions is what a number of people have called action research. (Corey,
1953, p. 6)

Similar instrumental and other arguments were advanced in England from the late
1960s and early 1970s (for example, Stenhouse in Rudduck & Hopkins, 1985;
Elliott, 1991) and led quickly to an extensive ‘teacher as researcher’ movement. The
action research approach has avoided the traditional distinction between research
and practice and enabled the teachers involved in enquiry to integrate the research
directly with action in the workplace. However, it has generally not aimed, as Corey
proposed, to promote a scientific model for research. Action research’s claim has
been that teachers will learn most effectively and change behaviour in circumstances
where there is personal engagement in identifying a practical concern as the focus of
the research, designing the study, taking action, collecting evidence, formulating
conclusions and feeding these back into practice. It has further suggested that this
involvement will be an incentive for teachers to seek out the literature of other kinds
of research.
The actual patterns of action research have varied considerably. Inevitably, the
enquiries have developed a complexity that has made it difficult to produce accounts
that are easily understood in their entirety or can display with confidence any
generalisability to other situations. But the value of action research is determined
primarily by the extent to which it leads to improvement in individuals’ practice in
their unique settings and the depth of their understanding of the complexities of
their work, i.e. the promotion of practical wisdom, situated knowledge and specific
insights (for example, Somekh, 1995). For wider dissemination it is possible, of
course, to provide tentative hypotheses that, as long as there is information about the
context in which the work was carried out, teachers can share with others who may
be working in somewhat similar circumstances.
Despite the value of a sense of ownership resting with the teachers, there has been
a need for support from others in developing research expertise. In some cases, this
support might come from experienced researchers acting as empathetic facilitators
or critical friends rather than directors of developments. Others would argue,
however, that it is within the group of teachers themselves, working in cooperative
and non-competitive ways, where the emotional and professional support system
resides (Lomax, 1999).
Although action research can often be accomplished without spending large
amounts of additional money, it is very demanding of teachers’ time and
commitment to ‘after hours’ work. Where teachers are encouraged by government
to see their role as essentially a technical one of acquiring certain ‘competences’
which they then use to implement that government’s priorities, there is an
understandable reluctance to increase what they see as their ‘over-load’ in order
to explore other aspects of policy and practice. Over recent years many have come to
see themselves as powerless in the face of highly prescribed curricula, and they have
little enthusiasm for creative developments that are unlikely to have an impact on
‘official’ ways of thinking. For example, it has been argued by a group of teachers
(Donald et al., 1995) that in Scotland the climate of performance indicators, rigid
contractual arrangements and hierarchical accountability did nothing to encourage
them to extend their role, become socially critical and develop the professional
commitment that action research requires. Indeed, this group reported that, despite
their belief that their work had made an impact on the thinking of local colleagues,
they did not want to continue with action research initiatives and the attendant
pressures. One way of looking at this is to suggest that the obvious ‘costs’ to the
teachers of engaging in action research have to be balanced against the ‘rewards’ it
purports to bring of generating personal or local knowledge about teaching, and new
levels of self-awareness and understanding.
As a result of the need to generate motivation among teachers to engage with the
ideas of action research, and the requirement to provide a context where resources
are available for support from professional researchers, much of the action research
carried out has been in the context of individual teachers following Masters
programmes. Qualifications have provided incentives, but the collaboration with
teacher colleagues, which many have seen as a crucial element of action research, has
tended to be undermined. The outcomes of the teacher’s action research turn out to
be designed to please the examiners rather than to engage the practical interest of
other teachers.
Attention has been drawn to the question of whether the quality and value of
action research have been adequately evidenced for funding bodies to allocate scarce resources to its further development. Some concerns have been expressed, even by
those with well known involvement in the field.
Though the theoretical rationales for teacher research have been well argued by its
academic advocates … the claims they support have not been subjected to systematic
study. Adelman (1989) has argued that there is a profound discrepancy between such
theoretical rationales for the ‘democratisation of educational research’ and the research
practices of teachers as evidenced by their ‘research reports’, most of which ‘appear to
be summaries of master’s dissertations’. (Elliott & Sarland, 1995, p. 372)

These authors document adverse criticism which has pointed to the uncertain
status of the knowledge claims made by the research, doubts about methods
used, questionable competence in research techniques and inadequate depth of
consideration of the research problems. The fact that as yet there is not enough high
quality evidence to rebuff accusations of this kind does not mean that action research
is not a valuable enterprise. But it does mean that a healthy scepticism has to be
maintained about (a) some of its wilder claims for the effective resolution of
educational problems and (b) any of the suggestions that its introduction is a simple
matter. It may be argued, of course, that it is inappropriate to seek hard evidence of
the kind that is usually demanded, and that the value of the action research is
personal to the individual or group whose practice is influenced by the findings, but
is transient and then moves on.
One of the proposals made by Hargreaves in his 1996 Teacher Training Agency
(TTA) lecture went further than arguing for teachers’ involvement in, and control
of, research. It suggested that a significant portion of available research funds should
be given directly to teachers to do research for themselves. This is a third area of
official initiative, in addition to RAE and TLRP, to be considered in this paper.
The TTA quickly set up a scheme that invited teachers to apply for grants to
undertake small-scale research on effective classroom practice. As Foster (1999)
commented,
these projects can be treated as a critical test of the idea that teachers can produce and
communicate the sort of knowledge about educational practice that critics claim
academic researchers have failed to create. (p. 381)

In his assessment of these projects, he found that most of the topics investigated
were highly relevant to classroom and school practice, but that there were substantial
doubts about the validity of many of the findings and the (often bold) causal
explanations offered. Although generally supportive of the value of teacher research,
Foster clearly had considerable reservations about its quality and the wisdom of
disseminating findings and ideas from work largely carried out without the necessary
critical scrutiny.
A second scheme, the Best Practice Research Scholarship programme (BPRS),
was later initiated for a limited period in England by the Department for Education
and Skills (DfES) and evaluated by Furlong and Salisbury (2005). These
scholarships were awarded to serving classroom teachers to engage in supported,
school-focused research as part of their continuing professional development. The
evaluation was concerned with the quality of the research, but it took the view that
the programme’s primary purpose was to improve practice in the schools involved
rather than contribute to the public stock of knowledge.
The evaluators expressed enthusiasm about the impact of BPRS projects on
practice and morale.

There was evidence that they impacted on teachers’ learning—how they thought about
themselves as professionals and the role they themselves gave to evidence, research-
based procedures and wider reading in the development of their own practice. There
was also substantial evidence of impact on practice … all of the Scholars whom we met
used their projects in the development of their own day-to-day practice; in the majority
of cases there was also evidence that their projects had influenced the practice of other
teachers in their own schools and beyond … there was overwhelming enthusiasm for it
[the programme] even when there were minor criticisms about actual procedures … it
was indeed the Government saying to teachers that they were important and could
make a valuable contribution to the development of teaching and learning in their
schools and beyond. (Furlong & Salisbury, 2005, p. 79)

However, the evaluators were sceptical about some of the claims made by the
projects. They drew attention to continuing problems of research quality and the
criteria to be used in judging work of this kind as trustworthy and appropriate for
dissemination. If it is inappropriate to make use of the traditional research demands
for rigour, transparency, ethical process and building on extant knowledge, then by
what are they to be replaced? Furthermore, the concept of dissemination remains in
need of explication, even where there is confidence in the quality of research. There
is not just the question of how to bring the outcomes of individuals’ research to the
attention of teachers on a wide scale; there is still the matter of the findings being
intimately linked to contexts. The term ‘Best Practice’ in the scholarship scheme is a
blatant misnomer in the sense that there can never be a one-size-fits-all answer to
how improvements in teachers’ practice can be brought about.

Discussion
It is clear that over the last decade in the UK there has been a significant increase in
the importance attached to the kinds of educational research that have the potential
to influence classroom teaching. For some, the research–practice relationship
continues to be seen as essentially direct and instrumental, serving a 'what works?' demand for evidence-based practice. For most, however, the priority is the
promotion of greater understanding of practical matters through evidence-informed
processes that engage teachers with research. Those teachers so engaged, together
with researchers and many policy-makers, seem to agree that a stronger relationship
with research can make classroom practice more effective and satisfying, and
probably increase teachers’ willingness and capability to pay attention to the research
of others. What the rest, indeed the majority, of the population of teachers think is,
however, a moot point. Nevertheless, the experience of those involved in networks or
partnerships with researchers provides some optimism for the value of the
relationship in the future.
At one level, the developing concern for practice-based research has been reflected
in a change of emphasis in the priorities for accountability in higher education (the
RAE), and although some problems of making judgements remain, attention has
been given to the generation of criteria of quality that are appropriate for practice-
based research. The continuing difficulties arise from the need to ensure that both
the excellence of the research and its practical worth can be validly assessed in
combination. This remains a challenge not only for the education sub-panel for RAE 2008 but also for those submitting their research outputs for assessment.
A significant set of carrots, in the form of funding for practice-based research, has emerged through channels such as TLRP and teacher research scholarships (DfES, TTA). There is also an Applied Educational Research Scheme, funded by
the Scottish Executive and the Scottish Higher Education Funding Council, which
is just getting off the ground. In general, funding has enabled many continuing
studies on teaching and learning in different contexts, with teacher-researcher
networks and concerns about dissemination as major features of the enquiries.
Important priorities have been the moves to involve teachers much more closely in
the commissioning, planning and implementation of research. As might be expected,
the extent of change has been somewhat patchy and initiatives have ranged across a
spectrum from those entirely in the hands of teachers to those where researchers still
firmly look for their own ideas or findings to be introduced, more or less unmodified,
into classrooms. Although there is considerable evidence that the barriers between research and practice have been lowered, and that efforts have been made to design
enquiries producing trustworthy findings relevant to teachers’ practice, significant
difficulties remain.
One of the difficulties relates to the restrictive learning environments in which
teachers frequently still find themselves. Many are deterred from engaging in
continuing research activity because of the pressures they experience from
government or school policies that focus on specific targets, measurable learning
outcomes, narrow performance indicators, inflexible contractual arrangements,
hierarchical accountability and the pervasiveness of the idea of some ‘best practice’
that should be identified and then adopted by everyone. These restrictions are
further fuelled by official frameworks where simplistic expectations see practice as a
process of straightforwardly implementing evidence provided by research, teachers’
learning as the acquisition of (and compliance with) knowledge delivered to them,
and research findings as generalisable across contexts.
In these circumstances, what can be said about the conditions under which
teachers’ might be prepared to give their attention to research, be confident that
research would have something to say to make it worth their while to change their
practice, and be ready to spend their time in an enquiry mode rather than complying
with the other demands on them? Some would say that the necessary conditions are
that teachers should take control and ownership of the research on the assumption
that conducting their own enquiries will lead to more effective teaching and more
satisfaction for them. There is relatively long-standing experience of teachers’ action
research and, more recently, research scholarships for individual teachers have been
introduced. On the positive side, there is evidence that research of this kind is clearly relevant to classroom practice, promotes teachers' enthusiasm and often changes their practice. There are, of course, some costs attached to such
involvement, especially the demands it makes on teachers’ time. What is of greater
concern, however, is the lack of evidence to promote confidence in the quality of
much of this kind of research, the validity of its findings and the claims or causal
explanations that ensue from the studies. It can be argued that the emphasis is on the
value of the experience for the individual teacher and not on the accumulation of
trustworthy knowledge that can be disseminated more widely. However, a lack of
critical scrutiny of quality is hard to defend in any circumstances, and many teachers
have looked to collaboration with experienced researchers to improve their
enquiries.
So what can be said about research where there is such collaboration between
teachers and more experienced researchers? The evidence indicates how teachers’
involvement and confidence can be built in expansive learning environments
characterised by specialist support, collaborative working, and the development of
mutual trust. Unsurprisingly, teachers are drawn to learn from, or to generate,
evidence that is found in contexts with which they are familiar and that concerns innovations relating to their classroom priorities. Much more difficult has been
the task of gaining their commitment to innovatory approaches that they would not
have thought of themselves, but for which research evidence is available. Evidence-
based practice cannot be construed, therefore, as a quick fix, but has to aim to alter
the ways teachers think about their work and consider what the evidence means for
them in their own settings. This implies that the evidence, rather than being presented as sound research findings to be embedded in teaching, must make explicit the circumstances in which it arose, so that teachers can judge the extent of its applicability to their own context and make their own decisions about implementation.
One crucial element emerging from the strategies to engage teachers with research
evidence is the extent to which account is taken of how teachers already make sense
of their own classroom activity. This relates to an argument Donald McIntyre and I
made over a decade ago.
The preaching of well-meaning theorists … seems to have been greeted by an ‘all very
well in theory but impractical in my classroom’ response. But what would count as
practical? … [W]hether innovations will be seen as practical will depend on how they
relate to the things which teachers have learned (through experience) about what is and
what is not appropriate in their classrooms, and on the implicit skills and strategies
which they have learned for achieving their purposes within the conditions in which they
work… [P]roposed innovations frequently take little account of (and may require the
discontinuation of) classroom practices which are familiar, comfortable and, in the
teacher’s eyes, successful in achieving his or her purposes … To have any chance of
being perceived as practical, plans for innovation would have to take account of what is
already being done (particularly what is being done well) in classrooms … The
emphasis would be on understanding how teachers construe their teaching. (Brown &
McIntyre, 1993, p. 15)

One of the effective ways that account has been taken of teachers’ perspectives has
been the development, from research evidence, of materials, such as curriculum or
assessment support, that are of direct use in classrooms and sought after by teachers.
Another is to take up specific concerns such as teachers’ frustrations at the
narrowness of the concepts of learning and learning outcomes officially promoted by
government. The success of drawing attention to new and broader ways of thinking about this area (e.g. through metaphors) will depend on the
extent to which they help teachers to address their immediate classroom and
personal priorities. But new conceptions are not enough; meeting the practical need for sophisticated and convincing tools to assess both broader areas of learning and the effectiveness of new pedagogies may well hold the key to capturing teachers' attention.
There have been some good examples of cases where research has had an impact
on practice. Acceptance and trust of evidence is more apparent where teacher
colleagues are collectively involved, and because communicability is crucial if
evidence is to be shared, dissemination through teachers’ professional networks may
be a particularly important pathway. But there will continue to be great variation
among teachers in their readiness to facilitate the transformation of particular
research findings into the everyday knowledge that is needed in classrooms. While
some may welcome such evidence if generated in their immediate context, the task
of generalising such progress to other schools remains a much greater challenge.
Many among the wider professional audience have never been involved in research and often mistrust it for failing to provide the ready-made solutions to immediate practical problems that they expect. As teacher education, whether initial or continuing, pays much more attention than in the past to teachers' and student teachers' own enquiries, growing familiarity with research may reduce
scepticism. What is clear, however, is that a simple delivery model of research
evidence as a package of knowledge to be taken in and used by teachers could be a
great waste of time and money.

References
Adelman, C. (1989) The practical ethic takes priority over methodology, in: W. Carr (Ed.) Quality
in teaching (London, Falmer).
Brown, S. & McIntyre, D. (1993) Making sense of teaching (Buckingham, Open University Press).
Brown, S., Duffield, J. & Riddell, S. (1995) School effectiveness research: the policy makers’ tool
for school improvement?, European Educational Research Association Bulletin, 1(1), 6–15.
Clark, C. (2005) The structure of educational research, British Educational Research Journal, 31(3),
289–308.
Corey, S. (1953) Action research to improve school practice (New York, Teachers College Columbia
University).
Davies, P. (1999) What is evidence-based education? British Journal of Educational Studies, 47(2),
108–121.
Donald, P., Gosling, S., Hamilton, J., Hawkes, N., McKenzie, D. & Stronach, I. (1995) ‘No
problem here’: action research against racism in mainly white areas, British Educational
Research Journal, 21(3), 263–275.
Edwards, A. (2005) Let’s get beyond community and practice: the many meanings of learning by
participation, The Curriculum Journal, 16(1), 49–65.
Elliott, J. (1991) Action research for educational change (Buckingham, Open University Press).
Elliott, J. & Sarland, C. (1995) A study of ‘Teachers as Researchers’ in the context of
award-bearing courses and research degrees, British Educational Research Journal, 21(3),
371–385.
Entwistle, N. (2005) Learning outcomes and ways of thinking across contrasting disciplines and
settings in higher education, The Curriculum Journal, 16(1), 67–82.
Foster, P. (1999) 'Never mind the quality, feel the impact': a methodological assessment of teacher
research sponsored by the Teacher Training Agency, British Journal of Educational Studies,
47(4), 380–398.
Fuller, A. & Unwin, L. (2003) Learning as apprentices in the contemporary UK workplace:
creating and managing expansive and restrictive participation, Journal of Education and
Work, 16(4), 407–426.
Fuller, A. & Unwin, L. (2004) Expansive learning environments: integrating organizational and
personal development, in: H. Rainbird, A. Fuller & A. Munro (Eds) Workplace learning in
context (London, Routledge).
Furlong, J. & Oancea, A. (2005) Assessing quality in applied and practice-based educational research. Available online at: www.esrc.ac.uk (accessed October 2005).
Furlong, J. & Salisbury, J. (2005) Best practice research scholarships: an evaluation, Research
Papers in Education, 20(1), 45–83.
Hammersley, M. (1997) Educational research and teaching: a response to David Hargreaves’
TTA lecture, British Educational Research Journal, 23(2), 141–161.
Hargreaves, D. (1996) Teaching as a research-based profession: possibilities and
prospects, Teacher Training Agency Annual Lecture 1996 (London, Teacher Training
Agency).
Hargreaves, D. (1997) In defence of research for evidence-based teaching; a rejoinder to Martyn
Hammersley, British Educational Research Journal, 23(4), 405–419.
Hargreaves, D. (1999) The knowledge creating school, British Journal of Educational Studies, 47(2),
122–144.
Hillage, J., Pearson, R., Anderson, A. & Tamkin, P. (1998) Excellence in research in schools
(London, Department for Education and Employment).
Hodkinson, H. & Hodkinson, P. (2005) Improving schoolteachers’ workplace learning, Research
Papers in Education, 20(2), 109–131.
Hurry, J., Nunes, T., Bryant, P., Pretzlik, U., Parker, M., Curno, T. & Midgley, L. (2005)
Transforming research on morphology into teacher practice, Research Papers in Education,
20(2), 187–206.
James, D. (2005) Importance and impotence? Learning, outcomes and research in further
education, The Curriculum Journal, 16(1), 83–96.
James, M. (2005) Editorial, The Curriculum Journal, 16(1), 3–6.
James, M. & Brown, S. (2005) Grasping the TLRP nettle: preliminary analysis and some enduring
issues surrounding the improvement of learning outcomes, The Curriculum Journal, 16(1),
7–30.
Lomax, P. (1999) Working together for educative community through research, British
Educational Research Journal, 25(1), 5–21.
McGuinness, C. (2005) Behind the acquisition metaphor: conceptions of learning and learning
outcomes in TLRP school-based projects, The Curriculum Journal, 16(1), 31–47.
McIntyre, D., Pedder, D. & Rudduck, J. (2005) Pupil voice: comfortable and uncomfortable
learnings for teachers, Research Papers in Education, 20(2), 149–168.
Nisbet, J. (2005) What is educational research? Changing perspectives through the 20th century,
Research Papers in Education, 20(1), 25–44.
Oancea, A. (2004a) The distribution of educational research expertise—findings from the analysis
of RAE 2001 submissions: Part I, Research Intelligence, 87, 3–8.
Oancea, A. (2004b) The distribution of educational research expertise—findings from the analysis
of RAE 2001 submissions: Part II, Research Intelligence, 87, 3–9.
Ratcliffe, M., Bartholomew, H., Hames, V., Hind, A., Leach, J., Millar, R. & Osborne, J. (2005)
Evidence-based practice in science education: the researcher-user interface, Research Papers
in Education, 20(2), 169–186.
Reese, W. J. (1999) What history teaches about the impact of educational research on practice,
Review of Research in Education, 24, 1–19.
Reynolds, D. (1995) The effective school: an inaugural lecture, Evaluation and Research in
Education, 9(2), 57–75.
Rudduck, J. & Hopkins, D. (Eds) (1985) Research as a basis for teaching: readings from the work of
Lawrence Stenhouse (London, Heinemann Educational).
Sanderson, I. (2003) Is it 'what works' that matters? Evaluation and evidence-based policy making,
Research Papers in Education, 18(4), 331–345.
Sebba, J. (1999) Developing evidence-informed policy and practice in education, paper presented
to the British Educational Research Association Annual Conference, University of Sussex, 2–5
September 1999.
Sfard, A. (1998) On two metaphors for learning and the dangers of choosing just one, Educational
Researcher, 27(2), 4–13.
Simons, H. (2003) Evidence-based practice: panacea or over promise? Research Papers in
Education, 18(4), 303–311.
Simons, H., Kushner, S., Jones, K. & James, D. (2003) From evidence-based practice to practice-
based evidence: the idea of situated generalisation, Research Papers in Education, 18(4),
347–364.
Somekh, B. (1995) The contribution of action research to development in social endeavours: a
position paper on action research methodology, British Educational Research Journal, 21(3),
339–356.
Taylor, C. (2002) The RCBN Consultation exercise: stakeholder report. Occasional Paper Series,
Paper 50 (ESRC Research Capacity Building Network, Cardiff University).
Threadgold, M. (2005) A school practitioner’s response to the discussions of the Learning
Outcomes Theme Group, The Curriculum Journal, 16(1), 97–99.
Tooley, J. & Darby, D. (1998) Educational research: a critique. A survey of educational research
(London, Office for Standards in Education).
UK Funding Councils (1999) Research Assessment Exercise 2001: assessment panels’ criteria and
working methods. Ref RAE 5/99. Available online at: http://www.hero/rae/pubs/5_99
(accessed October 2005).
UK Funding Councils (2003) Review of Research Assessment, Report by Sir Gareth Roberts to the UK
funding bodies. Available online at: http://www.ra_review.ac.uk/reports/roberts.asp
UK Funding Councils (2005a) Guidance to Panels. Ref RAE 01/2005. Available online at: http://
www.rae.ac.uk/pubs/2005/01/
UK Funding Councils (2005b) Consultation on assessment panels’ draft criteria and working methods.
Ref RAE 04/2005. Available online at: http://ww.rae.ac.uk/pubs/2005/04/ (accessed October
2005).
