School self-evaluation: What we are learning from other countries
John MacBeath
School self-evaluation is now seen as a matter of priority in most economically advanced
countries of the world. It flows from a shared concern for quality assurance and
effectiveness, fuelled by international comparisons which rank countries on a range of
common indicators. For governments who invest in OECD and UNESCO surveys, pupil
performance in what are seen as ‘key’ areas of skill and knowledge acquisition carries high
political stakes. This is the international policy context for self-evaluation, which is
driven by three primary ‘logics’:
1 An economic logic: the costs of training, administration, conduct and follow-up of
external evaluation are too high, and may not offer value-for-money
2 An accountability logic in which schools render an account to government and parents
in return for the investment and public trust placed in teachers and school leaders
3 A school improvement logic which holds that the process of reflection, dialogue and
concern for evidence is the motor of better schools
These three are not discrete in their expression, but any one can easily become the prevailing or
driving motive. When they do get out of balance, the quality of learning and teaching
suffers. Requiring schools to be ‘self-inspecting’ (that is, assuming the role of external
inspection) may have economic benefits but may divert attention and energy from the
core work of the classroom. Over-emphasis on accountability, as shown by evidence
from many countries of the world1, reveals an attrition of professional engagement and
vitality of teaching. School improvement, while the most compelling of the three ‘logics’,
will falter without accountability and attention to the attendant time and opportunity
costs.
1
Canada: Hargreaves, A. (2004) Educational Change over Time? The sustainability and non-sustainability of three decades of secondary school change and continuity, keynote address to the School Effectiveness and Improvement Conference, Rotterdam.
Barlow, M. and Robertson, J. R. (1994) Class Warfare, Toronto, Key Porter Books.
USA: Berliner, D. and Biddle, D. (1995) The Manufactured Crisis: Myths, Fraud and Attack on America’s Public Schools, Reading, Massachusetts, Perseus Books.
Labaree, E. R. (1998) How to Succeed in School Without Really Learning: The Credential Race in American Education, New Haven, Yale University Press.
House, E. (1999) Schools for Sale, Teachers College Press, Columbia, New York.
Australia: Townsend, T. (1999) Leading in Times of Rapid Change, paper delivered at the Annual Conference of Secondary Principals, Canberra, September.

There are important lessons to be learned from school self-evaluation around the world, as
countries strive to reconcile the three prevailing logics and to find a balance between
external and internal evaluation. Examining current models within European countries,
the Standing International Conference of Central and General Inspectors of Education
(SICI) identifies three typologies:
Proportional: in which inspection takes the school’s own data as its starting point. The
better the self-evaluation the less intensive the inspection. The Netherlands, Scotland,
Portugal, Flanders, the Czech Republic, Ireland and England are described as falling
under this rubric.
Ideal: in which the inspectorates report on the quality of self-evaluation and point to
improvements needed. Northern Ireland, Austria and France are in this category.
Supporting: in which the role of inspectors is to provide support for schools in carrying
out self-evaluation more effectively. Denmark and some German Lander fall into this
category.
Within these clusters of countries there are, however, distinctive differences in the nature
of the process, the high-stakes implications, support and pressure, the flexibility of
frameworks and criteria, the extent of dialogue and the involvement of teachers in
developing frameworks, criteria and procedures.
Ireland provides an example of a process which involved wide consultation with schools.
A pilot in 35 schools, published in 1999 (DES), described it as ‘consultative evolutionary
evaluation’ and ‘a developmental model to serve the trinitarian purposes of school
improvement, school development and school effectiveness.’ Great care was taken to
couch the process in cultural and contextual language and to involve teachers’ unions and
parent associations in amending the model. Illustrative of this was the eventual agreement
to describe the central indicator as quality of learning and teaching rather than teaching
and learning – an important shift in focus from what teachers do to how pupils learn. In
May 2003 Whole School Evaluation (WSE) was reincarnated as Looking at Our School,
An aid to Self Evaluation in Schools – a model in which external evaluation focuses
primarily on the schools’ own approach to self-evaluation. The model does, however,
have two central weaknesses. One is the lack of reference to pupils and parents as
participants in the process. The second is the lack of support at national level for schools
to use achievement evidence in either a formative or a comparative benchmarking sense.
In Scotland the process of development and refinement involving teachers and other
bodies has extended over a fourteen-year period. Critical to its widespread acceptance by
schools has been its progressive growth over that period, in which teachers have been
involved in piloting, designing instruments, modifying and slimming down the indicator
set, and above all making the approach less prescriptive and more customisable by
schools. While there are schools which use How Good is Our School? (HGIOS)
mechanistically and with little sense of ownership, there are other schools which see it as a
reference point, as a basis for dialogue among staff and pupils, and sometimes parents.
As part of our whole strategy for raising achievement, we decided to look at ‘How
Good is Our School?’ as a way of getting into the whole dialogue with staff and
pupils on the areas of an effective school we should be concerned with. We looked at
what hinders learning, what makes effective learning, what makes effective
teaching and we worked through that with our own teachers to begin with, then
decided it was time to talk to our pupils about it because we were only getting half
of the picture. We set up small focus groups in each year group. Now children,
youngsters were asked to join - they had the opportunity to volunteer to join so
approximately 50 pupils from each year group came along to these focus groups
that were led by staff. And they talked about things that hinder learning, things that
help learning, things that hinder teaching and help teaching - that was really
enlightening. They were then asked to feed that information back to their
registration class so, we had representatives from every class in the school.
(St. Kentigern’s High School quoted in MacBeath, forthcoming)
This illustrates self-evaluation being used imaginatively, formatively and with ownership
by teachers rather than as a formulaic exercise. A similar kind of ownership is
exemplified by this extract from an Austrian Primary school. The national context is one
in which inspection and self-evaluation are in a process of development but allow latitude
for schools to be imaginative and in control of the process.
The self-evaluation activities were focused directly on pupils and teachers. …
Highly individualised learning plans, including a range of lesson materials and
activities, were designed to meet individual pupil needs across the spectrum from those with
special educational needs to the very gifted. The acknowledgement of different
learning styles and pace was very much in evidence. More recently, the school has
focussed on the development of skills and learning strategies to equip pupils for
successful transfer and attendance at secondary school. In promoting self-directed
learning, the school had engaged in a range of activities. The QPR project provided
support for these activities. A set of quality indicators for pupils and teachers was
developed within the school for self-directed learning. A survey on homework and
pupil leisure behaviour was conducted and the outcomes presented at a parents’
meeting that was documented in photographs and video. This provided a point of
focus and development for school management, teachers and parents.
(Quoted in Antoine et al., 2001)
Implicit in this example are both purpose and audience. The school’s focus is on telling
its story about what matters, conveying it through a range of visual as well as written
media. With external support from a critical friend or a project team, as in the cases
above, schools have been both enthusiastic and imaginative in self-evaluation. Similar
examples can be found in English schools where there has been support either from local
authority critical friends or in the context of development projects.
While these examples from individual schools suggest a clear focus on improvement they
sit within countries where there is a greater or lesser emphasis on accountability. In
England OFSTED dropped its ‘improvement through inspection’ strapline as it moved
more candidly towards an accountability purpose. Yet with self-confidence and support at
individual school level, schools in England and in many other countries can and do pursue
an improvement agenda. When there is a sense of ownership and latitude for schools, they
may go down their own diverging paths in the knowledge that they can render an account
when asked, with due attention to evidence and ‘rigour’. This is explicit in some
national policies and appears to be what is implied in the new OFSTED framework,
although the proposed critical friend appointment, the negotiation of targets and the
sharper focus of inspection do not sit comfortably within that more flexible and
collaborative approach. What we have learned from fifteen years of work on self-evaluation is that schools need to believe that they have the space, the authority and the
goodwill to pursue an improvement agenda within an accountability framework.
The 101 schools from 18 countries who participated in the E.C. Project Evaluating
Quality in European Schools all worked within different accountability contexts from
Iceland in the north to Greece in the south. In all of these schools self-evaluation was
located at the improvement end of the spectrum. The Irish schools involved, like schools
in many of the other E.C. countries, still make reference to the European Project as much
preferable to other models, as containing the essential ingredients which made it
engaging and empowering2. This was also the view of the European Commission,
national policy makers in the 18 countries and the 101 schools, 98 of which asked to be
included in a further stage of the project. The success of that model is explained by seven
seminal qualities:
• The central involvement of key stakeholders (teachers, pupils and parents) in the process
• The identification of what matters most to teachers and school leaders in evaluating school quality and effectiveness
• The support and challenge of critical friends chosen by, or in consultation with, the schools
• The dialogue which flowed from the differing viewpoints and the press for supporting evidence
• The repertoire of tools for use by teachers
• The simplicity and accessibility of the framework
• The focus on learning and support for teaching
These principles seem to be continually observed in the breach, and important lessons are
there to be learned from North American legislation which lies at the accountability
end of the spectrum. The high-stakes No Child Left Behind policy under George Bush is a
salutary lesson as to the collateral damage that can follow from too single-minded a
pursuit of accountability3. With rigid and oppressive legislation the vitality that we have
seen in school-based self-evaluation is stifled. Trends in Canadian provinces in recent
years towards harder-edged accountability further illustrate the constraints which serve to
inhibit, rather than promote, school self-evaluation.
2
Quoted in SICI Report (2001, 2003) The Effective School Self-Evaluation Project, Standing International Conference of Central and General Inspectorates of Europe, Brussels, European Commission.
3
Ivins, M. and Dubose, L. (2004) Bushwhacked, London, Allison and Busby.
Traub, J. (2002) No Child Left Behind: success for some, New York Times Educational Supplement, November 10.
In British Columbia the School Accreditation Program was recently discontinued,i and
replaced in 2002 by the Accountability Framework, which includes school district
accountability contracts, school planning councils, a school district review process,
large-scale assessment and reports to account for system and institutional performance.ii
In Ontario the Education Quality and Accountability Office (EQAO) has a mandate to
ensure greater accountability and contribute to the enhancement of the quality of
education in Ontario ‘through assessments and reviews based on objective, reliable and
relevant information, and the timely public release of that information along with
recommendations for system improvement.’iii In Quebec the Ministère de l’Éducation du
Québec (MEQ) publishes Education Indicators, whose purpose is ‘to ensure
accountability by providing specific information on the resources allocated to education,
the various activities pursued by the education system and the results obtained.’iv The
annual publication of results on uniform Ministry examinations consists of raw data,v with
no adjustment for intake differences, socio-economic characteristics, or value-added measures.
Within this regime in Quebec a cluster of schools, with support from McGill University,
embarked on a project entitled Schools Speaking to Stakeholders. It pursued two aims:
first, to collect, analyse and disseminate information about each of the schools involved
in the project; and second, to create a flexible framework that any school could use to
communicate significant information about itself to its major stakeholders. In the first
phase of the project (18 months), each school completed a school case report and
communicated the results of the exercise to its stakeholders by means of a school profile.
The second phase (2 years) consisted of a loose coupling of new and old project schools
organised around a developing school performance framework which led to a guidebook
for use by schools more generally undertaking self-evaluation of their performance. This
process resulted in a “starter kit” – a first attempt at providing schools with the basics to
get started on a journey of discovery about school self-evaluation. More important than
any of the outputs produced by this project was the demonstration that school teams, with
the support of appropriate critical friends and other resources, could engage in a
meaningful way in the evaluation of their school and communicate the results to their
stakeholders in an accessible form. Although providing a vivid demonstration of the
energy and enthusiasm that can be engendered by bottom-up developments such as these,
the model was never generalised and was starved of resources and support from the
Ministry.
As the Canadian Schools Speaking to Stakeholders project demonstrates, accountability is
not anathema to teachers but is entered into positively when seen as an integral part of
school-based self-evaluation and professional development. This can be seen in the
response to government of the Canadian Teachers’ Federation (CTF), which recognises
accountability as a priority policy issue and sets out some of the criteria for a professional
model of accountability:

• Accountability involves accepting responsibility for actions, reporting on those actions and working to improve performance.
• Accountability must reflect the multiple goals of public education and the diverse nature of students, schools and communities.
• Educational accountability is a responsibility shared by all those involved in public education.
• Accountability should be focussed on supporting and enhancing student learning.
• Quality classroom-based assessment must be the central feature of educational accountability.
• Teachers are responsible for possessing a current subject and pedagogical knowledge base, using this knowledge base to make decisions in the best interest of students, explaining these decisions about student learning to parents and the public, and working to improve their practice.
• Parents have a right to clear, comprehensive and timely information about their child’s progress.
• The public has a right to know how well the system is achieving its goals.
As this shows, accountability may be construed in quite different terms from the top-down version, as shown in this example from Rhode Island’s School Accountability for
Learning and Teaching (SALT). It describes itself as ‘practitioner’ or ‘professional’
accountability because its focus is on teachers’ practice evaluated by teachers. The
review team is composed of practising Rhode Island teachers together with a parent, an
administrator and a member of university staff. The team spends four to five days in the
school and writes a report which is negotiated with the school, a process which can be
lengthy but is highly valued for teasing out evidence and the basis for the judgements made.
The team then draws up ‘a compact for learning’, the purposes of which are to ensure that
school staff have the capacity to implement improvement.
Practitioner accountability was also the rationale for the School Change through Inquiry
Project (SCIP) in Chicago. The external review process was designed to help schools
become reflective inquiring communities, and to remove the threat which accountability
may imply. An important facet of the review was to set it in a continuous context rather
than as a one-off snapshot at a given moment.
There are important lessons to be learned from the many school-based projects which
thrived within top-down accountability-led systems, and from those that have been stifled
and starved of support. The three key principles are set out in Gladwell’s book The
Tipping Point in which he studies the epidemiology of change. Evidence suggests that for
good ideas and breakthrough practice to survive there need to be:
1 The vital few: the innovative people with the vision, energy and enthusiasm to take an
idea forward
2 Stickability: people or structures which endorse the project and help it ‘stick’
3 Conditions for growth: a culture which promotes ideas and practices
It is a failure of governments, of administrations and of leadership when the vitality of
schools and teachers is lost and there is no ‘tipping point’ into generalisable practice.
Commenting on the perceived threat of professional accountability in many systems,
Antoine et al. (2001:27) conclude that “where teachers had received a high level of
professional training and were accorded respect and status as professionals, they were
more likely to be active participants in self-evaluation”. In simple terms, self-evaluation will
be an empty exercise without the commitment of teachers and school leaders.
Three aspects of stickability deserve attention.
1. The framework
There is a continuum in models of self-evaluation from open to closed, from the
invention of the wheel at one end to the detailed step-by-step cookbook at the other.
Where there is too great a degree of openness it is very difficult for schools to initiate and
sustain self-evaluation. While this is possible in highly self-confident and resilient
schools, the greater the pressure from outside, the less will be the time and energy
invested in invention. Where there is top-down pressure and time is a scarce commodity,
off-the-shelf products have considerable appeal. This is likely, however, to lead both to a
mechanistic approach and a disempowering of teachers. It is significant that when the
Scottish model HGIOS was adopted by an enthusiastic minister in Norway and translated
into Norwegian it was disliked and resisted by teachers. This is not surprising as they had
no part in its development, no engagement in the process but were simply presented with
a ready-made product.
A fundamental principle of self-evaluation, as noted by the European body of inspectors,
(SICI) is ‘steering oneself in order not to be steered’, schools taking the initiative rather
than being reactive to decisions taken elsewhere. Somewhere on the spectrum between
the set menu and the smorgasbord are the table d’hôte and à la carte models. These offer
frameworks within which there is a greater or lesser range of choice, direction or
guidance. With models to evaluate critically, exemplars to draw on and guidance as to
key principles schools can choose and adapt procedures which most closely match their
own context and stage of development.
2. The tools
Common to many self-evaluation systems is the four-point scale for self-rating against
specific indicators or criteria. These are often derived from inspection models and come
with labels attached such as ‘satisfactory’ or ‘good’. A different, less definitive form of
terminology is exemplified by a form used in Ireland, in Hong Kong and in previous
versions of Scotland’s How Good is Our School? In these schemata judgements are made
on the balance of evidence, leaning towards strengths (4), weaknesses (1), more strengths
than weaknesses (3) or more weaknesses than strengths (2). This is not simply a play on
words but rather a way of seeing and negotiating judgement.
As we have found in numerous self-evaluation projects, it is into this vast middle ground
of ambiguity that most judgements fall. The nature of those middle-ground judgements
depends to a great extent on ‘where you sit’, what and how much you see, the
preconceptions you bring to that judgement and the context in which you place it. An
inspector’s judgement of a lesson made in half an hour is often likely to differ from that
of a pupil, the classroom teacher or the headteacher. While such variance is obvious and
well documented, top-down evaluation schemata typically fail to recognise this, and by
bypassing differences miss the very meat of the process: the discourse which
mismatches of perception bring to light. This is what was so powerful in the European
Self-evaluation Project where teachers, pupils and parents together brought their own
perspectives to bear and, through the critical friend, were encouraged to listen to
alternative constructions of reality. This was the strength of McGill’s Schools Speaking to
Stakeholders and the teacher-teacher dialogue in the Rhode Island SALT programme.
Tools used by schools to stimulate dialogue and negotiate judgements have travelled
across national boundaries. For example, Force field analysis (brakes and accelerators)
and SWOT analysis (strengths, weaknesses, opportunities, threats) are used in the
Netherlands, in the Scottish example cited earlier, and in Hong Kong, where self-evaluation starts with teachers working in small groups to list major strengths and areas
for improvement. The purpose is to reach consensus through discussion and the press
for evidence. The compilation of these at whole-school level leads to the construction of
the development plan.
The Hong Kong toolbox also uses the double-sided questionnaire which, through the eyes
of its various stakeholders, compares the school as it is with the school as people would
like it to be. Used in Halton, Ontario, in the Scottish Improving School Effectiveness
Project, in the seven-country Leadership for Learning Project at the University of
Cambridge and in the Northwest Central Laboratories approach to self-evaluation in
the United States, this is a powerful instrument which helps schools to identify the critical
gaps between aspiration and practice and to plan accordingly.
3. The implementation
Schools conduct self-evaluation. But this is often interpreted to mean school management
and is typically a top-down process within a school. A radical bottom-up model is
for this to be carried out by school students. A Swedish school (described in Self-Evaluation in European Schools) illustrates just how imaginative and insightful students
can be when given the opportunity. The conduct of evaluation by students of the
Learning School4, now in its fifth year of operation, is further evidence that students
have a unique vantage point from which to evaluate learning, teaching, school
culture and leadership. Scotland’s Chief Inspector at the time wrote this:
4
Written up by students themselves in the book Self-evaluation in the Global Classroom, edited by
MacBeath and Sugimine, RoutledgeFalmer.
I was taken by the way the students had approached the task within schools. It was
clear to me that they had invested a considerable amount of time in gaining the
confidence of staff and, in particular, their fellow students. … Inspired by the
students my thoughts turned to – what price the Scottish Inspectorate trading in a
few inspectors for a student evaluator or two? The very next day I wrote a
memorandum to my senior colleagues suggesting that we should pilot the idea of
‘Student Assessors’ with a view to students becoming an integral part of our
inspection teams. While the Scottish Inspectorate has to take the credit for leading
the way in getting schools and inspectorates to gather the views of students, the
response to my initiative suggested that we were not quite ready to embrace ‘Student
Assessors’. I still have that memorandum and, like ‘The Learning School’, the day
of the ‘Student Assessor’ will come.
A less risky and more pragmatic approach is for the process to be undertaken by a team,
perhaps a SEG, a ‘self-evaluation group’ drawn from a range of volunteers within the
school. This may be a group of teachers or teachers and students, ideally also including
support staff and parents. Bill Smith, architect of the Canadian Schools Speaking to
Stakeholders project, suggests a seven-step process for the school team. The importance of
stages 1 to 3 is particularly emphasised, yet often bypassed in governments’ eagerness to
ensure no latitude for deviance or creativity in implementing pre-packaged materials.
1 Planning the Evaluation
Planning the evaluation provides a roadmap – not a blueprint – for every successive step
of the process. With the end in mind the school team determines the direction it wishes to
take, ensures conditions and resources for success and that there is sufficient capacity for
the school to undertake the process with attention to appropriate ethical standards.
2 Determining What Matters
Determining what matters consists of establishing the focus and content of self-evaluation, bearing in mind its purpose and context. By the end of this step the school
team will have selected performance themes grounded in the school’s context and
reflecting the importance given to these themes by different stakeholders. For each of
these themes appropriate outcomes and conditions of schooling will have been identified.
3 Measuring What Matters
Measuring what matters shifts from what to how, focusing on appropriate indicators and
the means to produce them. By the end of this step, the school team will have selected
appropriate criteria or indicators for each of the outcomes and conditions chosen, as well as
feasible means for collecting and analysing the data.
4 Collecting Data
Collecting data involves gathering the bits of information needed to produce the
indicators selected in the previous step. By the end of this step, the school team will have
identified and accessed appropriate sources of data, using appropriate quantitative and
qualitative methods of data collection.
5 Analysing Data
Analysing assessment data involves assembling and understanding the bits of information
gathered in the previous step. By the end of this step, the school team will have set
appropriate ground rules for analysing and interpreting data, and criteria for judging
success.
6 Reporting the Results
Reporting the results means organising and communicating the analysis completed in the
previous step. By the end of this step, the school team will have reviewed the process and
assembled all necessary pieces for reporting to various audiences and for varying
purposes.
7 Using the Results
Finally, using the results consists of acting on the findings which have been prepared for
communication in the previous step. By the end of this step, the school team will have
developed action plans based on the results of the assessment.
This is only one model and it is not suggested that all aspects of the work are to be
carried out by the team on its own. Rather, the team act as animateurs and, once a
framework, a baseline and planning are in place, the team may change its function.
Their new role becomes one of progressively updating as the school’s story is added to and
unfolds over time. So the school evaluation group (which may have a rolling membership)
assumes more the role of chroniclers and catalysts.
What international experience tells us is that if ‘self’ evaluation is truly to be school-driven, it requires ownership without becoming an oppressive extra. Its driving force and
continued momentum derive from its purpose as formative rather than summative, its
framework as supportive rather than oppressive, as ongoing rather than a one-off audit and
as centred on what really matters to teachers, pupils and parents.
i
School Accreditation Regulation, B.C. Reg. 256/94, rep. by B.C. Regulation 277.
ii
B.C. Education, 2001b, 2003, 2004.
iii
See EQAO web site: http://www.eqao.com/.
iv
MEQ, 2003a, p. 7.