While measuring school performance is important, it is vital that the data used is properly understood and that the broader picture is not lost, says headteacher Peter Kent

Let’s face it, size matters. Whether or not we believe that this is true for all walks of life, we certainly have to admit that it applies in education. We currently seem obsessed with the process of measuring things: exam results, value-added, parental satisfaction, floor space, carbon emissions. I sometimes think that I could spend all day, every day, wandering around my school finding things to measure. In many ways this would be an attractive prospect. Instead of trying to help the anxious parent who is waiting to speak to me as I walk in on a Monday morning, I could simply ask her to fill in a 10-minute survey. Then I could tell her how well her satisfaction levels compare with a randomly selected group of carers from across the region. This would be totally useless to her, of course, but rather good evidence for a self-evaluation form.

However, even the process of ceaseless measurement is not as straightforward as it first seems. The problem is that many educational measures are the subject of profound disagreement. Take the example of value-added. Everyone agrees that it should be possible to measure the value that schools add to their students’ education, but how do you do it? As well as the plethora of commercially available measures (MidYIS, CATs, SIMS, ALPS) and those provided by local education authorities (normally based around the Fischer Family Trust data), we now have an ‘official’ DCSF measure of value-added called CVA, or contextual value added. Having produced systems to measure value-added at Key Stages 2, 3 and 4, the DCSF are now in the final stages of piloting a system to measure post-16 CVA.
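To see why even the basic idea is contested, it helps to spell out the simplest possible version of a value-added calculation: compare each pupil’s actual result with the result predicted from their prior attainment, and average the difference across the school. What follows is a deliberately crude sketch of that idea, with invented pupils and an invented prediction line; it is not the DCSF’s CVA methodology, which adjusts predictions for contextual factors using a far more sophisticated statistical model.

```python
# Toy illustration of a simple value-added calculation.
# NOTE: the pupils and the prediction coefficients below are invented
# for illustration. The real CVA measure also adjusts for contextual
# factors (deprivation, mobility and so on); this sketch does not.

# Each pupil: (prior-attainment score, actual outcome score).
pupils = [
    (100, 52), (95, 48), (110, 60), (88, 47), (102, 55),
]

def predicted_outcome(prior):
    """Hypothetical national prediction line: the outcome expected
    from prior attainment alone (coefficients are made up)."""
    return 0.5 * prior + 1.0

# Value-added = actual outcome minus predicted outcome, averaged.
residuals = [actual - predicted_outcome(prior) for prior, actual in pupils]
school_va = sum(residuals) / len(residuals)

print(f"School value-added score: {school_va:+.2f}")
# → School value-added score: +1.90
# Positive: pupils did better, on average, than similar pupils
# nationally; negative: worse.
```

Every disputed design choice — which prior measures to use, whether and how to adjust for context, how to treat untypical pupils — changes the prediction line, and with it the school’s score. That is why the commercial measures, the local authority measures and the official CVA figure can all disagree about the same school.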

A partial measurement

These new developments should be good news. At last, an official system that I can use to measure things (thus carrying out my key responsibility as a headteacher). The problem is that even this new system provides only a partial measurement of progress. If you have a school full of ‘typical’ students it works pretty well. The problem is that many of us have schools full of untypical students who progress in distinctly untypical ways. While producing a guide to CVA for the Association of School and College Leaders, I had the opportunity to spend time with the officials at the DCSF who developed the system. I had imagined them to be remote civil servants who had little idea of the reality of life in schools. In fact I found the department to be approachable, down to earth and extremely open to constructive feedback. They were only too aware that no system of measurement is perfect and that CVA should be used alongside a series of other indicators.

Inaccurate conclusions

So that’s good news then – a system of measurement developed by people with common sense, who recognise its limitations. Sadly, no. The problem with our current obsession with measurement is that the people who make use of statistical data have a much weaker understanding of it than those who develop the mechanisms that produce it. Hence, over recent years inspection teams from both Ofsted and local authorities have used data that they have only partially understood to produce completely inaccurate conclusions. We have all laughed at the story of the government minister who declared that he ‘would not rest until every school in the country was above average’, but the smile would quickly disappear from our faces if he was leading an inspection in our schools.

Over recent years many people in key educational positions have shown a similar inability to understand educational data. In the early days of CVA, rogue Ofsted teams insisted that a school’s inspection grade must depend solely on its CVA score. When one headteacher pointed out that the high lesson observation grades awarded by inspectors contradicted her school’s lower CVA score, she was told that ‘we must have got the observation grades wrong.’

While some have shown their lack of understanding by placing too much emphasis upon CVA, others have demonstrated their ignorance by ignoring it completely. There has been more than one instance in which local authorities, acting from an agenda of their own, have used a randomly selected piece of data (perhaps an isolated key stage result or one table from a 30-page Fischer Family Trust report) to decide that a school is ‘causing concern’. On occasions these judgements have been endorsed by school improvement partners who have lacked the confidence to publicly disagree with the local authorities who employ them. However, if anyone had taken the time to check the CVA measure, they would have found that the school was actually adding above-average value to the children it was educating.

Fresh anxiety

Pressure from teacher associations such as ASCL, and some patient explanation about how to interpret CVA by the DCSF, seems to have dealt with the worst cases cited above. However, the introduction of the new post-16 CVA measure is bound to create fresh anxiety about further misinterpretation by inspection teams who have latched on to another partially understood piece of data that can be used to ‘measure’ schools. George Orwell’s nightmare vision of the future in 1984 was based around the misuse of figures. The book opens with the clocks striking 13 and ends with Big Brother assuring us that ‘2+2=5’. It is hard to escape the conclusion that in 2007 the forces of Big Brother are still around and that their sums have definitely not improved. Perhaps now, more than at any other time, we need to make the case that there is more to a school than a string of examination results or a barrage of value-added data. Such information obviously matters, but so does the culture of the school, its distinctive ethos and what it does to promote the personal development and happiness of every individual in its care.

These broader values make up the indefinable special something that makes parents, students and staff actually want to belong to the organisation. Whatever politicians or administrators might try to tell us, it is the size of these values, commonly agreed to be beyond any system of measurement, that really matters.