Management information systems (MIS) and the analysis of data can lead to significant improvements in teaching and learning – as long as curriculum managers analyse and apply the findings effectively when considering pupil achievement, says Anthony Powell

In many schools, the power of management information systems (MIS) is not properly exploited, in large part because the amount of data is so great. The purpose of information systems is to identify patterns and anomalies in pupil performance and enable schools to trace the causal links with the quality of education. This means schools need to identify what they want to monitor and evaluate, what information they need and how they are going to organise and interrogate the evidence base they have created.

It takes time to set up effective systems and this is one of the main reasons why there is a wide variation in practice in using MIS to improve teaching and learning (T&L). There are audit tools to help you with this – see the box below for one example. To achieve maximum benefit from using MIS, schools need to spend time on the early stages of development to lay the foundations essential for evaluating all aspects of the school’s work in future.

Improve: audit tool

Improve: data for school improvement is a free audit instrument for schools, organised into four domains and drawing on Bloom’s taxonomy (see Taxonomy of Educational Objectives: The Classification of Educational Goals, McKay, 1956):

  • climate for learning
  • understanding pupils
  • school strategy
  • managing resources.

This detailed software package aims to show schools the stages of development towards using management information systems efficiently and effectively. Schools can register to use Improve at: www.dataforimprovement.co.uk

Uses and misuses

It is a capital mistake to theorise before one has data. (‘A Scandal in Bohemia’, in The Adventures of Sherlock Holmes, by Sir Arthur Conan Doyle)

‘Data’ here means any information, but Holmes pioneered in literature the methodical use of evidence to reconstruct the chain of events. In education, the starting point for measuring impact (before theorising) is to analyse standards and progress.

The use of MIS has developed in response to need. The level of use also depends on expertise within the school and its stance towards sharing information. Common stages, from simplest to most sophisticated, are set out in the box below.

Stages of MIS use

Each strand below develops from minimal use (first bullet) to more extensive use (last bullet).

Storage
  • At this stage, little use is made of data
  • The school has systems in place for storing its own data
  • External data (such as RAISEonline and LA data) is stored effectively

Access
  • Access is limited to a data manager or a few key staff
  • Middle leaders are granted access
  • All staff can access data relevant to their responsibilities
  • All staff can contribute to the database

Analysis
  • There is little analysis of data and it is provided in an undifferentiated form
  • Analysis is limited to the data manager or a few key staff and only the results are distributed
  • Staff with responsibilities are expected to analyse datasets and use this for monitoring and evaluation
  • Analyses from across the school are collected by senior managers

Synthesis
  • Different datasets are consulted and patterns are identified by serendipity
  • Datasets are cross-referenced to establish links between aspects of the school’s work
  • Areas for investigation are identified and evidence is collected from a range of datasets

Evaluation
  • Data is used to evaluate single aspects of the school’s work such as teaching
  • Data is used to evaluate impact on standards and achievement
  • Data is used to identify causal links between impact and provision and the contribution(s) made by different aspects of the school’s work

Action
  • Evaluation feeds into school improvement planning

The main data sets used by schools are RAISEonline and Fischer Family Trust. Any analysis of data will show that different pupils and groups of pupils make different rates of progress across the curriculum. To identify patterns, schools should analyse results using the questions in the box below.

Identifying patterns in performance: questions to ask

Past performance – standards
‘Standards’ is synonymous with results or attainment. It is a calculation rather than an evaluation, so remember to compare like with like and to take national differences into account. Standards are compared using levels, grades and average points scores (APS), with APS giving the best overall comparison. At KS4, the best way to compare subjects is to use relative performance indicators (RPIs). RPIs compare the points scores of pupils taking a subject against the average score for their attainment in all their other subjects, and also take into account the national differences between subjects. If the RPI score is significantly positive for a subject, this is strong evidence that pupils made better progress in that subject (a simplified worked example follows the questions below). For each key stage, ask yourself the following questions:

  • How have standards compared with national results and those for similar schools?
  • Were there any marked differences in the standards attained:
    • in different subjects?
    • by different groups of pupils?
    • by different groups within different subjects?
  • Are there any clear trends in standards?
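
To make the RPI idea concrete, here is a minimal sketch. It assumes a simplified definition – a pupil’s points in the subject minus their mean points in all other subjects, averaged across the cohort and adjusted by a national figure – and all names and numbers are invented for illustration; this is not the official RAISEonline calculation.

```python
# Simplified RPI illustration (a sketch, not the official calculation).
# Assumption: a pupil's 'relative performance' in a subject is their points
# score in it minus their mean score in all other subjects; the school RPI
# is the cohort mean of these differences, less a national figure.

def rpi(results, subject, national_difference=0.0):
    """results maps pupil -> {subject name: points score}."""
    diffs = []
    for scores in results.values():
        if subject not in scores or len(scores) < 2:
            continue  # pupil did not take the subject, or took nothing else
        others = [pts for name, pts in scores.items() if name != subject]
        diffs.append(scores[subject] - sum(others) / len(others))
    return sum(diffs) / len(diffs) - national_difference

# Hypothetical cohort: history points sit above each pupil's other subjects.
cohort = {
    "pupil_a": {"history": 46, "english": 40, "maths": 40},
    "pupil_b": {"history": 52, "english": 46, "maths": 46},
}
print(rpi(cohort, "history", national_difference=1.0))  # 5.0 -> strongly positive
```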

Past performance – progress
‘Progress’ is about the distance travelled by pupils on their educational journey. Pupil progress is measured using two types of value-added score: achievement and attainment tables (AAT) value added calculates progress across the key stage or stages using attainment scores only; contextual value added (CVA) modifies this by taking into account a range of pupil characteristics. Although RAISEonline shows CVA scores for pupil groups along one line, this is actually a merged set of median lines for different pupil groups. So, for example, the CVA score for less able boys compares the school group with this group nationally.
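
A minimal sketch of the value-added idea follows, assuming a hypothetical national ‘median line’ mapping prior attainment to an expected end-of-key-stage score. All figures are invented; real AAT value added uses much finer-grained lines, and CVA adjusts further for pupil characteristics.

```python
# Simplified value-added illustration. Assumption: a hypothetical national
# 'median line' maps prior attainment (APS) to an expected end-of-key-stage
# APS; value added is the mean of (actual - expected) across the cohort.

MEDIAN_LINE = {21: 33.0, 24: 36.0, 27: 39.5}  # prior APS -> expected APS (made up)

def value_added(pupils):
    """pupils is a list of (prior APS, actual APS) pairs."""
    residuals = [actual - MEDIAN_LINE[prior] for prior, actual in pupils]
    return sum(residuals) / len(residuals)

cohort = [(21, 34.0), (24, 35.0), (27, 41.0)]
print(value_added(cohort))  # 0.5 -> slightly better progress than the median
```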

It is worth drilling down beyond the headline results to make the next stages more efficient. For example, in one school, history consistently has an RPI significantly above other subjects, due to:

  • consistently high attainment at the higher grades of A and A*
  • girls’ attainment being consistently above that of boys, although both are above the school average at A*–C.

In this case, the school should investigate the reasons for this high attainment by looking at the quality of teaching, assessment and target-setting.

When assessing progress, ask yourself:

  • How has overall progress compared with the national median?
  • Were there any marked differences in the score for different:
    • subjects?
    • groups?
    • pupil groups within the core subjects?
  • Were there any marked differences in progress in different key stages?
  • Did students achieve their (challenging) targets?

Current performance
Current performance can be evaluated using assessment records and by techniques such as observation and scrutiny of work. Ask yourself:

  • What progress are pupils making now (against their learning goals)?
  • Are there any marked differences:
    • in current progress between groups?
    • between subjects?
  • Are the patterns identified in the analysis of results being repeated?

Predicted performance
Where pupils do well in a subject (in comparison with others), it is often because there is a higher level of challenge. There are a number of ways to evaluate high expectations, but a good starting point is to compare the targets set in different subjects. For example, if allied subjects such as history and English have different overall targets for the end of Key Stage 3, this needs to be investigated. Ask yourself:

  • Are there any differences in the targets set for pupils?
  • Are pupils on track to achieve targets?

There are dangers in having such a wealth of information available. The most common are set out in the box below.

Dangers of data

Too much time processing data
Very frequent assessments and gathering of data may leave insufficient time to plan effective teaching and learning strategies. In turn, senior staff may spend so much time analysing the information that they have little time and energy to spare for reflection, visioning and development planning.

Analysis paralysis
Small differences in performance are identified and given undue importance. Some schools also run many types of analysis on the same data, looking for causes outside their control that explain away weaknesses. The result is that they fail to take any action.

Focusing on small differences
Data sets such as RAISEonline give what seem to be very precise data, for example the school’s percentile rank. This leads some schools to invest importance in very small differences, such as the difference between the 45th percentile and the 55th percentile. In fact, there is little difference because of bunching at the median.
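
A quick illustration of why: assuming school scores are roughly normally distributed (the mean and spread below are invented), the scores sitting at the 45th and 55th percentiles are only about a quarter of a standard deviation apart.

```python
# Hypothetical illustration of bunching at the median. If school scores are
# roughly normal (here mean 100, sd 10 -- made-up figures), the scores at
# the 45th and 55th percentiles differ by only about 2.5 points.
from statistics import NormalDist

scores = NormalDist(mu=100, sigma=10)
p45, p55 = scores.inv_cdf(0.45), scores.inv_cdf(0.55)
print(round(p45, 1), round(p55, 1), round(p55 - p45, 1))  # 98.7 101.3 2.5
```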

Assessment is too frequent
Where staff are asked to provide data too frequently, assessments are not carefully linked to key learning objectives, so they fail to measure significant progress.

Lack of standardisation
Staff are asked to provide information and judgements but these are not standardised. The result is that data is unreliable, since what one teacher identifies as L4 may be L5 to another.

Analysis is driven by external factors such as league tables
When analysis is driven by external factors, the focus is then on a narrow set of data such as attainment at L5+ at the end of Key Stage 3 or five or more A*–C grades at the end of Key Stage 4. This excludes paying attention to the achievement, or non-achievement, of other groups of pupils.

Confusing data with knowledge
Much quantitative data consists of sets of figures, which only become meaningful when they are interpreted using other information.

Making invalid comparisons
Schools may compare attainment data (in other words, results) and conclude that standards are rising. In fact, different cohorts of pupils have different ability profiles, so rising results may simply reflect a more able intake rather than improved provision. This is why it is more important to compare value added, or progress.

Ignoring qualitative data
The distinction is often made between ‘hard’ (quantitative) and ‘soft’ (qualitative) data with the former being regarded as more accurate, valid and robust evidence, but in education the most valid evidence may be qualitative. For example, a school may gather attendance and achievement data as evidence that pupils ‘enjoy’ their education. Yet the students’ perceptions may be otherwise, so it is important to ask them.

Ignoring the present
All of the data provided to schools from external sources is about past performance. Often this is many months out of date. It is important to analyse this data for patterns and anomalies. However, it cannot be presumed that these patterns are persisting. The most important data is the school’s own assessments of current progress.

Many of these dangers can be avoided if schools think through their procedures and purposes for MIS. The questions in the box below provide a focus for discussion and a way to evaluate the effectiveness of your system.

Evaluating the use of data

Does the school have a clear purpose for using data?
The purpose should be driven by the overriding goal of improving pupil achievement – if there is no clear link between data analysis and pupil progress, the exercise should be questioned.

Is data analysis integrated into whole-school self-evaluation and school improvement planning?
Data will come on stream throughout the year. Analyses, for example of the RAISEonline report, should be located within the cycle of evaluation, target-setting and the determination of priorities in the school improvement plan. The regulations for performance management also require that the review cycle should be integrated with school improvement.

Is data reliable and standardised?
Some schools may be failing to administer assessments rigorously – school assessments must be standardised against national curriculum levels or external exams.

Does data give a baseline for measuring progress?
Pupil progress can only be measured from a starting point. Schools need to establish a baseline at the start of each key stage. If different measures are used, for example cognitive ability tests (CATs) and standard assessment tasks (SATs) results, these should be compared to give the most accurate picture.

Does the school use data to set targets and track progress?
Targets should be set at the start of the key stage, based on prior attainment and teacher assessments. These long-term targets should be broken down into years, terms and half-terms, where they become learning objectives. These should be displayed prominently and pupils made aware of them. Progress towards the targets should be carefully monitored and recorded on standard forms so that the teachers who receive them can interpret them consistently. Where pupils are not making the expected progress, action needs to be taken.
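
As a sketch of the kind of tracking this implies – pupil names, figures and the flagging rule below are all hypothetical – the idea is simply to compare each pupil’s latest assessment against the milestone implied by their long-term target and flag those falling behind:

```python
# Hypothetical progress-tracking sketch: compare each pupil's latest teacher
# assessment against the termly milestone derived from their end-of-key-stage
# target, and flag pupils falling behind so that action can be taken.
from dataclasses import dataclass

@dataclass
class Pupil:
    name: str
    milestone_aps: float  # points expected by this point in the year
    current_aps: float    # latest teacher assessment

def flag_for_action(pupils, tolerance=1.0):
    """Return the names of pupils more than `tolerance` points behind."""
    return [p.name for p in pupils if p.milestone_aps - p.current_aps > tolerance]

cohort = [Pupil("pupil_a", 33.0, 33.5), Pupil("pupil_b", 33.0, 31.0)]
print(flag_for_action(cohort))  # ['pupil_b'] -> intervention needed
```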

Does the school analyse the progress of groups of pupils?
RAISEonline records the attainment and CVA scores for all groups of pupils identified on the pupil level annual school census (PLASC) form. It is particularly important to analyse the progress of vulnerable groups and those from ethnic minorities.

Does use of data lead to action?
Data analysis should lead to changes in practice, from amendments to the curriculum and changes in pupil groupings to modification of targets and the introduction of different teaching and learning strategies.

Does the use of data have an impact on pupils’ achievement?
In the short term, the school can use teacher assessment to evaluate whether pupils are making better progress than they would have done if it had not taken action. In the medium and long term it should be able to point to a rising trend in improved achievement.

Is the school’s use of data cost-effective?
Staffing costs are the biggest element in any school budget. The amount of time devoted to analysing data should be estimated and compared against other costs such as planning and preparation. This cost needs to be justified by its contribution to raising achievement.
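
As a purely illustrative calculation (every figure hypothetical): if 40 teachers each spend one hour a week on data work for 39 school weeks at a notional cost of £30 per hour, that is 40 × 39 × £30 = £46,800 a year – a sum that needs to be justified by a measurable contribution to raising achievement.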

Blanket monitoring versus focused evaluation
All schools now regularly collect information about their provision and procedures (for example, observation of teaching) and about outcomes such as attendance, behaviour records and academic results. Other information, such as records of rewards and sanctions, is collected but often not stored systematically or collated. This can be called ‘blanket monitoring’: the school has a lot of ‘knowledge’ about itself but does little analysis, synthesis or evaluation of this information. Monitoring should not be dismissed as having no value, however; it often ensures that policies are implemented consistently, and this is one of its main purposes.

‘Focused evaluation’ starts with an area for investigation and a hypothesis. For this Case in Point, the area for investigation that we are focusing on is teaching and learning and the hypothesis is that it is variable across the school, which has an impact on pupil performance. Put bluntly, pupils make better progress in some subjects and classes than in others and the school wants to know why.

Schools will be familiar with this process from Ofsted inspections. With the changes in the Ofsted methodology introduced in September 2005 (see: Every Child Matters: framework for the inspection of schools in England from September 2005, Ofsted, 2005), inspectors now carry out a pre-inspection briefing (PIB). From the evidence available at that point, inspectors identify and provisionally agree what are expected to be strengths of the school and also issues for the inspection. These will largely derive from differences in performance.

Aims and success criteria
Monitoring and evaluation of teaching and learning consumes a great deal of staff time. Since this is the most valuable (and the most costly) resource in the school, it is important to ensure the purpose of the exercise is to raise student achievement. Pupil achievement is an effect, while any improvement strategy is a cause that will almost certainly take time to have an impact. It is essential to think through short-, medium- and long-term success criteria so that you can then measure progress at each stage, which in this case would be along the lines of:

  • short term: the school will have a clear understanding of the patterns of achievement across the school and how these are related to the quality of teaching and learning
  • medium term: the continuing professional development (CPD) programme will be successful in improving the professional skills and expertise of all staff
  • long term: there will be a long-term rising trend in achievement across the school.

Perspective
Schools often complain about Ofsted inspectors having preconceived ideas, usually based on an analysis of data. While school staff have a more rounded understanding based on a much greater evidence base, they can also be encumbered by too much knowledge: the danger is that some underlying assumptions are never questioned. For example, the senior leadership team may begin this evaluation with fixed judgements about the quality of teaching and individual teachers. Reality will be far more complex.

Schools should adopt a perspective that is objective, fair and based on robust evidence. Monitoring and evaluation should always be a team effort, seen as professional and supportive. It is useful to adopt the standpoint of someone who does not know the school but needs to understand it quickly.

Organise the evidence base
Evidence will be in many forms, but broadly, it will fall into two categories:

  • evidence about outputs (academic and personal development)
  • evidence about provision (the quality of teaching and learning, curriculum and care, guidance and support)

There will also be different types of evidence, for example qualitative and quantitative, and evidence will be generated from different sources such as data, interviews, observations and so on. Whatever the type of evidence, it must always be interpreted, cross-referenced and synthesised.

This process will generate written notes ranging from brief jottings, records, ideas and speculations to organised reports. The danger is that, as the process unfolds, the evidence base becomes scrappy and disorganised. Ofsted uses what is called an evidence form (EF) to record all the information gathered during an inspection. There are several advantages in using a common template for recording findings. For example:

  • it can contain background information such as the author, the date and time and the nature of the evidence, and record year groups, classes, numbers, ability and gender – information that is essential for making objective comparisons.
  • it allows all the evidence to be recorded in one place for ease of reference.

The Ofsted evidence form can easily be adapted for school use. It can be downloaded from the ‘forms and guidance’ section of the Ofsted website (follow the links through ‘schools’, ‘inspection resources’ and then ‘guidance for inspectors of schools’).

The framework for evaluating all aspects of a school’s work is now provided by the self-evaluation form (SEF). If a school files all its evidence against the SEF sections, then all the information generated by this exercise should be stored in Section 5a: the quality of teaching and learning.

Methodology
There is a logical link between good teaching and good learning, so the starting point is to evaluate pupil performance. Differences in achievement lead into an investigation of the quality of T&L using agreed criteria. This will involve a variety of techniques. While criteria for evaluating T&L are well known and used extensively by schools, techniques for gathering evidence tend to be limited, with too much emphasis placed on lesson observation. The main criteria are listed in the box below, with suggested questions and sources of evidence that are not normally consulted. All of this evidence would be supplemented by focused lesson observation.

Criteria for evaluating T&L

Each entry below gives an evaluation question, the assessment criteria to apply and the evidence to consult.
Do teachers have a secure knowledge and understanding of the subjects or areas they teach?
  • Are teachers well qualified in the subjects they teach?
  • Do they have a good knowledge and understanding of national strategies?
Evidence: staff files will show qualifications and experience, courses attended, and performance management objectives and reviews.
Do teachers set high expectations so as to challenge pupils and deepen knowledge and understanding?
  • Are objectives and activities challenging?
  • Does the range of activities meet the needs of all pupils?
  • Are pupils expected to work independently where appropriate?
  • Is the teacher giving too much support so as to remove challenge?
Evidence: results (particularly higher grades), planning, teacher assessment and scrutiny of work.
Do teachers plan effectively?
  • Does the school have a curriculum map showing time allocated for subjects?
  • Are there long-, medium- and short-term plans for all subjects?
  • Do plans have clear learning objectives?
  • Do plans also show pupil activities and resources required?
Evidence: consider schemes of work, pupil groupings, classroom organisation and use of support staff.
Do teachers employ methods and organisational strategies that match curricular objectives and the needs of all pupils?

Teaching and learning methods should be evaluated by their appropriateness since there are no ‘best’ strategies. However, across the school a range of strategies should be used to cater for different learning styles:

  • Teacher activity: during the course of the lesson, do teachers move through a range of activities such as explaining, narrating, demonstrating, instructing and questioning?

  • Pupil activity: during the course of the lesson, do pupils move through a range of activities such as listening, watching, discussing, reading, writing, experimenting and problem-solving?

  • Working pattern: do pupils work in groups, pairs, as individuals and as a whole class, as appropriate to the task?

  • Resources: is there a good range of quality resources, for example books, pictures, artefacts, equipment and ICT?

Evidence: the range of strategies will be shown in planning, pupils’ work, student tracking and staff development interviews. Drop-in observations (lasting about 10 minutes) can also be used to survey the strategies being used across the school.
Do teachers manage pupils well and achieve high standards of discipline?
  • Are classroom rules displayed prominently?
  • Are there clear routines that students understand and follow?
  • Is a high percentage of time spent on task?
  • Do pupils find it easy to approach the teacher for help?
  • Do teachers use praise and encouragement?
  • Do teachers respond quickly to potential problems?
Evidence: the use of rewards and sanctions, classroom rules and organisation, pupil referrals, parental responses and student interviews all reveal the nature of relationships across the school.
Do teachers use time and resources effectively?
  • Are lessons well structured to take into account the amount of time available?
  • Is the time for the task signalled to pupils?
  • Are students clear about:
    • what they have to do?
    • why they are doing it?
    • how long they have to finish?
    • how to judge their own success in the activity?
Evidence: planning, teacher assessment, pupils’ work, and the range and quality of resources used.
Do teachers assess pupils’ work thoroughly and constructively, and use assessments to inform teaching?
  • Is work regularly and consistently marked?
  • Are comments encouraging and constructive?
  • Do comments point out errors and give advice on how to improve?
  • Is teacher assessment accurate?
Evidence: look through assessment records, check the accuracy of assessment, planning and how books are marked.
Do teachers use homework effectively to reinforce and/or extend what is learned in school?
  • Are instructions for homework clear and understood by students?
  • Does the setting of homework take into account the resources to which pupils have access?
  • Does homework build on and extend work done in class?
Evidence: check against the homework timetable and consider whether homework simply replicates classwork.

At the end of this exercise, schools will have detailed knowledge of teaching methods and strategies across the curriculum. More importantly, they should understand the causal links between various methods and pupil achievement. The creative part is to design and implement a CPD programme that gives all teachers the professional expertise to use all the successful strategies.

The related case study explores how beneficial effective data analysis can be, showcasing one school that uses MIS to equip staff to take T&L to the next level.

Anthony Powell, consulting educationist specialising in self-evaluation, school improvement and CPD