This week we look at some of the ways to collect and interpret information about pupils’ achievements
SENCO Week Help Sheet 11 - Behaviour Diary (PDF)
Support for SENCOs
You will be familiar with a wide range of measures used to assess pupils and determine whether or not they are making ‘appropriate’ progress. This process is essential for planning how to meet pupils’ learning needs and for evaluating any intervention that has been used. For small group interventions, look at the ‘scores’ for the whole group, as well as for individuals, over a term or year; use a graph or bar chart to illustrate clearly the progress made. If there is no significant upward trend, you need to review the effectiveness of the intervention.
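As a minimal sketch of that kind of chart (the pupils, scores and term labels are invented for illustration, and matplotlib is just one possible tool), a few lines of Python can plot each pupil’s scores alongside the group average:

```python
# A minimal sketch: plotting an intervention group's test scores term by term
# so that any upward trend (or the lack of one) is easy to see.
# The pupils, scores and term labels below are illustrative only.
import matplotlib.pyplot as plt

terms = ["Autumn", "Spring", "Summer"]
scores = {
    "Pupil A": [82, 86, 91],
    "Pupil B": [79, 80, 84],
    "Pupil C": [88, 87, 93],
}

# Plot each pupil's scores.
for pupil, pupil_scores in scores.items():
    plt.plot(terms, pupil_scores, marker="o", label=pupil)

# Add the group average so the overall trend stands out.
group_average = [sum(term_scores) / len(term_scores)
                 for term_scores in zip(*scores.values())]
plt.plot(terms, group_average, marker="s", linestyle="--",
         linewidth=2, label="Group average")

plt.ylabel("Standardised score")
plt.title("Intervention group: progress over the year")
plt.legend()
plt.show()
```

If the group-average line is flat or falling across the terms, that is the signal to look again at the intervention.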
Quantitative measures
Reading tests and spelling tests (phonics tests for younger pupils) are staples of a SENCO’s toolkit for demonstrating progress made by groups or individuals. Different types of test include word recognition, sentence completion/cloze exercises and passage reading. In most cases, a group test (with written or multiple-choice answers) is used to measure overall standards in a class or school and needs to be fairly quick and easy to administer, and straightforward to mark and score (often online in secondary schools); check on the availability of parallel scripts for retesting. These tests are usually ‘standardised’ and will produce a score, or a ‘reading age’ (RA), which can be compared with the pupil’s chronological age (CA); the Suffolk reading test is a popular choice. But see the cautionary note below.
An individual test is often used to give more accurate information about how a pupil is reading; this may include diagnostic elements such as running records and miscue analysis, which help the teacher to analyse mistakes and understand more about the reader.
Some schools still use ‘word recognition’ tests (eg Salford) because they are so quick and easy to administer, but these give very limited information about a reader. Passage-reading tests reflect more accurately what a pupil knows about the reading process and how s/he can use the various ‘cues’ to work out unfamiliar words (phonics, grammar, meaning, pictures, familiarity with the context, characters etc). Tests that incorporate a comprehension element will enable you to assess this aspect as well. You may be surprised at how many children manage to read text quite convincingly without very much idea at all of ‘what it’s all about’.
Note of caution
Be careful when using test scores and reading ages, and sharing them with colleagues and parents. They can only ever be a rough indication of reading ability, and there is a ‘range’ of scores that is acceptable for any child. In practice, the scores in each age group overlap considerably with those in adjacent age groups, and the overlap widens as children get older.
Many reading tests provide tables that you can use to convert raw scores to ‘standardised scores’. The average standardised score is always 100, and the handbook will tell you what percentage of children get scores above or below 100. This is because standardised scores not only have a fixed average; they also have a known spread (the standard deviation). Sixty-eight per cent of children come within one standard deviation on either side of the average. Most tests have a standard deviation of 15 points, so 68% of children get standardised scores between 85 and 115. We can think of this range as the ‘normal range’ for any group of children. So a child who gets a standardised score of 90 is within the normal range for his or her age group.
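To make that arithmetic concrete, here is a small Python check of the figures quoted above, using the mean of 100 and standard deviation of 15 from the paragraph (scipy is assumed to be available for the normal-distribution calculation):

```python
# Checking the figures above: standardised scores have a mean of 100
# and, for most tests, a standard deviation of 15.
from scipy.stats import norm

mean, sd = 100, 15

# Proportion of children expected to score within one standard deviation
# of the average, ie between 85 and 115.
within_one_sd = norm.cdf(mean + sd, mean, sd) - norm.cdf(mean - sd, mean, sd)
print(f"Within 85-115: {within_one_sd:.1%}")   # roughly 68%

# A standardised score of 90 sits inside that 'normal range'.
score = 90
print("Within normal range:", mean - sd <= score <= mean + sd)  # True
```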
This type of testing is useful for evaluating the effectiveness of interventions and considering whether they should be continued or changed, but screening information (eg the Dyslexia Screener) can also be valuable for individuals, showing progress over time.
Writing can be assessed by NC levelling, but for pupils just beginning to write, ask them to write as many words as they can on a blank sheet of paper (you can allow a set amount of time, say five to ten minutes). This gives a good indication of the number of words in their writing vocabulary and how many they can spell correctly, and is easy to ‘score’; it also demonstrates how a child is using phonics, and provides information about letter formation, spacing etc.
Progress in terms of behaviour can be more difficult to define and measure, but attendance graphs, punctuality records and behaviour records that contain some sort of measurable data (number of incidents per day/week, number of credits etc) can all be useful in assessing progress. (See this Helpsheet for an example of a behaviour diary.)
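As an illustration only (the incident dates below are invented; in practice the entries would come from a behaviour diary like the one on the Helpsheet), a short Python snippet can turn dated incident entries into a weekly count that is easy to graph or compare over time:

```python
# A minimal sketch of turning dated behaviour-diary entries into
# measurable data: incidents recorded per week. Dates are illustrative only.
from collections import Counter
from datetime import date

incident_dates = [
    date(2008, 5, 5), date(2008, 5, 6), date(2008, 5, 6),
    date(2008, 5, 14), date(2008, 5, 22),
]

# Group by ISO week number to get a weekly incident count.
incidents_per_week = Counter(d.isocalendar()[1] for d in incident_dates)

for week, count in sorted(incidents_per_week.items()):
    print(f"Week {week}: {count} incident(s)")
```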
P level descriptors should be in use in all schools now and are effective in reporting even small steps of progress made by pupils with significant difficulties (use ‘best fit’ judgements).
Qualitative measures
These include:
- examples of pupils’ work
- records of achievement
- teacher/parent/pupil voices (possibly responses to questionnaires)
- case studies (of individual pupils and/or specific projects).
These qualitative measures can be valuable in demonstrating progress to parents and governors, as well as in persuading senior managers of the value of a particular intervention.
Remember that IEPs can be useful in providing information about targets met throughout the year (though these are often not brought together as a complete list). Collect together a list of objectives achieved, possibly including some whole-class targets as well as individual ones.
This e-bulletin issue was first published in June 2008
About the author: Linda Evans is the author of SENCO Week. She was a teacher/SENCO/adviser/inspector before joining the publishing world. She now works as a freelance writer, editor and part-time college tutor.