During these strange times, with so few aircraft in the skies, is it time for schools to review their use of another airborne analogy?
When the first National Curriculum came out in 1989, levels were used as guidelines for teachers to determine what a student had learned and to create some sort of progression of subject knowledge and skills. Combined with the introduction of School Performance Tables in 1992 (a high-stakes metric), school management quickly began to use these levels as a way of measuring progress and hence, as Mark McCourt states in this blog, ‘the broad and general pathway through a subject became an ill-informed and utterly ridiculous statement of learning that simply ignores the way in which learning happens.’
Even the abolition of National Curriculum Levels in 2013 did little to change this. Instead of using Levels, schools simply converted them to GCSE grades, and with them came the use of flightpaths (see a typical example below).
In turn, this meant every assessment had to be related to GCSE grades, and this is a highly complex problem to solve. How can we use an in-class assessment, say an end-of-term test with a very narrow set of assessment criteria, to create a GCSE grade? Students will have less to revise and so will demonstrate only a fraction of their knowledge. Also, and this is something many teachers do not know enough about, grade boundaries change every year to account for variations in the difficulty of the paper.
This is not to say that we do not want to know if a student is on track, but attempting to convert test scores to GCSE grades is trying to do the impossible, and in doing so we create meaningless and misleading data. A great analogy can be read on Matthew Benyohai’s blog here.
If we accept the need for tracking students’ attainment, then what do we want to achieve with a tracking system?
Tom Sherrington (@Teacherhead) suggests some starting points in this blog. Across his post, Matthew’s article and Mark Enser’s fantastic book, ‘Teach Like Nobody’s Watching’ (978-1785833991), the common thread is some form of ranking of students based on school or national data; this is what happens at KS2, GCSE and A-Level anyway.
Why is ranking students, in some form, better than using GCSE grades?
Firstly, many of us will have had the conversation with parents of a Year 7 or Year 8 student that goes something like this: ‘Caleb is currently working at Grade 2b, making progress from a 2c last term, and if he makes expected progress this should mean he is on track for a GCSE Grade 5 in Year 11.’ After spending ten minutes explaining the nonsensical difference between a 2c and a 2b, the parent will often look at you glassy-eyed, as though you are speaking a foreign language, and hear only the ‘GCSE Grade 5’. They will have switched off from your thoughtful comments about the quality of work, or which topics he succeeded in last term. But what if Caleb was progressing towards a GCSE Grade 3? Just that change in predicted grade changes the whole conversation. Suddenly, the parent can become quite anxious that their child is being predicted to FAIL!
If we want to foster a growth mindset, we need a system that does not conflate termly in-class assessments with an external examination and that can measure relative progress. Standardised assessments can offer this opportunity, although even these need to be used with caution. Rebecca Allen, in her blog series ‘Writing the Rules of the Grading Game’ (Parts i–iii), mentions many studies showing how ranking can provoke unhelpful responses from students; something Carol Dweck calls ‘learned helplessness’. As Allen states in part iii,
What matters is how the grading information:
- Changes their beliefs about their attainment
- Changes their beliefs about their ability to learn and get better
- Changes their desire to keep playing the competition of trying to be the best, or maintain their position, or avoid the bottom rung
Before we move on to a potential solution, let’s remember what standardised assessments are.
Standardised assessments are tests where the raw score is converted to a standardised score based on a nationally representative sample. 100 is the ‘average’ score.
The main companies offering these are Hodder Education (RS Assessment), NFER and GL Assessment, although the tests are only available for Mathematics, Science and Reading (English). With these assessments, students can be tracked against known criteria year-on-year, while progress can be measured as the change in a student’s relative position year-on-year. Alternatively, students could be placed into seven groups (see the diagram below for possible groups) and student progress tracked by grouping year-on-year. One word of caution should be mentioned here: the scores are obviously a snapshot of attainment and have a margin of error. Nearly always, a 90% confidence interval is given, meaning we can be 90% certain that the student’s true attainment score lies within this range.
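The grouping step is easy to automate. Here is a minimal sketch in Python that bands a standardised score into seven labelled groups; the band boundaries and labels are my own illustrative assumptions (roughly following a mean of 100), not any provider’s official bands, so check your test publisher’s documentation before using anything like this.

```python
# Illustrative band boundaries for a standardised score (mean 100).
# These edges are assumptions for the sketch, not official values.
BANDS = [
    (0,   69,  "1 (well below average)"),
    (70,  84,  "2 (below average)"),
    (85,  94,  "3 (low average)"),
    (95,  105, "4 (average)"),
    (106, 115, "5 (high average)"),
    (116, 130, "6 (above average)"),
    (131, 200, "7 (well above average)"),
]

def band(score: int) -> str:
    """Return the group label for a standardised score."""
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    raise ValueError(f"score out of range: {score}")
```

A score of 100 would land in group 4, the ‘average’ band, mirroring the seven-group diagram described above.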
By labelling the groups, say 1–7 or A–G, students have some idea of their ‘rank’ in the school and nationally, but cannot easily convert this to a GCSE grade. The focus for students becomes making progress rather than the end goal; moving up a group is good, as is improving their score beyond the 90% confidence range. For teachers and school leaders, data analysis becomes easier as scores (relative rankings) can be compared year-on-year. An example of the measures used for progress can be seen below.
Just by using these assessments well, schools can track attainment and progress effectively with a fairly simple spreadsheet.
This data demonstrates how Emma is making expected progress over the year and is in the ‘average’ group, while Sasha has also made expected progress, but has moved up to the ‘low average’ group.
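The progress logic sketched above (same group is expected progress; moving up or down, or a score change beyond the confidence range, is not) can be expressed in a few lines. The band edges, function names and the 5-point half-width of the confidence range are all illustrative assumptions for this sketch.

```python
def group_of(score: int) -> int:
    """Illustrative 7-band grouping of a standardised score (mean 100).
    Band edges are assumptions, not a publisher's official values."""
    boundaries = [70, 85, 95, 106, 116, 131]
    return 1 + sum(score >= b for b in boundaries)

def progress_flag(last_year: int, this_year: int, ci: int = 5) -> str:
    """Classify year-on-year progress from two standardised scores.
    `ci` approximates half the 90% confidence interval in score points."""
    if group_of(this_year) > group_of(last_year):
        return "above expected (moved up a group)"
    if group_of(this_year) < group_of(last_year):
        return "below expected (moved down a group)"
    if this_year - last_year > ci:
        return "above expected (score rose beyond the confidence range)"
    return "expected (same group)"
```

So a student moving from 94 to 100 would be flagged as moving up a group, while 100 to 101 sits comfortably within ‘expected’.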
Why is measuring ‘progress’ important?
As mentioned previously, one of the major issues with any ranking system is the response it provokes from the student. Using the rate of progress gives a student another level of feedback about the quality of their learning. Maintaining the same grouping tells a student they are making the same progress as everyone else, while moving up or down shows they are learning faster or slower than their peers and, in the case of underperformance, that something should be done about it. This is where a favourite spreadsheet tool of mine comes in: a pivot table can quickly identify ‘key’ students within the cohort, like the one below.
The great advantage of using a pivot table is that clicking on any relevant cell lists the names of the students in that group. For example, if there were a group of students not making expected progress, they could easily be identified and appropriate intervention provided for them.
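For anyone who prefers code to spreadsheets, the same pivot-table workflow can be reproduced with pandas. The column names and the four sample students here are invented for illustration only.

```python
import pandas as pd

# Invented sample data: each student's group last year and this year.
df = pd.DataFrame({
    "name":       ["Emma", "Sasha", "Caleb", "Priya"],
    "last_group": [4, 2, 5, 3],
    "this_group": [4, 3, 4, 3],
})
df["movement"] = df["this_group"] - df["last_group"]

# The pivot-table summary: how many students per movement value.
summary = df.pivot_table(index="movement", values="name", aggfunc="count")

# The equivalent of clicking a cell: list students who moved down a group.
falling = df.loc[df["movement"] < 0, "name"].tolist()
```

Here `falling` picks out Caleb, the one student whose group dropped, which is exactly the intervention list the pivot-table click would produce.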
As mentioned previously, this can be done for Mathematics, Science and English. So, for this to be effective across the curriculum, ‘every department needs an external reference mechanism to gauge standards: establish it and use it to evaluate standards at KS3.’ (from Teacherhead). Even comparative judgement of work (for example, No More Marking) can be used to rank students and place them into the appropriate groups.
A final word to my primary school colleagues: although my discussion uses Key Stages 3 and 4, it is also very relevant for Key Stages 1 and 2 in Mathematics and Reading; all the companies mentioned above have tests available for these subjects, and the same processes of tracking attainment and progress can be used. Likewise, with a comparative marking system like No More Marking, the same process of ranking students into groups can be followed.
I have now created attainment trackers for standardised assessments here. They are free to use, and please contact me if you spot any errors or have changes you think I should make.