The argument between these two styles of teaching has been raging on #Edutwitter for the last few years, and it has become very divisive: are you a ‘Discoverist’ or an ‘Instructionist’?
Well, let’s just think about our own experiences of learning for a while. If I were to attempt an A-Level Mathematics problem, I could answer it fairly quickly, as I have the knowledge and skills needed to answer the question. Whereas, if I were asked to deliberate on the political issues surrounding Hong Kong, I would struggle more, as I do not have the knowledge and skills in this area. I do, though, have enough knowledge to begin to understand the different issues on a superficial level, and can begin to make connections and create some understanding of the issues. Now, if I think of some PhD work on virology (sorry, got to bring the pandemic in somewhere!), I would be totally lost about how viruses work and the chemistry involved.
These three scenarios, hopefully, give an insight into how we learn. In order to create connections between different concepts we need to have a good enough understanding of them in the first place. If we have no idea of the concepts, then we cannot begin to make connections and cannot begin to solve the problem. At the other end, if we have mastered a concept, then we can use that to support our development of further skills and concepts.
These stages have often been distilled down into four groups:
Unconscious Incompetence – you don’t know about the concept and therefore have no idea how to use it; think about a newborn baby and feeding.
Conscious Incompetence – you know about the concept, but have not practised it enough to be skilled in it; now think about the baby who has watched a parent feeding and tries to imitate them. They understand the concept, but do not have the motor skills to do it consistently.
Conscious Competence – you have practised the concept, but it still takes a lot of thinking to complete it consistently; this is now the baby who can feed themselves most of the time, but still needs to be cleaned afterwards.
Unconscious Competence – you have completely mastered the skill and it could be thought of as an ‘automatic’ process; adults when they eat, although Mrs H would beg to differ!
Using this in an education environment, we can immediately see that, in order to develop our understanding, we need to be exposed to a concept first and then practise it until it is mastered. And here we have the conjunction between discovery learning and direct instruction. Both are essential for learning; it is how they work together that is the key.
Returning to our four groups, direct instruction is key to the first two: introduction to a new concept and ‘copying’ an expert. Without these parts, we do not even know the concept exists or how it works; only by exposure and imitation can we begin to internalise it. At this point, we have not yet ‘mastered’ the concept, just become skilled at using it. It is only when connections to other known skills are developed that mastery of the concept can happen. This is the time for discovery learning, using one’s own knowledge to solve problems, and this is a vital part of the learning process. If we just continue to practise the process, then we get better at using it, but we cannot appreciate where it sits in our own constructed understanding and begin to master the concept; and without mastery of the concept, we struggle to keep developing our own understanding. Just look at the interconnectedness of school-age mathematics to begin to understand how, without mastery of a concept, everything weakens and begins to fall.
There are many Schemes of Learning that are free to use and that claim to be ‘mastery’ schemes.
True mastery learning is almost impossible in most school environments as the groupings need to be as homogeneous as possible, which is not how we run education in this country. Often, there are additional demands of having to follow a ‘timeline’, meaning that teachers feel they have to ‘move on’ before true understanding has developed. If you want to understand more about mastery learning, I suggest you read Mark McCourt’s (@emathsUK) book ‘Teaching for Mastery’ (978-1912906185).
I have been a subscriber to CompleteMathematics and have to say that, once you get your head around the platform, it is the best way of quickly understanding where a class is with their learning. It is really worth booking one of their free demos. The platform is just amazing for planning effective mastery lessons, as it does not follow a prescribed timeline of objectives, but rather allows you to plan for the progress of the class at the pace they need (further details on maths age can be found here, or another post about mathematical maturation here).
Firstly, you need to have a look at this YouTube video, explaining how mathematics is a series of interconnected skills and why we need to master the prerequisite skills before we can develop the new ones.
The CompleteMathematics platform uses the concept of prerequisites for planning, so you can determine the skills needed in order to access a new skill, and also where it builds to.
Here is a video demonstration of the platform, or you can read an excellent post about it by Chris McGrane here.
The platform is £950+VAT per annum, which in my opinion is worth it, especially when schools are buying sets of textbooks every year, but I understand it is quite expensive (although not as expensive as other platforms!!).
White Rose Maths is a lot cheaper at £120 per annum, and they have developed a scheme around a similar principle of small steps. This means teachers can begin teaching the new skills to their class once they have assessed the prerequisite skills, and do not have to follow a ‘prescribed’ weekly plan. Prerequisite assessment can be done easily using the AfL ‘supertool’ that is Diagnostic Questions by Craig Barton, which has a White Rose Collection. This enables the teacher to have a set of prerequisite questions, as well as sets of AfL questions, available for each topic.
Further details of schemes of learning can be found on this page.
Exit tickets have become a buzzword in education recently, but are they useful, and are we using them correctly?
Are they useful?
Assessment for Learning is all about us, as teachers, knowing what students have learnt and understood, so we can move on to the next idea without fear that some students will not be able to grasp it. This immediate feedback loop is vital if we are to support students correctly, and therefore should happen constantly within the classroom setting, not just at the end of a lesson or end of a topic. Exit tickets are just one method of AfL; in particular, they show whether the class got the objective of the lesson, which will inform the class teacher to rethink the lesson and maybe return to the objective in another way to improve understanding. Harry Fletcher-Wood describes this brilliantly in his blog ‘Using exit tickets to assess and plan‘, where he explains how they should be used to check the learning of the desired objective. The one thing they should not be used for is as evidence that students have learnt the objective. At the end of a lesson, any student demonstrating that they can remember what went on during the last hour is doing just that: remembering the last hour. All the recall has both context and recency on its side.
So how should we be using them?
Jo Morgan’s presentation on AfL neatly sums up the process that the teacher should take when using exit tickets: they should only be used to inform the planning for the next lesson. They should not be used as evidence of learning for a later stage. Like all AfL, the feedback is for the teacher, not necessarily the students.
My personal favourite for simple exit tickets is Craig Barton’s Diagnostic Questions. These can be displayed and answered online, or printed out so students can answer them on paper. A quick look through the answers will inform the teacher very quickly. Also, the insights on the platform hold hundreds or even thousands of misconceptions attached to questions, which can really help with planning lessons. I find these easier to use than some of the exam question styles, as with those there are more potential misconceptions that I might miss.
I have added links to the White Rose Diagnostic Questions on my Schemes of Learning, and have also created Desmos activities and PDFs for colleagues who want hard copies.
During this ‘final’ lockdown, where we have all been retraining as ‘remote learning tutors’, we have all been experimenting with different methods of supporting students’ learning. I am sure we have all tried both synchronous and asynchronous lessons and found many benefits and negatives to both. One tool I tried was the Desmos Activity Platform (not to be confused with the Desmos graphing tool) and I have found it fits both styles of lesson really well. But the MOST AMAZING thing I found out was that it can be used for subjects other than mathematics.
I introduced the platform to the talented primary teacher I call Mrs H, and she has taken it and run both English and science lessons on it with great success with her classes. Here is a screenshot of one of her activities (kindly agreed by Mrs H when I brought her a cup of tea!!).
So I thought I would write a blog about how the platform works and why it is so useful for remote learning.
Firstly, it is a very easy platform to work with; everything about an activity is very obvious, and the drag-and-drop method of creating screens makes it simple to use. There is a large range of different response types available for students, including text, card sorts, sketchpads, the graphing tool and numeric answers, to go along with the standard multiple choice of most other platforms. This means that teachers can set a range of different answer formats for students, which will give a better understanding of the learning taking place (Assessment for Learning!). For most of the options, there is the ability (for those more advanced in basic programming) to set the correct answer for a question.
This is also enhanced by the use of the teacher screen, where you can click on any question by any student and check their response for accuracy; in particular, if a ‘correct’ answer is available, the teacher screen will ‘auto-mark’ the response.
During ‘live’ lessons this screen becomes even more useful, as there is a ‘pause’ button. This enables the teacher to stop the students from progressing onto any other screen, so that any whole-class misconceptions can be immediately reviewed (AfL at its best!). The teacher can also use the ‘teacher pacing’ button, which enables them to decide the pace at which the screens are moved forward.
There are also options to add pictures and videos to screens, so the ‘live’ lesson doesn’t need to be recorded; the teacher can just add a video into the appropriate screen for those students not able to attend.
Finally, all the student responses are recorded and can be accessed at any time in the future, so work done, or even what needs to be revised, can be reviewed very easily.
Many of my mathematics colleagues have already found this amazing platform and use it for ‘live’ graphing as well as the activities. For my Key Stage 3 colleagues who may use White Rose Maths, Charlotte Hawthorne (@mrshawthorne7) has created some incredibly skilful activities for the end-of-topic assessments on her website (sketchcpd.com) or on the Desmos site here.
The end of ‘lockdown’ and a return to some sort of normality should not put anyone off using Desmos in the classroom, or even for homework. If you haven’t tried to make an activity, then try doing one, even if it is for your next department meeting. I am sure you will find it as simple to use as I did, and that it will be a worthwhile addition to your expertise.
During these strange times, with so few aircraft in the skies, is it time for schools to review their use of another airborne analogy?
When the first National Curriculum came out in 1989, levels were used as guidelines for teachers to determine what a student had learned and to create some sort of progression of subject knowledge and skills. Combined with the introduction of School Performance Tables in 1992 – a high-stakes metric – school management quickly began to use these levels as a way of measuring progress and hence, as Mark McCourt states in this blog, ‘the broad and general pathway through a subject became an ill-informed and utterly ridiculous statement of learning that simply ignores the way in which learning happens.‘
Even the abolition of National Curriculum Levels in 2013 did little to alleviate this. Instead of using the Levels, schools just converted them to GCSE Grades, and with that came the use of flightpaths (see a typical example below).
In turn, this meant every assessment had to be related to GCSE grades, and this is a highly complex problem to solve. How can we use an in-class assessment, say an end-of-term test with a very narrow set of assessment criteria, to create a GCSE grade? Students will have less to revise and will demonstrate less knowledge. Also – and this is something many teachers do not know enough about – grade boundaries change every year to account for variations in the difficulty of the paper.
This is not to say that we do not want to know if a student is on track, but attempting to convert test scores to GCSE grades is trying to do the impossible, and in doing so we create meaningless and misleading data. A great analogy can be read on Matthew Benyohai’s blog here.
If we accept the need for tracking students’ attainment, then what do we want to achieve with a tracking system?
Tom Sherrington (@Teacherhead) suggests some starting points in this blog. In this, in Matthew’s article, and also in Mark Enser’s fantastic book ‘Teach Like Nobody’s Watching’ (978-1785833991), there is the common thread of some form of ranking of students based on school or national data; this is what happens at KS2, GCSE and A-Level anyway.
Why is ranking students, in some form, better than using GCSE grades?
Firstly, many of us will have had the conversation with parents of a Year 7 or Year 8 student that goes something like this: ‘Caleb is currently working at Grade 2b, making progress from a 2c last term, and if he makes expected progress this should mean he is on track for a GCSE Grade 5 in Year 11‘. After spending ten minutes explaining the nonsensical difference between a 2c and a 2b, the parent will often look glassy-eyed at you as though you are talking a foreign language, and just hear the ‘GCSE Grade 5’. They will have switched off to your wonderful comments about the quality of work, or what topics he had succeeded in last term. But what if Caleb was progressing towards a GCSE Grade 3? Just the change in predicted grade changes the whole conversation. Suddenly, the parent can get quite anxious about the fact that their child is being predicted to FAIL!
If we want to create a growth mindset, we need a system that does not conflate termly in-class assessments with an external examination, and that can measure relative progress. Standardised assessments can offer this opportunity, although even these need to be used with caution. Rebecca Allen, in her blog ‘Writing the Rules on the Grading Game’ (Parts i–iii), mentions many studies that show how students can respond to the ranking concept in a negative way; something Carol Dweck calls ‘learned helplessness’. As Allen states in part iii,
What matters is how the grading information:
Changes their beliefs about their attainment
Changes their beliefs about their ability to learn and get better
Changes their desire to keep playing the competition of trying to be the best, or maintain their position, or avoid the bottom rung
Before we move onto a potential solution to this, let’s remember what standardised assessments are.
Standardised assessments are tests where the raw score is converted to a standardised score based on a nationally representative sample. 100 is the ‘average’ score.
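To make the conversion concrete, here is a minimal sketch assuming a normal model with a mean of 100 and a standard deviation of 15, which is typical for these tests but not universal; real providers use norm tables built from their national sample, and the function name and raw-score figures below are purely illustrative.

```python
# Minimal sketch: convert a raw test score to a standardised score.
# Assumes a normal model rescaled to mean 100, SD 15 (a common but not
# universal convention). Real providers use empirical norm tables.
def standardise(raw: float, sample_mean: float, sample_sd: float) -> int:
    z = (raw - sample_mean) / sample_sd   # position within the national sample
    return round(100 + 15 * z)            # rescale so the average maps to 100

# A raw score equal to the sample mean maps to exactly 100.
print(standardise(32, sample_mean=32, sample_sd=8))   # 100
# One standard deviation above the mean maps to 115.
print(standardise(40, sample_mean=32, sample_sd=8))   # 115
```

The useful property is that 100 always means ‘average for the national sample’, whatever the difficulty of that year’s paper.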
The main companies that offer these are Hodder Education (RS Assessment), NFER and GL Assessment, although they are only available for Mathematics, Science and Reading (English). With these assessments, students can be tracked against known criteria year-on-year, while progress can be measured as the student’s relative position year-on-year. Alternatively, students could be put into seven groups (see the diagram below for possible groups) and student progress tracked by grouping year-on-year. One word of caution should be mentioned here: the scores are obviously a snapshot of attainment and have a margin of error. Nearly always, the 90% confidence interval is given; this means we can be 90% certain that the true attainment score lies within this range.
By labelling the groups, say 1–7 or A–G, students have some idea of their ‘rank’ in the school and nationally, but cannot easily convert this to a GCSE Grade. The focus for students becomes making progress rather than the end goal; so moving up a group is good, as is improving their score (beyond the 90% confidence range). For teachers and school leaders, data analysis becomes easier as scores (relative rankings) can be compared year-on-year. An example of the measures used for progress can be seen below.
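As an illustration of the grouping idea, the sketch below bands a standardised score into seven labelled groups. The band boundaries and labels here are hypothetical examples for illustration only, not those of any particular test provider.

```python
# Illustrative sketch: band a standardised score (average = 100) into
# seven labelled groups, A (highest) to G (lowest). These boundaries are
# made-up examples, not a provider's published bands.
def score_to_group(score: float) -> str:
    bands = [
        (70, "G"),             # well below average
        (80, "F"),
        (90, "E"),             # low average
        (110, "D"),            # average
        (120, "C"),            # high average
        (130, "B"),
        (float("inf"), "A"),   # well above average
    ]
    for upper, label in bands:
        if score < upper:
            return label
    return "A"

print(score_to_group(104))  # "D" - the 'average' band
print(score_to_group(87))   # "E" - 'low average'
```

A spreadsheet lookup table achieves exactly the same thing; the point is that the mapping is to a coarse rank band, not to a GCSE grade.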
Just by using these assessments effectively, schools can track attainment and progress effectively with a fairly simple spreadsheet.
This data demonstrates how Emma is making expected progress over the year and is in the ‘average’ group, while Sasha has also made expected progress, but has moved up to the ‘low average’ group.
Why is measuring ‘progress’ important?
As mentioned previously, one of the major issues with any ranking system is the response from the student. Using the ‘rate’ of progress gives another level of feedback to a student about the quality of their learning. Maintaining the same grouping tells a student they are making the same progress as everyone else, while moving up or down demonstrates that they are learning better or worse than their peers; in the case of underperformance, something should be done about it. This is where a favourite spreadsheet tool of mine comes in. A pivot table can quickly identify ‘key’ students within the cohort, like the one below.
The great advantage of using a pivot table is that, by clicking on any relevant cell, the students’ names for that group will be listed. For example, if there is a group of students not making expected progress, they can easily be identified and appropriate intervention provided for them.
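For colleagues who prefer to work outside a spreadsheet, the same pivot-table idea can be sketched in Python with pandas. The student names, group numbers and column names below are invented for illustration; only the technique (pivot on group and direction of movement, then filter for the students behind a cell) mirrors the spreadsheet workflow described above.

```python
import pandas as pd

# Hypothetical cohort data: band number at the start and end of the year
# (1 = top band, 7 = bottom band). All names and figures are made up.
df = pd.DataFrame({
    "Student": ["Emma", "Sasha", "Caleb", "Aisha", "Leo", "Mia"],
    "AutumnGroup": [4, 6, 5, 3, 4, 2],
    "SummerGroup": [4, 5, 6, 3, 4, 2],
})

# Direction of movement: a lower band number means moving up the ranking.
change = df["SummerGroup"] - df["AutumnGroup"]
df["Movement"] = change.map(
    lambda c: "Up" if c < 0 else ("Down" if c > 0 else "Expected")
)

# The pivot: count of students by starting band and direction of movement.
pivot = pd.pivot_table(df, index="AutumnGroup", columns="Movement",
                       values="Student", aggfunc="count", fill_value=0)
print(pivot)

# The spreadsheet 'click on a cell' step: list the students behind a cell,
# e.g. everyone who has dropped a band and may need intervention.
falling = df.loc[df["Movement"] == "Down", "Student"].tolist()
print(falling)
```

The filter on `Movement` is the programmatic equivalent of clicking a pivot cell: it surfaces the names, not just the count.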
As mentioned previously, this can be done for Mathematics, Science and English. So, in order for this to be effective across the curriculum, ‘every department needs an external reference mechanism to gauge standards: establish it and use it to evaluate standards at KS3.’ (from Teacherhead). Even comparative judgement of work (for example, with No More Marking) can be used to rank students and place them into the appropriate groups.
A final word to my primary school colleagues: although my discussion uses Key Stage 3 and 4, it is also very relevant for Key Stage 1 and 2 with Mathematics and Reading; all the companies mentioned above have tests available for these subjects, and the same processes of tracking attainment and progress can be utilised. Also, by using a comparative judgement system like No More Marking, the same process (ranking students into groups) can be followed.
I have now created attainment trackers for standardised assessments here. They are free to use, and feel free to contact me if there are any errors or any changes you think I should make.
Damian Hinds has pledged to ‘strip away’ teacher workload. Well, we can all ‘strip away’ things, from paint to clothing, but surely this sounds like removing thin layers rather than making wholesale changes.
So what things is he going to remove?
Well, for a start, he could insist that school managers remove the need for every book to be marked every day. Assessment of learning is vital to any good teacher’s role, but that does not mean explicitly marking every book and writing a comment.
Reading Bernard Trafford’s column in the TES this week resonated with a lot of teachers. One comment was particularly enlightening, where a KS1 teacher spoke of having to write a progress comment in every Y1 pupil’s literacy book, using language that the pupils may understand verbally, but have no hope of reading, let alone being able to act upon.
Earlier in the week, Gavin Goulds wrote a piece about how his department had cut the amount of marking to half a page of A4 per lesson. Well, this is not marking; it is evaluating learning through assessment and evidencing where the learning needs to go next. This is the problem: many school leaders have taken OfSTED’s words about ensuring progress through effective and regular assessment to mean ‘Regular Marking’, as this not only ticks the box, it also gives parents and carers the impression that their child’s work is being looked at.
Again, the pedagogical methods of effective assessment and feedback have been hijacked to demonstrate the wrong things to the wrong people.
Let’s get back to what our job is: supporting the intellectual development of young people through an effective feedback loop that allows everyone to flourish at their own level and make progress beyond their current performance. If this means immediate verbal feedback that can be actioned within the lesson, peer and self-assessment against set criteria, and reviews of assessments and exams, then so be it.
Let’s give learning back to the child; then we could have more time to plan more effective lessons, acting on our assessment rather than endlessly trawling through book after book writing ‘3 stars and a wish’ or EBI statements that will only be read by senior leaders, OfSTED and parents, and will have NO impact on learning.