I have been reading Doug Lemov’s “Teach Like a Champion 2.0” over the last couple of weeks. The mainly US settings, combined with what Daisy Christodoulou wrote about in “Making Good Progress?”, have got me thinking about problems caused by standardised testing, though I remain a firm supporter of it.
Schooled exclusively in England, I took standardised testing for granted until I began work in an international school. It was there I met American teachers who graded their own students and were allowed to decide the criteria on which these grades were awarded. I was sceptical to begin with, because I thought that this would lead to inconsistency and varying standards. This may be true, but I now see that it also had its benefits.
Allowing teachers to develop and apply their own grading criteria means they are able to break down complex, lengthy and involved tasks into smaller steps and give pupils credit for achieving each. For example, while a teacher may want students in a class to eventually write analytical essays, they might feel expecting a Y7 to do this in their first half term is unreasonable and that grades derived from attempts to do so would be meaningless. Instead, the teacher might sensibly teach children to write one really good paragraph and then grade this.
The American teachers I know also typically grade children on ‘softer’ elements that we don’t usually assess, at least formally, in England. For example, participation and attitude might make up part of the grade that appears on a child’s end-of-term report card. Of course, effort and attitude often appear on a child’s report in England too, but these are usually separate from the academic grade, not components of it.
One significant advantage of this approach to assessment is that it allows teachers to teach, and then hold children accountable for, scholarly attitudes and behaviours likely to lead to greater academic success in the future. For example, in Teach Like a Champion, Doug Lemov describes how effective teachers create the conditions for good discussion in their classrooms by teaching listening, and how students should build their own responses on the ideas of others. In one example in the book, a teacher incentivises this behaviour by reminding students that he gives credit for contributions made in this way. In such systems, assessment can be truly formative in that a grade is made up of multiple components that should, in the end, lead to better academic performance.
There is nothing to stop similar systems being applied in England but few school-wide assessment policies allow for it. GCSEs, which are standardised tests, mean many schools have developed systems that attempt to mirror England’s national standardisation on a micro-level. Assessments often take the form of a standardised test, which uses GCSE style questions and is marked using a GCSE mark-scheme. The opportunity presented by the demise of National Curriculum Levels has not been taken advantage of, with most schools choosing to develop models based on eventual examination performance.
This is unfortunate for two reasons.
Firstly, such systems, by attempting to measure the end product before children have had enough time to develop it, emphasise the final result over the process of reaching it. A 16 mark essay question in history, for example, is designed to test a range of second order concepts that might take years to develop. Children in Year 7 are likely to perform very poorly on such a question. Schools that expect children to do this or similar tasks in exams are unlikely to devote sufficient time to breaking the big task into smaller steps, because children aren’t graded on them. This is also likely to affect the development of the discursive techniques I mentioned earlier; children might well ultimately develop more nuanced and sophisticated ideas if they listen better to their peers but, as this is not a component of the standardised test, teachers may well come to the view that they don’t have time to waste teaching it. The same is true of basic knowledge tests – simple retrieval is not generally tested in GCSEs, which could lead schools to leave such tasks off assessments. This would be a mistake though, because knowledge is the water in which everything else swims, and neglecting its development is perhaps the single most significant reason children underperform in standardised examinations.
Secondly, systems that align themselves completely around standardised tests do students a disservice because, regardless of how good a rubric is, it is still a rubric. If this rubric is applied right from the start of their secondary schooling, children are in danger of coming to believe that the rubric is the subject. To them, being good at history might come to mean “evaluating evidence for two interpretations and then making a judgement”. This, while perhaps a workable way in which to assess how good a child is at learning history at fifteen or sixteen, is not a definition of history, and systems that create this implicit assumption misrepresent the disciplines being taught. It also leads to mechanical, dreary writing, which I wrote about here.
None of what I have written here means I don’t believe in standardised testing. I do. It isn’t perfect, and I would like improved versions of it, but I can’t think of a better way in which to consistently assess the performance of large cohorts of children. But the tail shouldn’t wag the dog, and nor does it need to. Sensible assessment systems shouldn’t be based on GCSE criteria until at least Year 11, and should test the component skills of eventual success without expecting children to demonstrate the final product in each test. None of this, of course, is new thinking. Daisy Christodoulou said it first and better, and Christine Counsell’s “messy markbook”, from way before this, shows that practical solutions to this issue have been around for years. I don’t know whether that’s more encouraging or depressing.