Take two groups of teachers…
Group One: teachers whose pupils have made high progress for three years in a row
Group Two: teachers whose pupils have made low progress for three years in a row
Imagine you’re shown video clips of the individuals from both groups teaching, but you’re not told who belongs to which group. Do you think you’d be able to accurately categorise them?
If, like me, you answered yes, it’s odds-on that you’d be wrong because – put simply – we’re just not as good at making judgements as we think. This is something that the American academic Michael Strong investigated in 2011. He and his team found that observers achieved only about a 50% success rate, which is ‘indistinguishable from chance using any statistical test.’ He concluded:
‘In every case, judges achieved relatively high levels of agreement but were absolutely inaccurate, leading us to question whether educators can identify effective teachers when they see them.’
Making accurate judgements about the quality of teaching is particularly hard because we can only make vague inferences about what students have learned from their performance in the moment. For example, a student’s ability to provide correct answers to a series of questions in class offers no guarantee that they’ll be able to retain and transfer the same information at a later date. But, in the moment, it does look good, and that’s really quite compelling. One more example: students can complete lots and lots of work and yet be totally unable to explain or remember any of it (see Robert Coe’s work on false proxies for learning). All of which is a long-winded way of emphasising that the conversations we have around teaching and learning – stemming from learning walks, lesson visits and so on – are far more important than the summative judgements we make.
Thanks for reading –