A response to critics #2: data and dichotomies
One month ago, I published a historical critique of progressive education entitled Progressively Worse. It has two essential arguments. Firstly, from the 1960s onwards progressive education became a powerful orthodoxy within the state education system. Secondly, it has failed to improve the quality of education in our schools.
This is the second in a series of blogs responding to the criticism Progressively Worse has received so far. To read the first blog, click here.
‘The road to hell in education is paved with false dichotomies.’ So goes Sir Michael Barber’s meme, which has spread through the education debate.
Many have accused Progressively Worse of establishing a false dichotomy between progressive and traditional approaches. A false dichotomy is an either/or choice where some middle ground is actually possible. At no point in Progressively Worse do I offer an either/or choice between progressive and traditional education. Instead, I argue that in recent decades British schools have seen far too much of the former, and not enough of the latter. As I write in the introduction:
Such dichotomies (skills/knowledge, child-centred/teacher-led) are perhaps better thought of as sitting at opposite ends of a spectrum. If we are to decide what constitutes a sensible position on each spectrum, we need to appreciate better how far British schools currently gravitate towards the progressive ends. Whilst a wholesale move towards traditionalist modes of education would be harmful, a corrective shift in that direction is desperately needed.
Writing about my book, Harry Webb offers the helpful analogy of the economic poles of Keynesianism and monetarism. Few economists are absolutist in their support for one or the other, but the terms remain necessary for describing either end of a continuum. Amongst the increasing number of teachers calling for a greater focus on knowledge and a lesser focus on skills, for example, not one proposes an absolute dichotomy between the two. This is an argument projected upon them by their critics.
Gerrard accuses Progressively Worse of oversimplifying complex debates by using terms such as ‘an authoritative teacher vs independent learning’. Gerrard is correct, to the extent that categorisation invariably simplifies. This can be seen in all walks of life: music genres; architectural styles; political labels. However, though imprecise, categories are vital in allowing discussion to take place. Those who protest over their skinny lattes that they are far too sophisticated to use such un-nuanced language (as characterised by Harry Webb) are more often than not just trying to shut down debate.
The ‘data bore’ is a common creature in contemporary debates, characterised by a lofty disdain for anything so naïve as ‘having an opinion’. Gerrard concludes her critique of my book by writing, ‘This isn’t the kind of evidence-based approach to policy that government needs to use. Let the data speak for themselves.’
I have written about the problems with such a stance in my post ‘When evidence doesn’t work’. Firstly, some of the key debates in education are based on value judgements, not efficacy. What ‘evidence’ tells us that Shakespeare should be studied at GCSE, for example?
Secondly, evidence can tell us what improves academic results. However, one thing that traditionalists and progressives usually agree on is that schools are responsible for more than just academic results. This is why Robbie Coleman, the Research and Communications Manager at the Education Endowment Foundation (EEF), has written ‘Evidence is good at helping us work out how to get where we want to go… But evidence can’t tell us where we want to go in the first place. If we forget this, we end up following whichever road we find first, and only attributing value to the things we can easily measure.’
Thirdly, data are simply not able to ‘speak for themselves’. Their voice is always mediated by human judgement. Just look at the EEF’s Teachers Toolkit. It is a fantastic resource and, much like Hattie’s work, an admirable synthesis of existing educational research. However, one cannot entirely outsource one’s judgement-making faculties to the EEF’s pound signs and monthly measures. The researcher, just like the historian, suffers from selection bias. Where are the toolkit results for knowledge-based curriculums, detentions, and end-of-year exams? These are all things that I value, but all things the EEF toolkit has not (yet) isolated and measured.
In addition, Guy Woolnough attacks my use of Hattie’s data in Visible Learning. He claims that I use his synthesis of meta-analyses ‘to support his argument in favour of “traditional” teaching methods’. I do not. I simply use his evidence to discredit the dominance of constructivism within teacher training. As I write in my book:
Hattie is duly critical of the ‘constructivist theory of teaching’, which is the psychological school that underpins child-centred practice. This is not to say he promotes an alternative of pure didacticism.
Woolnough additionally quotes Hattie’s claim that teachers need to employ ‘less talk’, and his observation that ‘Teachers love to talk, but unfortunately most of their talk, even when it calls for a student response, fosters lower-order learning.’ Here, I would question the conclusion Hattie draws from his own findings. Undoubtedly, many teachers are guilty of poor-quality teacher talk. However, the remarkable success of programmes such as direct instruction suggests the solution need not be less teacher talk, but better teacher talk.
Some education research enthusiasts seem to believe that data can overcome opinion. They cannot. They can inform opinion. They can promote or discredit opinion. But data will never be opinion.