Wednesday, 5 June 2013

Teacher Proof: Why Educational Research doesn't always mean what it claims, and what you can do.

So, I have a book out.

It's been a long time coming. Since I started teaching, I knew there was something suspicious about the gap between what I was told worked in classrooms and what actually happened. It started in teacher training, as well-meaning lecturers and reading lists offered apparently cast-iron guarantees that this method of educating children, or that way of directing behaviour, would be effective. It continued on DfE-sponsored training programmes where I was taught how to use NLP, Brain Gym, Learning Styles and soft persuasion techniques akin to hypnosis.

Then I began teaching, guided by mentors who assured me that other contemporary orthodoxies were the way to win hearts and minds. It took me years to realise that the thing I could smell was a bunch of rats wearing lab coats. And why should any new teacher question what they are told? Establishment orthodoxies carried the authority of scripture. And often they were justified with a common phrase- ‘the research shows this.’

I remember reading Ben Goldacre’s Bad Science, and being amused and horrified by the cavalier ways in which science could be hijacked by hustlers. His harrowing of Brain Gym led me to wonder what else, like Descartes, I needed to question. What I discovered led me to write Teacher Proof.

First of all I discovered that a lot of what was considered to be absolute dogma by many teachers was built on quicksand. Learning Styles, for example, were almost universally accepted by every teacher who trained me. It was a Damascan epiphany to find out that there was hardly a scrap of evidence to substantiate them, that the serious academic community had washed its hands of them long ago. But the theory lingered on, a zombie, staggering from classroom to classroom, mauling lesson plans.

Once I had peeled one strip of paper from the wall, I could do nothing else but keep pulling, and see how much came off. Much, much more, it turned out. First of all, I entered the world of pseudo-education, where optimistic internet sites boasted of Olympian gains to be made by the adoption of this pill (often Omega 3), that smell (sometimes Lavender, sometimes not) or even this sound (the Mozart Effect, for instance). These, at least, seemed to be obvious pigs in pokes. Other companies sold hats- literally, thinking hats- of various colours, or exercises that promised to boost brain power. But they asked customers to gamble a lot more than a stamp, as Charles Atlas innocently proposed.

Unfortunately, it was often just as bad when I progressed to the realms of alleged propriety; I found that a lot of what was practically contemporary catechism was merely cant. Group work, three-part lessons, thinking skills, multiple intelligences, hierarchies of thinking like Bloom’s, all- at least to my poor eyes- appeared to rely on opinion and rhetoric as much as data. Delving deeper, I found that this was an affliction that affected the social sciences as badly as the natural sciences- perhaps worse, as the natural sciences are at least readily amenable to verification. But any social science- from economics to sociology- is subject to inherent methodological restrictions that make any claims to predictive or explanatory powers intrinsically difficult.

Which isn’t to say that social science isn’t a powerful and urgent device with which to accrue an understanding of the human condition. But merely to require that its claims be interpreted appropriately. It is a very different proposition to claim, for example, that water boils at 100 degrees Celsius at sea level, than it is to say that children learn best in groups. The first can at least be disputed, or not, immediately by testing. The latter requires a plethora of causal factors to be adjusted and accounted for. And to confound matters further, humans are notoriously hard to fit on a microscope slide. Nor are we always the most reliable of subjects.

Sometimes this was the fault of those writing the research; sometimes the research was, as Richard Feynman describes, Cargo Cult Science; sometimes the writers appeared to have no idea what the scientific method was, believing it to be some kind of fancy dress with which one clothed a piece of journalism; sometimes allegedly sober pieces of research were simply misinterpreted by a willing media; sometimes it was the teachers themselves who had misappropriated the findings; sometimes it was the policy makers, who were hungry for a magic bullet and had already made their minds up about what they were buying.

Whatever the reasons, it was clear: the educational research we were asked to assimilate in schools was often more like magic beans than magic bullets. That’s unhealthy. There are armies of earnest, dedicated professionals working in educational research who are horrified by some of the fantastical or flimsy claims made by the hustlers and their PRs. If educators want to get past this unhealthy system of intellectual bondage, we need to become more informed about what the research actually says, and what good research actually means; about how hard it is to say anything for certain in education, and when claims can be ignored, and when they should be listened to.

So I wrote Teacher Proof. It’s aimed primarily at people who work in schools, but it’s also for anyone involved in education, research and policy. I am, unashamedly, a teacher. I admit I have entered a world- that of educational research- in which I am only a guest. I am aware that in my travels I may be more of a tourist than a native. But I have tried to write as honestly and as plainly as I can about matters that affect me deeply- the education of children. If I have made any errors- and I’m sure that I have- I welcome correction, and discussion. I can’t shake the feeling that teachers would do well to make research more of their business: get involved, participate in studies, and perhaps even conduct some of their own, with guidance. I’d also like to think that researchers would be well advised to ensure their theories are tested objectively, with an eye to disproving them, in classrooms with meaningful sample sizes. There is a great deal of good that the two communities can do together.

Perhaps then teachers can look forward to hearing the latest research, and run towards it rather than away from it; and researchers can see classrooms as more than awkward inconveniences between data sampling and publication. There’s an awful lot of good research out there, but it gets drowned out by the bad.

Good ideas, like decent whisky, need time to settle and mature. I suspect that we need to develop more of a critical faculty to sift the ideal from the merely idealistic. Maybe then we’ll be immune to novelty and fashion in pedagogy. Or, as I call it, Teacher Proof.
Buy Teacher Proof HERE