Everybody Needs A Hero
Educators, as much as any other group, are in need of a hero: someone who has the answers about which instructional strategies, materials, and approaches work and which do not. I recently did an in-depth study of someone who has achieved hero status in the opinion of many thousands of teachers on several continents. John Hattie directs the Melbourne Education Research Institute at the University of Melbourne, Australia. He also directs the Science of Learning Research Centre, which works with over 7,000 schools worldwide. He holds a PhD in statistics and measurement.
In 2008, Dr. Hattie’s second book, Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement, was published. The title pretty much tells the story. Right? Well, not so much. It requires a little explanation. A meta-analysis is a statistical procedure conducted on a collection of individual research studies on a common topic, for example, the impact of computer-assisted instruction on student achievement. The results of the 6 or 8 or 10 (or more) individual studies are used to develop a combined “score,” or effect size. The intent is to give greater power to the results, to make more people pay attention to the findings, and to give greater impact to the work undertaken by the researchers. John Hattie extended the concept dramatically by synthesizing the results of 815 meta-analyses, involving literally millions of students. For all intents and purposes, he invented the meta-meta-analysis. Still with me? (I am a research nerd, so I’m not sure.) What Hattie did next was to rank all of the findings based on whether they had a small, medium, or large effect on student learning. Some examples: the impact of retaining a student in the same grade level was Low (-0.13), doing more harm than good; use of cooperative learning strategies was Medium (0.59), likely to lead to some academic improvement; and the impact of giving and getting feedback was High (0.75), clearly a recommended practice. Adding to Hattie’s reputation, he eventually assessed and ranked over 1,200 different meta-analyses, involving 250 million students around the world. 250 million!
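To make the arithmetic concrete, here is a minimal Python sketch of how a single meta-analysis can pool per-study effect sizes into one combined number, using standard inverse-variance (fixed-effect) weighting. The study values below are invented for illustration; this is a textbook pooling method, not Hattie’s actual procedure or data.

```python
# Minimal sketch of inverse-variance (fixed-effect) meta-analysis pooling.
# The per-study numbers are made up for illustration only.

def pooled_effect(effects, variances):
    """Combine per-study effect sizes into one weighted average.

    Each study is weighted by the inverse of its variance, so larger,
    more precise studies count for more than small, noisy ones.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Hypothetical studies of one strategy: effect size d and its variance
effects = [0.45, 0.62, 0.30, 0.55]
variances = [0.04, 0.10, 0.02, 0.08]

d, var = pooled_effect(effects, variances)
print(f"pooled effect d = {d:.2f}, SE = {var ** 0.5:.2f}")
```

Note the design choice hiding in plain sight: the pooled number is only as good as the studies fed into it, which is exactly where the critics below take aim.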
You can probably imagine the impact on teachers and administrators (and Board of Education members) of being presented a list of “research-validated instructional strategies that improve student achievement” along with a list of “what doesn’t work so well.” Wow! Exactly what we need to improve our students’ achievement and satisfy any possible concerns raised by anyone! I have to admit that after I finished the book (which included my underlining key ideas, putting brackets around important future quotes, and writing a summary of the book for my own use), I had to have more information. So, I read some of Hattie’s other books: Visible Learning for Teachers: Maximizing Impact on Learning (2011); Visible Learning and the Science of How We Learn (2013); and Ten Mindframes for Visible Learning: Teaching for Success (2017). Now I really knew it all!
Unfortunately, I couldn’t leave well enough alone! I took note of the fact that Hattie was rather dismissive of other educators who had come before him, especially those who had established a “cult-like” following based on strategies that he found to be lacking in research support. That is, his research support. “Hmmm. Maybe I’d better take a closer look,” I said to myself. It didn’t take long to discover quite a lot of material written by other respected educators and researchers who cast doubt on Hattie’s work.
1. Scott Eacott, a contemporary of Hattie in Australia, wrote, “…contemporary thought and analysis in Australian school leadership has submitted to the cult of the guru. Specifically, I contend that dialogue (much less debate) has settled on the work of John Hattie’s meta-meta-analysis giving rise to the Cult of Hattie. The uncritical acceptance and proliferation of this cult is a tragedy for Australian school leadership.”
2. In Invisible Learnings? A Commentary on John Hattie’s Book Visible Learning, published in the New Zealand Journal of Educational Studies (2009), John G. O’Neill writes, “Hattie says that he is not concerned with the quality of the research in the 800 studies but, of course, quality is everything. Any meta-analysis that does not exclude poor or inadequate studies is misleading, and potentially damaging if it leads to ill-advised policy developments. He also needs to be sure that restricting his database to meta-analyses did not lead to the omission of significant studies of the variables he is interested in.” O’Neill is adamant that Hattie has ignored an essential tenet of research:
“Any meta-analysis that does not exclude poor or inadequate studies is misleading, and potentially damaging if it leads to ill-advised policy developments.”
3. Shaun Killian, writing for the Australian Society for Evidence Based Teaching (2015), recognizes the worldwide impact of Hattie’s work: “When John Hattie first published Visible Learning in 2009, his work quickly became known as the Holy Grail of ‘all things education’.” But he knows research flaws when he sees them. “Hattie found that homework had a relatively small-moderate impact on student results with an effect size of 0.29. However, if you look at the underlying research, it was clear that homework has a significant effect in the senior years, and an even lower impact than the reported one in the early years. In this case, averaging the results obscures this reality.”
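Killian’s point about averaging is easy to demonstrate. The sketch below uses invented homework effect sizes by grade band (these are my illustrative numbers, not Killian’s or Hattie’s actual figures) to show how a single averaged effect can hide a large split between subgroups.

```python
# Illustration of the averaging problem: one pooled number can mask very
# different effects in different subgroups. All values are invented.

# Hypothetical homework effect sizes by grade band
homework_effects = {
    "early years": 0.05,   # near zero
    "middle years": 0.25,
    "senior years": 0.60,  # substantial
}

# A simple unweighted average collapses the three bands into one
# "moderate" figure that describes none of them well.
average = sum(homework_effects.values()) / len(homework_effects)
print(f"averaged effect: {average:.2f}")

for band, d in homework_effects.items():
    print(f"{band}: {d:.2f}")
```

A teacher reading only the averaged figure would treat homework as uniformly “moderate,” when the hypothetical subgroup numbers say it matters a great deal for seniors and hardly at all for the youngest students.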
4. Finally, I cannot omit the comments of Robert Slavin, one of America’s most respected educators and a key figure in the worldwide acceptance and use of Cooperative Learning. Until his death a few months ago, he was a distinguished professor and director of the Center for Research and Reform in Education at Johns Hopkins University. After a deep dive into Hattie’s research, Dr. Slavin concluded, “Hattie is profoundly wrong. He is merely shoveling meta-analyses containing massive bias into meta-meta-analyses that reflect the same biases” (2018). For the benefit of those who care, here is another of Slavin’s conclusions: “There is now overwhelming evidence that effect sizes are significantly inflated in studies with small sample sizes, brief durations, use measures made by researchers or developers, are published (vs. unpublished), or use quasi-experiments (vs. randomized experiments) (Cheung & Slavin, 2016). Many meta-analyses even include pre-post studies, or studies that do not have pretests, or have pretest differences but fail to control for them.”
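Slavin’s claim about inflation can be illustrated with a small simulation. Assuming a modest true effect of d = 0.20 and a crude “publication filter” that only writes up clearly positive results (both assumptions are mine, chosen for illustration, not drawn from Slavin’s analysis), the published subset averages a much larger effect than the full set of studies.

```python
# Toy simulation of small-study / publication-bias inflation.
# Assumed: true effect d = 0.20, small samples, and a crude filter
# that "publishes" only clearly positive results.

import random
import statistics

random.seed(0)
true_d = 0.20
n_per_group = 20  # a small study

all_estimates = []
published = []

for _ in range(2000):
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    treated = [random.gauss(true_d, 1.0) for _ in range(n_per_group)]
    sd = statistics.pstdev(control + treated)  # rough pooled SD
    d = (statistics.mean(treated) - statistics.mean(control)) / sd
    all_estimates.append(d)
    if d > 0.4:  # only clearly positive studies get written up
        published.append(d)

print(f"mean of all studies:      {statistics.mean(all_estimates):.2f}")
print(f"mean of 'published' ones: {statistics.mean(published):.2f}")
```

With small samples, individual estimates scatter widely around the true 0.20, so the filter skims off the lucky outliers; a meta-analysis built only on the published studies inherits that inflation, which is exactly the bias Slavin describes being shoveled upward into a meta-meta-analysis.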
Rather than belabor the point, I will simply say that there is no shortage of additional critiques of Hattie and his research, easily accessible on the web.
As is often the case, Hattie’s fans stick with him and they can provide anecdotal evidence of the success they have achieved using his recommended strategies. This makes perfectly good sense, as many of his high effect recommendations are fully supported in the research literature. The problem is that many strategies he rejects are, in fact, also very effective and important, and many that he touts as having high effects are less important than many other strategies.
So I will keep looking! Your comments are appreciated and welcomed.
School leadership and the cult of the guru: the neo-Taylorism of Hattie. Scott Eacott, 2017. https://doi.org/10.1080/13632434.2017.1327428
Invisible Learnings? A Commentary on John Hattie’s Book Visible Learning. John G. O’Neill, New Zealand Journal of Educational Studies, January 2009.
An Objective Critique of Hattie’s Visible Learning Research. Shaun Killian, Australian Society for Evidence Based Teaching, 2015. https://www.evidencebasedteaching.org.au/wp-content/uploads/An-Objective-Critique-of-Hatties-Visible-Learning-Research.pdf
John Hattie Is Wrong. Robert Slavin, 2018. https://robertslavinsblog.wordpress.com/2018/06/21/john-hattie-is-wrong/