Opportunity Knocks – Again, and Again, and Again

Last week educational globetrotter and purveyor of all things ‘evidence-based’, Visible Learning, tweeted ‘What is John Hattie working on at the moment?’ Before they informed us it was Visible Learning for Parents, my first thought was $kerching$. While some may welcome the umpteenth variation of Visible something-or-other, I am skeptical of this addition to the arsenal of products and wanted to share a few thoughts.

The greasy pig that is the secure relationship between intervention and outcome has been a focus of Hattie’s and Visible Learning’s output for nearly ten years. It seems that across the globe, individuals, schools and professional organizations have hailed Hattie’s meta-analysis as an important step towards making educational decision-making more evidence-based. It also satisfies those who love a list and a rank order (updated in 2016). It is understandable that, in the quest for certainty in an inherently complex and uncertain place like a school, this work would be welcome.

Hattie is no doubt aware that much has been written about how family involvement in a child’s schooling can affect achievement. That said, a myriad of complexities and nuances prevent the evidence from being reliable enough to pin down definitive interventions that lead to improvements in achievement. He should ‘know the impact’, I hear you say! Well, is this Hattie’s angle? To fill this lacuna?

The influence of Hattie’s meta-analysis and encompassing rhetoric can be seen in many places, from bookshelves to unit plans, classrooms to conferences, national toolkits to policy. Products and strap-lines abound to reinforce the brand. We see him commentate on television about school improvement trials which give him access to the hearts and minds of families and communities. He can be seen spanning organizations with significant professional clout, managing up to policy and down to the standards that drive teacher practice. Writing for Pearson, he has reminded us of the Politics of Distraction, those things which ‘don’t work’, and suggested where our efforts and thinking should be channeled. He has also proclaimed in evangelical form that he has a dream for educators to be, wait for it, ‘change agents’. So that’s d = 1.57, right, the ‘collective teacher efficacy’ super factor? He and his work also benefit from an extended partnership between ACEL, Corwin and Visible Learning.

What interests me is the work that has been done to shine a spotlight on the shortcomings of using meta-analysis and effect sizes to validate all manner of commercial and educational activity and supposed policy legitimacy. For example, back in 2011 Snook et al. wrote a critique of Visible Learning. Of particular note were their concluding concerns. After picking apart the methodological inconsistencies, the authors noted that “politicians may use his work to justify policies which he (Hattie) does not endorse and his research does not sanction”. They go on to state that “the quantitative research on ‘school effects’ might be presented in isolation from their historical, cultural and social contexts, and their interaction with home and community backgrounds”.

This final point is of interest when we consider the forthcoming publication of Visible Learning for Parents. What might the book be geared towards? Parents’ understanding of, and endorsement of, school efforts to execute and inculcate strategically selected interventions to improve achievement? Perhaps we may see further brand-strengthening through the introduction of an armada of products and services. There will be a book of course, but what about ($kerching$) an online portal, app or school-home support software to connect schools with parents and families? Perhaps ($kerching$) PD will follow, through accredited providers of course. I dare say the initiative will do the rounds at ($kerching$) conferences the world over, extolling the vital role parents and families play in the educative process and nailing down the critical support of the home situation.

Beyond a school’s choice to adopt strategies which anchor themselves in meta-analysis, there is the bigger question of how far up the system chain the acceptance of intervention effectiveness goes, and how wide the sphere of influence extends. Simpson (2017) has noted that our preoccupation with ‘what works’ in education has led to “an emphasis on developing policy from evidence based on comparing and combining a particular statistical summary of intervention studies: the standardised effect size”. The paper suggests that the research areas which top the array of ‘effective’ interventions are susceptible to research design manipulation – they stand out because of methodological choices, not stronger effects. It also asserts that policy has fallen victim to metricophilia: “the unjustified faith in numerical quantities as having particularly special status as ‘evidence’ (Smith 2011)”. Dr Gary Jones does a great job of highlighting this and other worries in his blog post on how this paper puts another ‘nail in the coffin’ of Hattie’s Visible Learning. Similarly, Ollie Orange ably dismantles the statistics behind Hattie’s meta-analysis.
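For reference, the standardised effect size at the centre of this debate is usually Cohen’s d or a close relative (the gloss below is mine, not Simpson’s notation):

\[
d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}
\]

The denominator is where much of the trouble hides: anything that narrows the spread of scores, such as a test tightly bound to what was taught or a restricted sample, shrinks the pooled standard deviation and inflates d without the intervention becoming one jot more effective.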

The seductive rhetoric of Hattie’s work can be found almost everywhere and certainly seems compelling. However, if education is solely about impact and effect size, i.e. one year’s growth for one year’s input, will this book actually add anything of value to the combined community’s pursuit of improvement in young people’s achievement? With questions being asked of the methodological credibility from which all else gushes forth, shouldn’t we be questioning how much we buy into it?

2 thoughts on “Opportunity Knocks – Again, and Again, and Again”

  1. Thanks Jon for an excellent article. You may be interested in Professor John O’Neill’s analysis of Hattie’s influence on New Zealand education policy:

    “public policy discourse becomes problematic when the terms used are ambiguous, unclear or vague” (p1). The “discourse seeks to portray the public sector as ‘ineffective, unresponsive, sloppy, risk-averse and innovation-resistant’ yet at the same time it promotes celebration of public sector ‘heroes’ of reform and new kinds of public sector ‘excellence’. Relatedly, Mintrom (2000) has written persuasively in the American context, of the way in which ‘policy entrepreneurs’ position themselves politically to champion, shape and benefit from school reform discourses” (p2).

    I’ve got more details of Hattie’s research here and am looking for others to help – what surprises me most is the number of times meta-analyses are misrepresented by Hattie – http://visablelearning.blogspot.com.au/

  2. Jon,

    Thanks for the very interesting article. The Simpson article does much more than suggest that Hattie (and, in the UK, the EEF) is encouraging a superficial approach to school policy based on numerical quantities derived from research papers. He argues that the interpretation of those numbers is plainly wrong.

    He argues that Hattie’s ‘d’ (and the EEF’s far more insidious ‘months progress’) is assumed to be an indication of the educational effectiveness of an intervention. It isn’t. It is a measure of the clarity of the experiment and is highly sensitive to the choice of test, the choice of sample and the type of control group.

    In some areas (like metacognition and feedback) it is easy for a researcher to choose a test tightly bound to the topic taught with the intervention, a restricted sample and a control group which gets none of the intervention. That leads to a large d. In other areas (like behavioural interventions or altering the length of the school day) the tests tend to be public exam results, the sample tends to be wide and you can’t have a ‘zero’ control group (you can’t refuse to have any behavioural intervention, and you don’t compare to a zero-hour school day!). That leads to small ds. So feedback ranking higher than behavioural interventions has nothing to do with its being more educationally effective; it is simply easier to run clear experiments on feedback (a short simulation after these comments illustrates the point).

    When you then go to meta-analysis and combine studies with different control groups, different types of test and different types of population, you just get a big mess. Hattie and the EEF would argue that meta-analysis smooths all of this out, but it doesn’t, since those big influences on d tend to differ systematically between educational areas.

    The most basic statistical assumptions that need to be met before you can rank-order research outcomes are not tested for by Hattie or the EEF (who really do know they should be testing for them) and are very obviously violated by the sets of research they examine.
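The commenter’s point about d rewarding clear experiments rather than effective interventions is easy to demonstrate. Below is a minimal simulation sketch (Python with NumPy; the group sizes, means and spreads are invented purely for illustration): the identical raw-score gain produces a much larger d when the outcome measure has a narrow spread, as with a tightly bound test and a restricted sample, than when it is a broad public exam taken by a wide sample.

```python
import numpy as np

rng = np.random.default_rng(42)

def cohens_d(treatment, control):
    """Standardised mean difference: gap in means divided by pooled SD."""
    nt, nc = len(treatment), len(control)
    pooled_var = ((nt - 1) * treatment.var(ddof=1)
                  + (nc - 1) * control.var(ddof=1)) / (nt + nc - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

RAW_GAIN = 5.0  # the same 'true' intervention effect, in raw marks, in both scenarios

# Scenario A: outcome test tightly bound to the taught topic, restricted sample
# -> scores cluster (SD = 5), so the same raw gain looks enormous.
control_a = rng.normal(50, 5, size=100)
treatment_a = rng.normal(50 + RAW_GAIN, 5, size=100)

# Scenario B: broad public exam, wide unrestricted sample
# -> scores spread out (SD = 20), so the same raw gain looks modest.
control_b = rng.normal(50, 20, size=100)
treatment_b = rng.normal(50 + RAW_GAIN, 20, size=100)

print(f"Narrow test, restricted sample: d = {cohens_d(treatment_a, control_a):.2f}")
print(f"Broad exam, wide sample:        d = {cohens_d(treatment_b, control_b):.2f}")
```

Nothing about the intervention differs between the two scenarios; only the measurement design does, which is precisely why rank-ordering interventions by d across very different research areas is so fragile.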
