When taking the necessary in-depth look at Visible Learning with the eye of an expert, we find not a mighty castle but a fragile house of cards that quickly falls apart.
Source: HOW TO ENGAGE IN PSEUDOSCIENCE WITH REAL DATA: A CRITICISM OF JOHN HATTIE’S ARGUMENTS IN VISIBLE LEARNING FROM THE PERSPECTIVE OF A STATISTICIAN | Bergeron | McGill Journal of Education / Revue des sciences de l’éducation de McGill
Hattie’s effect sizes are often thrown around as catch-all measurements of classroom methods. This reminds me of the learning styles discussions from several years ago. Both approaches share the same critical flaw: reducing teaching methods and learning habits to a single style or a single measure of effect is bad practice.
The ideas behind learning styles or measured effects on instruction are fine, but not when presented as scientific fact. A statistical breakdown of Hattie’s effect sizes shows this clearly, as evidenced by this line:
Basically, Hattie computes averages that do not make any sense. A classic example of this type of average is: if my head is in the oven and my feet are in the freezer, on average, I’m comfortably warm.
Aggregating each category into a single effect size disregards all of the confounding variables present in a given population or individual. The learning-styles framing has the same reductionist problem. In the mornings, reading works better for me. By the end of the day, I’m using YouTube tutorial videos for quick information. The style changes with the context, and the idea of a single, best style ignores those context clues.
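The oven-and-freezer problem can be sketched numerically. The effect sizes below are made up for illustration (they are not Hattie’s or Bergeron’s data): imagine a method that helps strongly in some study contexts and backfires in others. The pooled average lands near zero, a number that describes no actual context.

```python
from statistics import mean, stdev

# Hypothetical effect sizes (Cohen's d) from two very different
# study contexts -- illustrative numbers only.
oven = [1.1, 1.3, 1.2]        # contexts where the method helped a lot
freezer = [-1.2, -1.0, -1.1]  # contexts where it backfired

pooled = oven + freezer
print(f"pooled mean d = {mean(pooled):.2f}")  # ~0.05: "comfortably warm"
print(f"pooled stdev  = {stdev(pooled):.2f}")  # the spread the mean hides
```

The mean alone looks like "no effect," while the spread shows two strong, opposite effects. Reporting a single aggregated number without its variance (or the moderators behind it) is exactly the kind of average the quoted line mocks.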
Use descriptors and measurements with care. Recognize the deficiencies and adjust for context as needed.