At a governors’ meeting this morning, at the primary school where I’m the chair of governors, we discussed the school’s results in the Key Stage 2 SATs (for those who don’t know, these are the national tests children in England are required to take at the end of their primary school career, aged 10-11). The results were excellent. Our philosophy is that we aim for ever-greater attainment, so we look for improvement every year.
However, at this time of year I always reflect on the inability of the entire educational, policy and political establishment to understand that the variability observed in a sample will always be larger, the smaller the sample. Ours is currently a small school with under 30 pupils in the final year, and as few as 24 or 25 might be entered for a test subject depending on their circumstances. Each child is worth several percentage points in the results table. Stuff happens, and one child might do better or worse on test day. There is no real meaning to be read into quite large year-on-year changes in the scores for a small school – and more meaning to be read into them the bigger the school. So I’m delighted by the big increases in our results this year (and one would never want to use small sample size as an excuse for a decline without really careful probing); but I take more comfort from the upward trend over three years, and even more from the other data that we look at as governors, and from observing lessons and looking at children’s work in school.
The bigger the sample, the more likely it is that extremes at one end and the other will cancel each other out when you calculate the average. This inverse relationship between variability in the sample and sample size is well explained in Chapter 1 of Howard Wainer’s excellent book, [amazon_link id="0691152675" target="_blank" ]Picturing the Uncertain World: How to Understand, Communicate and Control Uncertainty through Graphical Display[/amazon_link]. He gives brilliant examples of people drawing incorrect, or at least unproven, conclusions from their failure to take account of this relationship. They include the movement in the US supporting smaller schools (as he puts it, billions of dollars are being spent on increasing variance), interpretations of cancer “clusters”, supposed differences in intelligence or attainment between the sexes – there are countless examples.
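To make the point concrete, here is a minimal simulation sketch (the 80% pass rate and the cohort sizes are my own illustrative assumptions, not our school’s figures). It shows how the year-to-year swing in a percentage score shrinks as the cohort grows:

```python
import random

random.seed(1)

TRUE_PASS_RATE = 0.8  # assumed underlying chance each child meets the standard
YEARS = 10_000        # number of simulated "years" per cohort size

for cohort in (25, 100, 400):  # a small school versus larger ones
    results = []
    for _ in range(YEARS):
        passes = sum(random.random() < TRUE_PASS_RATE for _ in range(cohort))
        results.append(100 * passes / cohort)
    mean = sum(results) / YEARS
    sd = (sum((r - mean) ** 2 for r in results) / YEARS) ** 0.5
    print(f"cohort {cohort:>3}: mean {mean:.1f}%, year-to-year s.d. {sd:.1f} points")
```

With these assumptions a 25-child cohort swings by about 8 percentage points from one year to the next purely by chance, while a 400-child cohort swings by about 2, which is exactly the inverse relationship Wainer describes.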
I don’t think basic statistical literacy is included in the new curriculum for English primary schools – a shame when there’s evidence everywhere of its absence.
[amazon_image id="0691152675" link="true" target="_blank" size="medium" ]Picturing the Uncertain World: How to Understand, Communicate, and Control Uncertainty through Graphical Display[/amazon_image]
Have you read Gerd Gigerenzer’s ‘Reckoning with Risk’? It is scary how few medical practitioners could work out the probability of a patient having a disease given a positive test result, even when the rates of false positives and false negatives are known.
Your point applies much more generally. How many bad political decisions are made because we overreact to rare but salient events (say, reducing civil liberties after a horrendous terrorist incident) and underreact to frequent but unremarkable ones (say, doing nothing about the number of road accidents)?
I haven’t read it so thank you for the tip. People obviously find probability really, really difficult – although as you say one would have hoped doctors had been taught how to interpret test results properly. Economists aren’t all that much better. And as for the policy world…
All the more reason for teaching statistics. There’s lots of dull stuff taught in CCW classes that could easily be eliminated to make way for it. All we need is teachers who understand it in the first place.
I like much of Gigerenzer’s work – on the plus side, he is also a great communicator of his ideas to lay audiences. (Though usually not to primary school classes 😉)
Here’s a link to a PDF by him on statistical literacy and medicine/health:
http://citrixweb.mpib-berlin.mpg.de/montez/upload/PaperLibrary/GG_etAl_Helping_doctors-1.pdf
Thank you – have downloaded it to read.
Yes, he’s right that it’s hard to think in terms of probabilities – best to use natural frequencies.
But whenever I try to use Bayes’ theorem, I just use the four boxes – as in this example:
http://stumblingandmumbling.typepad.com/stumbling_and_mumbling/2007/03/seeing_bayes_th.html
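For anyone who hasn’t seen the four boxes worked through, here is a short sketch in Python. The prevalence, sensitivity and false-positive rate are illustrative assumptions of mine, not figures from the linked post:

```python
# Four-box (natural frequency) version of Bayes' theorem for a diagnostic test.
# All three rates below are assumed for illustration only.
PREVALENCE = 0.01      # 1% of the population has the disease
SENSITIVITY = 0.90     # P(positive | disease), i.e. a 10% false-negative rate
FALSE_POSITIVE = 0.09  # P(positive | no disease)

N = 10_000  # imagine testing a population of 10,000 people

sick = PREVALENCE * N                  # 100 people have the disease
healthy = N - sick                     # 9,900 do not

true_pos = SENSITIVITY * sick          # box 1: sick and positive    (90)
false_neg = sick - true_pos            # box 2: sick and negative    (10)
false_pos = FALSE_POSITIVE * healthy   # box 3: healthy and positive (891)
true_neg = healthy - false_pos         # box 4: healthy and negative (9,009)

p = true_pos / (true_pos + false_pos)  # P(disease | positive test)
print(f"P(disease | positive) = {true_pos:.0f}/{true_pos + false_pos:.0f} = {p:.1%}")
```

With these numbers a positive test means only about a 9% chance of actually having the disease, the kind of counter-intuitive result Gigerenzer found most practitioners got badly wrong.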
Kahneman and Tversky’s classic paper in Science would be a good place to start. Very clear and readable.
Ref: Amos Tversky and Daniel Kahneman, ‘Judgment under Uncertainty: Heuristics and Biases’, Science, New Series, Vol. 185, No. 4157 (27 September 1974), pp. 1124-1131.
http://psiexp.ss.uci.edu/research/teaching/Tversky_Kahneman_1974.pdf
Thanks for the link – also in my in-tray to read now.
The ideal answer is: ‘Life is hard. After all, it kills you.’ – Katharine Hepburn