I enjoyed Oskar Morgenstern’s trenchant observations about the (in)accuracy of economic statistics in [amazon_link id=”0691003513″ target=”_blank” ]On the Accuracy of Economic Observations[/amazon_link]. Here are a few more examples:
[amazon_image id=”0691003513″ link=”true” target=”_blank” size=”medium” ]On the Accuracy of Economic Observations[/amazon_image]
“The idea that as complex a phenomenon as the change in a ‘price level’, itself a heroic theoretical abstraction, could at present be measured to such a degree of accuracy [a tenth of one percent] is simply absurd.”
“It behooves us to pause in order to see what even a 5 percent difference in national income means. Taking the US and assuming a Gross National Product of about 550 billion dollars, this error equals + or – 30 billion dollars. This is more than twice the best annual sales of General Motors…. It is far more than the total annual production of the entire electronics industry in the United States.”
(Updating and relocating this exercise: a 5% error in the UK’s £1.7 trillion GDP comes to roughly £85 billion, almost the size of the entire UK motor industry including its supply chain, more than the total profits of the financial services sector, and about the same as households spend in total on food and drink.)
The errors don’t get the attention they deserve, Morgenstern writes: “Instead, in Great Britain as in the United States and elsewhere, national income statistics are still being taken at their face value and interpreted as if their accuracy compared favourably with that of the measurement of the speed of light.” And he points out that, as a matter of arithmetic, when you look at growth rates of figures each measured with some error, even proportionately small errors in the levels turn into large errors in the rate of change. He gives an example: a ‘true’ growth rate of 1.8% could be measured as anywhere between -7.9% and +12.5% with measurement errors of up to 5% in the two levels.
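Here is that arithmetic spelled out in a short sketch (the levels of 100 and 101.8 are simply the numbers implied by his example):

```python
# Morgenstern's example: a 'true' growth rate of 1.8%, with each level
# measured with an error of up to 5%, taken in the worst-case directions.
error = 0.05      # maximum proportional error in each measured level
level_1 = 100.0   # 'true' level in the first period
level_2 = 101.8   # 'true' level in the second period (1.8% growth)

# Lowest measured growth: first level overstated, second understated
growth_low = level_2 * (1 - error) / (level_1 * (1 + error)) - 1
# Highest measured growth: first level understated, second overstated
growth_high = level_2 * (1 + error) / (level_1 * (1 - error)) - 1

print(f"true growth: {level_2 / level_1 - 1:+.1%}")
print(f"measured growth could lie between {growth_low:+.1%} and {growth_high:+.1%}")
# -> true growth: +1.8%
# -> measured growth could lie between -7.9% and +12.5%
```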
It’s interesting that every economist and statistician would acknowledge the problem of measurement error, and yet virtually all ignore it. We’ve invested so much in the figures that admitting great uncertainty would undermine their totemic value and the ritual pronouncements about them. At a talk I gave at IFN in Stockholm yesterday about [amazon_link id=”0691169853″ target=”_blank” ]GDP[/amazon_link], one of the respondents, Karolina Ekholm, State Secretary at the Ministry of Finance, said it made her uneasy that key policy decisions such as cutting government spending depend so much on the output gap – the difference between two imaginary and uncertain numbers. Of course we have to try to measure, and how marvellous it would be if the official statisticians got some extra resources to improve the accuracy of the raw data collection. Yet I think she’s right to be uneasy.
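The output gap makes Morgenstern’s point even more starkly, because it is the difference between two uncertain numbers. A minimal sketch, with purely illustrative error bands (the ±2% and the levels below are my assumptions, not official figures):

```python
# Purely illustrative: the output gap is the difference between two
# uncertain estimates, so even modest errors in each can swamp the gap.
error = 0.02                     # assumed ±2% error in each estimate
actual, potential = 100.0, 99.0  # 'true' gap of about 1%

# Worst cases: each estimate pushed to the opposite end of its error band
gap_low = (actual * (1 - error) - potential * (1 + error)) / (potential * (1 + error))
gap_high = (actual * (1 + error) - potential * (1 - error)) / (potential * (1 - error))

print(f"true gap: {(actual - potential) / potential:+.1%}")
print(f"measured gap could lie between {gap_low:+.1%} and {gap_high:+.1%}")
# -> true gap: +1.0%
# -> measured gap could lie between -3.0% and +5.1%
```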
Next on my reading pile: [amazon_link id=”B00SLUQ5HS″ target=”_blank” ]The Politics of Large Numbers: A History of Statistical Reasoning[/amazon_link] by Alain Desrosières and [amazon_link id=”069102409X″ target=”_blank” ]The Rise of Statistical Thinking, 1820-1900[/amazon_link] by Theodore Porter.
[amazon_image id=”B00SLUQ5HS” link=”true” target=”_blank” size=”medium” ]The Politics of Large Numbers: A History of Statistical Reasoning: Written by Alain Desrosieres, 2002 Edition, (New Ed) Publisher: Harvard University Press [Paperback][/amazon_image] [amazon_image id=”069102409X” link=”true” target=”_blank” size=”medium” ]The Rise of Statistical Thinking, 1820-1900[/amazon_image]