Party animal that I am, I’ve been spending my spare time in Stockholm this week (where I’ve been doing a couple of events for the Institutet for Naringslivsforskning) reading Oskar Morgenstern’s [amazon_link id=”B00177CAI0″ target=”_blank” ]On The Accuracy of Economic Observations[/amazon_link]. The 1963 edition of this 1950 book was sent to me by a friend who had read a draft paper of mine, due out quite soon, which cites the book in a quotation from a recent paper by Charles Manski.
[amazon_image id=”0691003513″ link=”true” target=”_blank” size=”medium” ]On Accuracy of Economic Observations[/amazon_image]
As so often when you start researching something, it turns out there’s nothing completely new. Morgenstern wrote trenchantly about some of the things that have been bothering me about the way economists use economic statistics. For example: “A significant difference between the use of data in the natural and social sciences is that in the former the producer of observations is usually also their user.” Economic data, on the other hand, are collected by many hands, often consist of time series, and are processed by statisticians. Economists rarely think hard enough about their numbers – a problem made all the worse by the easy availability of statistics to download from handy websites and pour into a software package that will churn through regressions and generate test statistics.
Morgenstern also notes the strong incentives many ‘creators’ of economic data have to give misleading responses to survey questions. What’s your income? What price do you charge for this service, oh oligopoly provider? What is the level of your GDP, oh Greek government? “‘Strategic’ considerations play havoc with reliability.” Even if all respondents are well-intentioned, sampling error, general mistakes in data collection and processing, and so on mean we place a ludicrous amount of confidence in any and all economic statistics. “Three or four digits is probably the maximum accuracy of primary data that ever needs to be considered in the vast majority of economic arguments,” he wrote.
Gratifyingly, he likes the technique of spectral analysis of time series, which I did a lot of in my PhD days, because it arranges the data in frequency bands. “There can hardly be any doubt that the powerful new techniques of spectral analysis will put the study of economic fluctuations on a new basis,” the book says. Alas, it didn’t catch on except among a small number of time series anoraks. The chapter I’ve just finished concludes that economics still needs to learn to live with the fundamental importance of errors in the data, and include them in its theories – just as modern physics has. Alas, this too has not caught on among economists. All of which raises the question – why not? Why do economic statistics seem not to have improved since 1960?
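For readers who haven’t met it, a minimal sketch of the idea behind spectral analysis – decomposing a series into frequency bands and seeing where the variance sits – can be run with NumPy’s FFT. The synthetic series and its 8-quarter cycle here are my own illustrative assumptions, not anything from Morgenstern or my PhD work:

```python
import numpy as np

# Illustrative only: a quarterly series with an assumed 8-quarter cycle
# buried in noise, to show how a periodogram sorts variance by frequency.
n = 400  # quarters of data
t = np.arange(n)
rng = np.random.default_rng(0)
series = np.sin(2 * np.pi * t / 8) + 0.5 * rng.standard_normal(n)

# Periodogram: squared FFT magnitudes of the demeaned series
freqs = np.fft.rfftfreq(n, d=1.0)  # in cycles per quarter
power = np.abs(np.fft.rfft(series - series.mean())) ** 2 / n

peak = freqs[np.argmax(power)]
print(f"dominant frequency: {peak:.3f} cycles/quarter "
      f"(period {1 / peak:.0f} quarters)")
```

The spike in the periodogram at 0.125 cycles per quarter recovers the built-in 8-quarter cycle – the “frequency bands” view of economic fluctuations that Morgenstern found so promising.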
My favourite chart at the moment is the Bank of England’s GDP forecast in the Inflation Report. February’s showed a 90% chance that UK annual GDP growth was between 1% and 5% at the start of 2015, taking into account only past experience of revisions to the data, and not any of the additional potential sources of error. This is the difference between the standard of living doubling in 70 years and in 14 years – in other words, vast. Something to bear in mind for anyone making any last-minute decisions about how to vote today on the basis of parties’ economic claims.
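The doubling times above are just the standard compound-growth arithmetic (the familiar “rule of 70” is an approximation to it), which can be checked in a couple of lines:

```python
import math

def doubling_time(growth_rate):
    """Years for living standards to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

print(round(doubling_time(0.01)))  # ~70 years at 1% annual growth
print(round(doubling_time(0.05)))  # ~14 years at 5% annual growth
```

That 90% band really does span the difference between a doubling within a working lifetime and a doubling within a decade and a half.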