Money, money, money

Money has always seemed mysterious to me, and so I’ve always carefully avoided monetary economics as too difficult (which makes it ironic that when I returned from my US PhD programme to a job in the UK Treasury in 1985 I was assigned to the monetary policy unit – this in the days long before Bank of England independence, when the Treasury and Chancellor made the policy decisions). Still, from time to time I dip in, and I found Stefan Eich’s The Currency of Politics: The Political Theory of Money from Aristotle to Keynes an interesting read.

The book is an intellectual history of how certain key thinkers regarded money, covering Locke, Fichte and Marx in between Aristotle and Keynes. The selection is used to illustrate a core point: that how money is theorised and governed involves political choices, not technocratic ones. This repeated theme reminded me of Paul Tucker’s Unelected Power, which similarly argues against seeing monetary policy as an expert domain. I was initially resistant to this but on reflection at least partly agreed – only partly, first because the technical affordances set the boundaries of policy feasibility, and second because I hold on to the idea that most ‘experts’ in such areas are motivated by a sense of the public good rather than ideology or personal philosophy.

Anyway, back to this book. Each chapter reflects a close reading of the relevant work of each subject combined with an analysis of the contemporary political context. Thus he argues that Locke, for example, in his contributions to the debate about England’s increasingly clipped silver coinage, made the political move of ‘depoliticising’ money, arguing for its ‘intrinsic value’ linked to a quantity of metal: “Locke’s intervention was itself political, even where it removed political discretion,” with the aim of bolstering the role of the state as a general guarantor of (classical) liberal freedoms while limiting its scope to act in detail.

Similarly on Keynes, he writes, monetary policy, “was a public task tied to social justice. It derived its legitimacy from the implicit political covenant that also grounded the state. But it was nonetheless removed by at least one degree from popular politics since it relied on management by a group of experts who had to carefully navigate between democratic legitimacy and the political uses of their expertise.” This seems spot on. And the act of navigation is challenging in turbulent times. Independent central banks have broadly done a good job of stabilising the aggregate economy since the mid-2000s but a poor job of recognising the distributional and political consequences of QE on a massive scale.

The other message I took from the book was that political and ideological contention both contributes to the emergence of new monetary technologies and is channelled by the affordances of those technologies. When I worked in the Treasury, my job was basically to try to figure out why monetary aggregates were growing so damn quickly – this was the tail end of the pure monetarist experiment in the UK. It turned out that trying monetary targeting at a time of huge technological change (derivatives markets exploding, ATMs and credit/debit card use spreading rapidly, deregulation of consumer credit) was doomed to failure. I still don’t understand cryptocurrencies but they are certainly part of this continuing dialectic of – to mix metaphors horribly – walking the tightrope between the inevitably political character of the monetary system and the desirability of stability in the economy, which requires taking it out of politics. The Currency of Politics really helps in understanding this.


Unaccountable

I read a proof of The Unaccountability Machine by Dan Davies with a view to blurbing it, and was more than happy to recommend it. This is a fascinating book. The subtitle indicates its scope: “Why Big Systems Make Terrible Decisions and How the World Lost Its Mind”. The book asks why mistakes and crises never seem to be anybody’s fault – it’s always ‘the system’. Davies uses the concept of the ‘accountability sink’ – a policy or set of rules that prevent individuals from making or changing decisions and thus being accountable for them. He writes: “For an accountability sink to function, it has to break a link; it has to stop feedback from the person affected by the decision from affecting the operation of the system. The decision has to be fully determined by the policy, which means that it cannot be affected by any information that wasn’t anticipated.” I predict that the more machine learning automates decisions, the more accountability sinks we will experience. Think Horizon. But there are plenty of non-automated examples. Davies cites, for example, Gill Kernick’s wonderful book on the Grenfell disaster (and others), Catastrophe and Systemic Change.

The book draws heavily on Stafford Beer’s cybernetics, providing the public service of digesting all of his writings and making them accessible. Cybernetics was of course concerned with using the flow of information and enabling feedback. Decisions about how to make decisions are part of the system. Hence the often-quoted principle that “the purpose of a system is what it does” – and not what it says it does. The book has several chapters describing how systems operate, including how to conceptualise a ‘system’ in the complex, messy real world. Davies observes that this requires a representation that is “both rigorous and representative of reality.” The selection of categories and relationships in a system is a property of the choices about description and classification made by the analyst, rather than of inherent reality. He describes – using plentiful examples – how systems so often malfunction.

The book has a chapter specifically diagnosing the strengths but also the malfunctions of economics. He writes: “Economics has been a major engine of information attenuation for the control system. Adopting the economic mode of thinking reduces the cognitive demands placed on our ruling classes by telling them there are a lot of things they don’t have to bother thinking about. … when decisions are made that have disastrous long-term consequences as a result of relatively trivial short-term cash savings, the pathology is often directly related to something that seemed like a good idea to an economist.” There’s an interesting section on ‘markets as computing fabric’, a ‘magic calculating machine’. This was echoed recently in some terrific posts by Henry Farrell and Cosma Shalizi. It’s a fruitful way of thinking about collective economic outcomes. I also strongly agree with the sections about collecting the data – classification and data collection is a super-power (as I’ve been writing for years in connection with GDP and beyond). The book says, “Numbers are collected for a purpose and it’s often surprisingly hard to use them for any other purpose.” Moreover, many numbers are not collected, which makes it hard to ‘prove’ claims about the potential for the system to operate differently.

The book ends by returning to system dysfunction – ‘morbidity’. From the toxic idea of shareholder value maximisation to the fentanyl crisis in the US, from the collapse of public infrastructure networks to the adverse effects of private equity (which Brett Christophers has dissected forensically in his book), economic and financial systems need a redesign. Davies suggests one step that he thinks would have a big impact: make these investors liable for company debts. Oh, and make sure the economists are not in charge: “Every decision-making system set up as a maximiser needs to have a higher-level system watching over it.”

The Unaccountability Machine does not directly address my current preoccupation, which is the implications of GOF machine learning and generative AI for automated decision-making in public services in particular, but it is highly relevant to it. It’s a cracking read and I highly recommend it.


AIs as the best of us

Another book of many out on AI is As If Human: Ethics and Artificial Intelligence by Nigel Shadbolt and Roger Hampson. I found this a very accessible book on AI ethics, possibly because neither author is an academic philosopher (sorry, philosophers….). Generally I’m a bit impatient with AI ethics, partly because it has dominated debate about AI at the expense of thinking about incentives and politics, and partly because of my low tolerance for the kind of bizarre thought experiments that seem to characterise the subject. Nevertheless, I found this book clear and pretty persuasive, with the damn trolley problem only appearing a small number of times.

The key point is reflected in the title: “AIs should be judged morally as if they were humans” (although of course they are not). This implies that any decisions made by machines affecting humans should be transparent, accountable and open to appeal and redress; we should treat AI systems as if they were humans taking the decisions. There may be contested accountability beyond that among other groups of humans – the car manufacturer and the insurer, say – but ‘the system’ can’t get away with no moral accountability. (Which touches on a fantastic new book, The Unaccountability Machine, by Dan Davies, that I will review here soon.)

Shadbolt and Hampson end with a set of seven principles for AIs, including ‘A thing should say what it is and be what it says’, and ‘Artificial intelligences are only ethical if they embody the best human values’. Also that private mega-corps should not be determining the future of humanity. As they say, “Legal pushback by individuals and consumer groups against the large digital corporations is surely coming.”

As If Human is well worth a read. It’s short and high level but does have examples (and includes the point, which seems obvious but is too rarely made, that the insurance industry is steadily putting itself out of business by progressively reducing risk pooling through data use).


Digital design

Over the holiday weekend I read (among other things*) Digital Design: A History by Steven Eskilson. I enjoy reading design books in general – a window into a more glamorous specialism than economics. This one covers a range of aspects, from the design of gadgets (from the IBM Selectric typewriter to Apple’s dominance in this arena) to fonts to web design to data visualisation to architecture. So it’s quite eclectic, and includes using digital tools to design (as in architecture) as well as the design of digital artefacts. But one theme that emerges across all these areas is the lasting influence of the Bauhaus (about which I read a terrific book a while back, a biography of Gropius by Fiona McCarthy). Digital Design is also a very handsome book with loads of images.

* Robin Ince’s Bibliomaniac and two thirds of The Currency of Politics by Stefan Eich, which I’ll write about another time.


AI and us

Code Dependent: Living in the shadow of AI by Madhumita Murgia is a gripping read. She’s the FT’s AI Editor, so the book is well-written and benefits from her reporting experience at the FT and previously Wired. It is a book of reportage, collating tales of people’s bad experiences either as part of the low-paid work force in low income countries tagging images or moderating content, or being on the receiving end of algorithmic decision-making. The common thread is the destruction of human agency and the utter absence of accountability or scope for redress when AI systems are created and deployed.

The analytical framework is the idea of data colonialism, the extraction of information from individuals for its use in ways that never benefit them. The book is not entirely negative about AI and sees the possibilities. One example is the use of AI on a large sample of knee X-rays to look for osteoarthritis. The puzzle being tackled by the researcher concerned was that African American patients consistently reported greater pain than patients of European extraction when their X-rays looked exactly the same to the human radiologists. The solution turned out to be that the X-rays were scored against a scale developed in mid-20th century Manchester on white, male patients. When the researcher, Ziad Obermeyer, fed a database of X-ray images to an AI algorithm, his model proved a much better predictor of pain. Humans wear blinkers created by the measurement frameworks we have already constructed, whereas AI is (or can be) a blank slate.

However, this is one of the optimistic examples in the book, where AI can potentially offer a positive outcome for humans. It is outnumbered by the counter-examples – Uber drivers being shortchanged by the algorithm or falsely flagged for some misdemeanour and having no possibility of redress, women haunted by deepfake pornography, Kenyan workers traumatised by the images they need to assess for content moderation yet unable to even speak about it because of the NDAs they have to sign, data collected from powerless and poor humans to train medical apps whose use they will never be able to afford.

The book brought to life for me an abstract idea I’ve been thinking about pursuing for a while: the need to find business models and financing modes that will enable the technology to benefit everyone. The technological possibilities are there but the only prevailing models are exploitative. Who is going to figure out how to deploy AI for the common good? How can the use of AI models be made accountable? Because it isn’t just a matter of ‘computer says No’, but rather ‘computer doesn’t acknowledge your existence’. And behind the computers stand the rich and powerful of the tech world.

There are lots of new books about AI out or about to be published, including AI Needs You by my colleague Verity Harding (on which I’ll post separately). I strongly recommend both of these, and would also observe that it’s women at the forefront of pushing for AI to serve everyone.
