AI and us

Code Dependent: Living in the Shadow of AI by Madhumita Murgia is a gripping read. She is the FT's AI Editor, so the book is well written and benefits from her reporting experience at the FT and previously Wired. It is a book of reportage, collating tales of people's bad experiences, either as part of the low-paid workforce in low-income countries tagging images or moderating content, or as those on the receiving end of algorithmic decision-making. The common thread is the destruction of human agency and the utter absence of accountability or scope for redress when AI systems are created and deployed.

The analytical framework is the idea of data colonialism, the extraction of information from individuals for use in ways that never benefit them. The book is not entirely negative about AI and sees the possibilities. One example is the use of AI on a large sample of knee X-rays looking for osteoarthritis. The puzzle being tackled by the researcher concerned was that African American patients consistently reported greater pain than patients of European extraction when their X-rays looked exactly the same to human radiologists. The solution turned out to be that the X-rays were scored against a scale developed in mid-20th-century Manchester on white, male patients. When the researcher, Ziad Obermeyer, fed a database of X-ray images to an AI algorithm, his model proved a much better predictor of pain. Humans wear blinkers created by the measurement frameworks we have already constructed, whereas AI is (or can be) a blank slate.

However, this is one of the optimistic examples in the book, where AI can potentially offer a positive outcome for humans. It is outnumbered by the counter-examples – Uber drivers being shortchanged by the algorithm or falsely flagged for some misdemeanour and having no possibility of redress, women haunted by deepfake pornography, Kenyan workers traumatised by the images they need to assess for content moderation yet unable to even speak about it because of the NDAs they have to sign, data collected from powerless and poor humans to train medical apps whose use they will never be able to afford.

The book brought to life for me an abstract idea I’ve been thinking about pursuing for a while: the need to find business models and financing modes that will enable the technology to benefit everyone. The technological possibilities are there, but the only prevailing models are exploitative. Who is going to find out how to deploy AI for the common good? How can the use of AI models be made accountable? Because it isn’t just a matter of ‘computer says no’, but rather ‘computer doesn’t acknowledge your existence’. And behind the computers stand the rich and powerful of the tech world.

There are lots of new books about AI out or about to be published, including AI Needs You by my colleague Verity Harding (I’ll post separately). I strongly recommend both of these; and would also observe that it’s women at the forefront of pushing for AI to serve everyone.


AI needs all of us

There’s no way I can be unbiased about Verity Harding’s new book AI Needs You: How we can change AI’s future and save our own, given that it began with a workshop Verity convened and the Bennett Institute hosted in Cambridge a few years ago. The idea – quite some time before the current wave of AI hype, hope and fear – was to reflect on how previous emerging disruptive technologies had come to be governed. After some debate we settled on space, embryology, and ICANN (the internet domain naming body), as between them these seemed to echo some of the issues regarding AI.

These discussions set the scene for Verity’s research into the detailed history of governance in each of these cases, and the outcome is a fascinating book that describes each in turn and reflects on the lessons for us now. The overall message is that the governance and use of technology in the public interest, for the public good, is possible. There is no technological determinism, nor any trade-off between public benefit and private innovation. The ‘Silicon Valley’ zeitgeist of inevitability, the idea that the tech is irresistible and society’s task is to leave its management to the experts, is false.

The implication of this – and hence the book’s title – is that: “Understanding that technology – how it gets built, why, and by whom – is critical for anyone interested in the future of our society.” How AI develops, what it is used for and how – these are political questions requiring engaged citizens. This is why the historical examples are so fascinating, revealing as they do the messy practicalities and contingency of citizen engagement, political debate, quiet lobbying, co-ordination efforts, events and sheer luck. The embryology example is a case in point: the legislation in the UK was based on the hard work of the Warnock Commission, its engagement with citizens and its tireless efforts to explain the science; but also on years of political debate and a key decision by Mrs Thatcher about its Parliamentary progress. The resulting legislation has since stood the test of time and set an ethical and regulatory framework for other countries too. The lesson is that the governance of AI will not be shaped by the clever people designing the technology, but will emerge as the outcome of political and social forces.

The book is beautifully written and a gripping read (more than you might expect for a book about regulating technology). There are quite a few new books on AI out this spring, and there are others I’ve read in proof that are also excellent; but this will definitely be one of the ones that stands the test of time. Not for nothing did Time magazine name Verity as one of the 100 most influential people in AI. She is now leading a MacArthur Foundation-funded Bennett Institute project on the geopolitics of AI. I’ll be in conversation with her at Waterstones in Cambridge on 14th March.


Politics and economics

There’s a sentence I underlined twice in Ben Ansell’s Why Politics Fails: “Politics makes growth.” This is my main takeaway from editing a series of policy essays from my colleagues in the Productivity Institute, which will be out at the end of November for National Productivity Week. If you ask people what the main causes of the UK’s dismal productivity in recent times are, they will pick what’s close to their own interests – various skills policies (the madness of the student loan system, the dreadful FE system, the folly of tearing up painstakingly agreed T-levels for a new system…), R&D policies and the lack of institutions to enable the commercialisation of innovations, low investment levels because of a gazillion tax changes and low saving. But every essay circles back to politics: the politics of ‘announceables’, of over-centralisation, of silos, and so on.

Anyway, Why Politics Fails is an excellent introduction that does what it says in the title: it analyses political failures through the lens of five traps or, more accurately, trade-offs. The first chapter is about the tension in democracy between honouring majority preferences and protecting minorities. The equality trap is the trade-off between equal rights and equal outcomes. The third chapter is about solidarity, manifested only in situations of individual need. The security trap is the balance between (too much) anarchy and (too much) order. And the prosperity trap concerns short-run economic (and electoral) gains looking more attractive than the long-run decision-making that will enable prosperity over time.

The book has lots of examples, contemporary and historical. It would make an excellent additional read for students, as well as being accessible to the general readership (and Reith Lectures audience). I liked its emphasis on the delicate role played by a country’s institutions – for example, the book suggests an argument against UBI is its institutional fragility, compared with making improvements to existing, well-established welfare states. And the prosperity chapter rightly points to the importance of institutions as bulwarks against short-termism. It occupied me for a transatlantic flight and I enjoyed reading it.


Numbers, objectivity and meaning

I’ve carved out as many empty days as possible this summer to make significant headway with my next book, and as well as writing I’ve been re-reading some golden oldies. One is Theodore Porter’s classic Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. For those who haven’t read it, it’s a historical exploration of the pursuit of quantification in economic domains (accounting, cost-benefit analysis) as an expression of objectivity. A core argument is that the assertion of quantified objectivity is a signal of a group’s lack of power rather than the opposite; powerful groups or people expect to have their judgment trusted.

This is counter-intuitive if one has read so often that the deployment of numbers is the way social and political power is exerted by economists and others. But the case Porter makes is persuasive, certainly as far as the historical origins of quantification go. He also acknowledges that objectivity has become a desired characteristic of societies governed by the rule of law: “A decision made by the numbers … has at least the appearance of being fair and impersonal. Scientific objectivity thus provides an answer to a moral demand. … Quantification is a way of making decisions without seeming to decide.” But he adds: “Objectivity lends authority to officials who have very little of their own.” So numbers have the dual purpose of signalling impartiality and thereby giving authority to the number-producers: “The reputation of accounts and statistics for grayness helps to maintain their authority.”

My book will be anything but gray. It is looking at how economic statistics are constructed and how inadequate they have become as metrics of social progress (or its absence) given the technology-driven changes in the structure of the economy as well as the imperatives of making the environment count. These changes have been under way at least since my first book The Weightless World was published 26 years ago, but the social process of constructing the statistics is a slow one, carried out within the expert community of national statisticians. Thinking about how to replace what we have now – given the issues I highlighted in GDP – involves some deeply conceptual and philosophical questions.


Private government?

I’d previously read about Elizabeth Anderson’s Private Government, but hadn’t actually read it until this weekend. The book consists of her two 2014 Tanner Lectures and the four responses, so it is quite old. The lectures draw an analogy between public government – “the people free under the state” – and the private government workers experience when their bosses boss them in unaccountable ways. In other words, the state’s exercise of power in a democracy is justified whereas an employer’s exercise of power is not. Along the way, the lectures trace the evolution of the idea of the free market as a means of exercising freedom (in the 17th century with the Levellers and the 18th with Adam Smith) to the 21st-century ideology of ‘free markets’ as essentially a means of exercising corporate power.

As respondent Niko Kolodny asks, though, what’s wrong with being governed, even at work? And Tyler Cowen argues that the costs of exiting a job are relatively low – Anderson’s riposte, that calling leaving a job a path to freedom is like saying Italians under Mussolini were free because they could leave the country (until they couldn’t, of course), is surely hyperbole. There are without question abusive employers of marginalised workers, and it behoves those of us with good jobs to appreciate this. But an argument about employer abuses is an argument about the need for the state (public government) to do a better job with legal protections and their enforcement. For instance, governments (and the legal profession) are finally bearing down on the extensive use of NDAs; good. It is harder than it was even ten years ago to fire an employee over their sexual preferences. And people can be fired for expressing views on social media that are illegal, or simply vile and damaging to their employer’s reputation; also good.

Anderson – whose Value in Ethics and Economics is a terrific book* – doesn’t bring into the argument two issues that seem relevant. One is Hirschman’s triptych of exit, voice and loyalty, which is a useful way of thinking about power in economic relationships and could have shed light on this context. The other is Elinor Ostrom,** whose model of private governance by definition takes a form that is consensual rather than arbitrary and abusive – it would have been interesting to see her design principles discussed in the context of the worker-employer relationship. The master key to governance design seems to be information asymmetries and the possibility of monitoring – I think this is why, in the context of modern digital technologies, we see on the one hand increased surveillance of workers in some jobs and firms, and on the other hand increased autonomy in decision-making for workers in different jobs and firms. The latter are high-trust and more productive organisations.

So I have every sympathy with Anderson’s criticism of bad workplace relationships, and the value of worker autonomy. But the lectures aren’t all that persuasive.

*I have an old copy – not sure why it’s so expensive even 2nd hand now.

**Also weirdly priced at £226.84 for the paperback on Amazon today – maybe the algorithm doesn’t like the heat?
