Robots among us

I ended up with mixed reactions to Waiting for Robots: The Hired Hands of Automation by Antonio Casilli.

The powerful point it makes is the complete dependence of AI and digital technologies generally on ongoing human input. Many years ago, my husband – then a technology reporter for the BBC – was digging out the facts about a hyped dot com company called Spinvox. Its business was said to be automated voice transcription, but it turned out the work was mainly done by humans, not computers (although the story turned scratchy – the linked post responds to the company’s points). Waiting for Robots gives many examples of apps that similarly involve cheap human labour rather than digital magic – I was surprised by this. Less surprising – and indeed covered in other books such as Madhumita Murgia’s recent Code Dependent – is the use of humans in content moderation (remember when big social media companies used to do that?), data labelling and other services, from Mechanical Turk to reinforcement learning with human feedback for LLMs.

The book also claims much more as ‘labour’ and this is where I disagree. Of course big tech benefits from my digital exhaust and from content I post online such as cute dog photos. But this seems to me categorically different from often (badly) paid employment relationships. Although the stickiness of network effects or habit might keep me on a certain service, although the companies might set the defaults so they hoover up my activity data, the power dynamics are different. I can switch, for instance from X to BlueSky, or from Amazon to my local bookstore. So I’m not a fan of portraying these types of data-provision as more ‘digital labour’.

Having said that, the book makes a compelling case that robots and humans are interdependent and will remain so. Generative AI will continue to need human-produced material (‘data’) and intervention to avert model collapse. Humans are also going to have to pay for digital services so will need to have money to pay with. Focusing on the economic dynamics involved is crucial, as it is clear that the market/platform/ecosystem structures are currently tilted towards the (owners of) robots and away from humans. So, for all that I’m not persuaded by the classification of different types of ‘digital labour’ here (and find the anti-capitalist perspective on tackling the challenges unpragmatic apart from anything else), there is a lot of food for thought in Waiting for Robots.


The tech coup

It’s some months since I read Marietje Schaake’s The Tech Coup, as she delivered the ST Lee Policy Lecture here in Cambridge last November 11th, right after the US presidential election. Just a short time later, her warning looks even more prescient than it did on the day, as American tech executives bend the knee at the court of Mar-a-Lago.

Most of the book is a descriptive analysis of how the US tech companies have come to occupy such a central role in daily life and in the politics of the west, often under the cover of “innovation” and their role in delivering economic growth. The chapters pick up on specific concerns, such as facial recognition being used by police forces as well as authoritarian regimes, misinformation on social media, cyber insecurity due to corporate practices, and the loss of sovereignty by states other than the US. The thread running through all these is the vanishing concern for the public interest in the development and deployment of digital technology. While the issues are sadly familiar, Schaake brings the unique perspective of someone who was an MEP with responsibilities for the digital sector and is now a Stanford University academic, in the heart of Silicon Valley.

The conclusion is titled “Stop the tech coup, save democracy.” She writes, “The tech coup shifting power from public and democratic institutions to companies must stop,” but continues, “Invisibly or indirectly, a whole host of technologies is privatizing responsibilities that used to be the monopoly of the state.” As I write this post, the headlines today feature Mr Musk getting an office in the White House later this month, the European Commission ‘pausing’ its anti-trust actions against the big US tech firms under the EU DMA to consider the political ramifications, and the UK government, on the advice of a tech investor, going gung-ho on getting AI used throughout the public sector asap. Interestingly, yesterday I took part in a webinar at ICRIER, the Delhi-based think tank, where there was much emphasis on the direct role of the state in running digital public infrastructure. Public options must surely be a part of, not stopping, but turning back, the coup – or, if you prefer a less dramatic turn of phrase, putting the public interest back at the centre of innovation in this amazing technology.


Engineers and their problems

I bought Wicked Problems: How to engineer a better world by Guru Madhavan because of a column by the author in the FT, The Truth About Maximising Efficiency: it argues that governments, like engineered artefacts and indeed our bodies, need some redundancy and safety margins. How true!

I enjoyed reading the book, but in terms of analysis I didn’t get much out of it beyond the FT column. It advocates a systems engineering approach even to ‘hard’ problems, i.e. well-definable ones. It classifies problems into hard (solvable), soft (only resolvable) and messy (needing redefinition), and takes wicked problems to be the union of these categories. It was interesting to me to read a critique of engineering similar to the one I apply to economics, namely that engineers too often ignore the normative or political context for their solutions. The book sort of makes the case that engineering is social, but not in a particularly clear way.

Having said that, the book has lots of examples of messiness and wickedness from the world of engineering, particularly aircraft training and engineering. It focuses on the career of Ed Link – whom I had never heard of – who went from making player pianos to inventing the first on-the-ground flight training simulator to inventing and building submersible vessels. The book is full of the kind of fact that pleases me no end – for example, that black boxes are orange and were created by Lockheed Air Services along with the food company General Mills and a waste disposal company, Waste King. Also – tragically relevant – that engines are tested for resilience against bird strikes by lobbing chickens at them – real birds rather than imitation ones, and freshly killed rather than frozen and defrosted. A cited paper by John Downer, When the Chick Hits the Fan, observes that birds have adapted to devices meant to scare them away, so there is a sort of arms race between engineers and birds. (The paper is fascinating – there is an expert debate about how many birds, of what size, lobbed in how fast, constitute an adequate test. The resulting pulp is known as ‘snarge’.)

Most of the examples of wicked problems in the book involve engineering rather than social problems. On the one hand, that’s an issue because we tend to think of wicked problems as social and political – paying for the growing need for adult social care, for example. On the other hand, the one main example of that type, reducing homelessness among veterans in the US, discusses how to get the different agencies and stakeholders to talk to each other and respect their differences, but doesn’t in the end describe a solution. Perhaps the moral one is meant to take is that wicked problems don’t have a solution?

All in all, an enjoyable read, and I for one am on board with systems engineering approaches, resilience and organisational flexibility.


The narrow path from votes of despair

I read Sam Freedman’s Failed State: Why Nothing Works and How We Fix It with a mixture of nods of recognition and gasps of disbelief. It’s all too apparent that – as the subtitle puts it – nothing works in aspects of life in the UK dependent in some way on the successful design and implementation of government policy (which is most aspects, tbh). Those of us who have engaged with the policy world in some way will have our own experiences; the relevant chapters of the book reflect my own very accurately. What makes this an incredibly sobering book (although well-written, with humour) is the accumulation of evidence across all eight chapters, covering everything from parliament and the excessive growth of executive power, the House of Lords, political parties and the character of MPs, the judiciary, the criminal justice system, the civil service, local government, non-departmental public bodies and the media. A relentless accumulation of depressing dysfunction.

I’m very much on board with the book’s main recommended fix, substantial devolution of power from central to sub-national governments – this is a journey I’ve been advocating since first getting involved with Greater Manchester’s case for greater powers from 2008 on. But this is not a simple matter. Many people point to the hollowing out of capacity in local government – true to some extent, but one can see how to tackle that. Harder are the questions of accountability that devolution raises. But – as the book argues – it’s hard to see any other plausible change that would shift the dial on interconnected institutional reforms. And it is equally hard to see how nothing can change: “Public trust in politicians and politics – never high – has crashed through the floor.” The governance travails of the Labour Government in the few weeks since the election suggest the time for something to change can’t be far away – surely? There has at least been a slowly growing consensus about the need for decentralisation from Whitehall and Westminster, as a potentially feasible path for reducing the powers of an over-dominant executive branch that can’t deliver and can’t cope.

This book joins other incisive critiques and reviews of how the UK is governed – my colleague Mike Kenny led a major inquiry into the constitution, the Institute for Government documents failures across the board, and others such as Martin Stanley are excellent on specific aspects (the civil service and regulatory state in his case). It’s cold comfort that other countries are experiencing similar failures against a background of slow growth and hyper-fast social media, and colder still that in so many of them extremist parties are capturing the votes of despair. It’s a narrow path from today’s failures to a less disturbing outcome. The book ends posing a question to those in central government with the power and opportunity to start the process of change: if the UK goes down the path of crisis and reaction, “Politicians will find themselves asking: why didn’t we do things differently when we had the chance?”


The welcome application of good sense to AI hype

Summer over in a flash, autumn wind and rain outside – perhaps cosy evenings will speed up both my reading and review-posting.

I just finished AI Snake Oil by Arvind Narayanan and Sayash Kapoor, having long been a fan of the blog of the same name. The book is a really useful guide through the current hype. It distinguishes three kinds of AI – generative, predictive and content moderation AI – an interesting categorisation.

On generative AI, so much in the air since ChatGPT was launched in late 2022, the persuasive debunking here is of the idea that we are anywhere close to ‘general’ machine intelligence, and of the notion that such models pose existential risks. The authors are far more concerned with the risks associated with the use of predictive AI in decision-making. These chapters provide an overview of the dangers: from data bias to model fragility or overfitting to the broad observation that social phenomena are fundamentally more complicated than any model can predict. As Professor Kevin Fong said in his evidence to the Covid inquiry last week, “There is more to know than you can count.” An important message in these times of excessive belief in the power of data.

The section on the challenges of content moderation was particularly interesting to me, as I’ve not thought much about it. The book argues that content moderation AI is no silver bullet for tackling the harms related to social media – in the authors’ view it is impossible to remove human judgement about context and appropriateness. They would like social media companies to spend far more on humans and on setting up redress mechanisms for when the automated moderation makes the wrong call: people currently have no recourse. They also point out that social media is set up with an internal AI conflict: content moderation algorithms are moderating the content the platforms’ recommendation algorithms are recommending. The latter have the upper hand because recommendation involves no delicate judgements about content, only tracking the behaviour of platform users to amplify popular posts.

There have been a lot of new books about AI this year, and I’ve read many good ones. AI Snake Oil joins the stack: it’s well-informed, clear and persuasive – the cool breeze of knowledge and good sense is a good antidote for anybody inclined to believe the hyped claims and fears.
