The welcome application of good sense to AI hype

Summer over in a flash, autumn wind and rain outside – perhaps cosy evenings will speed up both my reading and review-posting.

I just finished AI Snake Oil by Arvind Narayanan and Sayash Kapoor, having long been a fan of the blog of the same name. The book is a really useful guide through the current hype. It distinguishes three kinds of AI: generative, predictive and content moderation AI – an interesting categorisation.

On the generative AI so much in the air since ChatGPT was launched in late 2022, the persuasive debunking here is of the idea that we are anywhere close to ‘general’ machine intelligence, and of the notion that such models pose existential risks. The authors are far more concerned with the risks associated with the use of predictive AI in decision-making. These chapters provide an overview of the dangers: from data bias, to model fragility and overfitting, to the broad observation that social phenomena are fundamentally more complicated than any model can predict. As Professor Kevin Fong said in his evidence to the Covid inquiry last week, “There is more to know than you can count.” An important message in these times of excessive belief in the power of data.

The section on the challenges of content moderation was particularly interesting to me, as I’ve not thought much about it before. The book argues that content moderation AI is no silver bullet for the harms related to social media – in the authors’ view it is impossible to remove human judgement about context and appropriateness. They would like social media companies to spend far more on humans and on setting up redress mechanisms for when automated moderation makes the wrong call: people currently have no recourse. They also point out that social media is set up with an internal AI conflict: content moderation algorithms are moderating the very content the platform’s recommendation algorithms are recommending. The latter have the upper hand because recommendation doesn’t involve delicate judgements about content, only tracking the behaviour of platform users to amplify popular posts.

There have been a lot of new books about AI this year, and I’ve read many good ones. AI Snake Oil joins the stack: it’s well-informed, clear and persuasive – its cool breeze of knowledge and good sense is a good antidote for anybody inclined to believe the hyped claims and fears.

Humans and machines

My colleague Neil Lawrence’s new book, The Atomic Human: Understanding Ourselves in the Age of AI, is a terrific account of why ‘artificial intelligence’ is fundamentally different from embodied human intelligence – which makes it on the one hand an optimistic perspective, but on the other leads him to end with an alarming warning: that the potential of pervasive machine intelligence “could be as damaging to our cultural ecosystem as our actions have been to the natural ecosystem.” The influence of AI on human society could parallel our adverse influence on the environment, however good the intentions. Just as nature moves at the pace of evolutionary time while humans act far faster, so that the human-nature interface has failed to take account of the damage we cause, the computer-human interface is characterised by a mismatch in information-processing speeds – machines process information vastly faster than we do.

The book does not offer a handy list of actions to prevent the damage AI might do to us, but ends by warning about two things: the immense concentration of power in AI’s development and use; and the use of automated decision-making in contexts where judgement is essential – which means many contexts, since uncertainty so often enters the picture. I rather fear those horses have bolted, though.

Most of the book is a fascinating account of both types of intelligence, AI and human cognition, using information theory as well as cognitive science to explain the profound differences. As he notes, “Shannon defined information as being separated from its context,” but humans need contextual understanding to communicate. Neil uses stories to provide context, to make what could be rather dry material more engaging, braiding the same examples (many from wartime: Bletchley Park, his grandfather’s D-Day experience alongside General Patton’s, the development of radar, missile testing…) through the text. Sometimes I found these confusing, but I have a very literal mind.
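
A minimal formal sketch of that point, in standard textbook notation (my gloss, not the book’s): Shannon’s entropy of a source $X$ depends only on the probabilities of its symbols,

$$H(X) = -\sum_i p(x_i)\,\log_2 p(x_i),$$

so two messages with identical symbol statistics carry the same ‘information’ whatever they mean – exactly the separation from context that human communication cannot do without.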

There have been lots of books about AI out this year, and I’ve generally enjoyed the ones I’ve read – although whatever you do, avoid Ray Kurzweil’s. I’d recommend adding this one to the to-read list, as it offers a fresh perspective on AI from a super-expert and super-thoughtful practitioner.

Escape velocity?

I’ve read Ray Kurzweil’s jaw-dropping book, The Singularity is Nearer: When We Merge With AI, so you don’t have to. He does literally believe we will be injected with nanobots to create an AI super-cortex above our own neocortex, plugged into the cloud and therefore into all of humanity’s accumulated intelligence, and that we will thus become super-intelligent with capabilities we can hardly imagine. Among the other possibilities he foresees are AI ‘replicants’ (yes, he calls them that) created from the images and texts of deceased loved ones to restore them to artificial life; the main challenge he anticipates is their exact legal status. The book has a lot of capsule summaries about consciousness, intelligence and how AI works – and also about the general ways in which life is getting better, there will be more jobs, and our health and lifespans will improve by leaps and bounds.

Might he be wrong about reaching ‘longevity escape velocity’ and the AI singularity by 2030? There’s a hint of this when he says that book production is so slow that what he wrote in 2023 will already have been overtaken by events by mid-2024, when we are reading it: “AI will likely be much more woven tightly into your daily life.” Hmm. Not sure about that prognostication. Although one of the scariest things about the book is the advance praise from Bill Gates, who writes that the author is “The best person I know at predicting the future of artificial intelligence.” Do all the Tech Types believe this?

One suspects they believe they’re already more super-intelligent than the rest of us, so what could possibly go wrong?

AIs as the best of us

Another book of many out on AI is As If Human: Ethics and Artificial Intelligence by Nigel Shadbolt and Roger Hampson. I found this a very accessible book on AI ethics, possibly because neither author is an academic philosopher (sorry, philosophers…). Generally I’m a bit impatient with AI ethics, partly because it has dominated debate about AI at the expense of thinking about incentives and politics, and partly because of my low tolerance for the kind of bizarre thought experiments that seem to characterise the subject. Nevertheless, I found this book clear and pretty persuasive, with the damn trolley problem appearing only a handful of times.

The key point is reflected in the title: “AIs should be judged morally as if they were humans” (although of course they are not). This implies that any decisions made by machines affecting humans should be transparent, accountable and open to appeal and redress; we should treat AI systems as if they were humans taking the decisions. There may be contested accountability beyond that among other groups of humans – the car manufacturer and the insurer, say – but ‘the system’ can’t get away with no moral accountability. (Which touches on a fantastic new book, The Unaccountability Machine by Dan Davies, that I will review here soon.)

Shadbolt and Hampson end with a set of seven principles for AIs, including ‘A thing should say what it is and be what it says’ and ‘Artificial intelligences are only ethical if they embody the best human values’. They also argue that private mega-corps should not be determining the future of humanity. As they say, “Legal pushback by individuals and consumer groups against the large digital corporations is surely coming.”

As If Human is well worth a read. It’s short and high-level but does have examples (and includes a point that seems obvious but that I have too rarely seen made: the insurance industry is steadily putting itself out of business by progressively reducing risk pooling through data use).

AI and us

Code Dependent: Living in the Shadow of AI by Madhumita Murgia is a gripping read. She’s the FT’s AI Editor, so the book is well-written and benefits from her reporting experience at the FT and previously Wired. It is a book of reportage, collating tales of people’s bad experiences, either as part of the low-paid workforce in low-income countries tagging images or moderating content, or on the receiving end of algorithmic decision-making. The common thread is the destruction of human agency and the utter absence of accountability or scope for redress when AI systems are created and deployed.

The analytical framework is the idea of data colonialism: the extraction of information from individuals for use in ways that never benefit them. The book is not entirely negative about AI and sees the possibilities. One example is the use of AI on a large sample of knee X-rays to look for osteoarthritis. The puzzle being tackled by the researcher concerned was that African American patients consistently reported greater pain than patients of European extraction whose X-rays looked exactly the same to the human radiologists. The explanation turned out to be that the X-rays were scored against a scale developed in mid-20th-century Manchester on white, male patients. When the researcher, Ziad Obermeyer, fed a database of X-ray images to an AI algorithm, his model proved a much better predictor of pain. Humans wear blinkers created by the measurement frameworks we have already constructed, whereas AI is (or can be) a blank slate.

However, this is one of the optimistic examples in the book, where AI can potentially offer a positive outcome for humans. It is outnumbered by the counter-examples: Uber drivers shortchanged by the algorithm or falsely flagged for some misdemeanour with no possibility of redress; women haunted by deepfake pornography; Kenyan workers traumatised by the images they must assess for content moderation yet unable even to speak about it because of the NDAs they have to sign; data collected from powerless and poor people to train medical apps they will never be able to afford.

The book brought to life for me an abstract idea I’ve been thinking about pursuing for a while: the need to find business models and financing modes that will enable the technology to benefit everyone. The technological possibilities are there, but the only prevailing models are exploitative. Who is going to work out how to deploy AI for the common good? How can the use of AI models be made accountable? Because it isn’t just a matter of ‘computer says No’, but rather ‘computer doesn’t acknowledge your existence’. And behind the computers stand the rich and powerful of the tech world.

There are lots of new books about AI out or about to be published, including AI Needs You by my colleague Verity Harding (I’ll post about it separately). I strongly recommend both of these; and I’d also observe that it’s women at the forefront of pushing for AI to serve everyone.
