Keeping models in their place

The increasing use of algorithmic decision-making raises some challenging questions, from bias due to societal biases being baked into training data, to the loss of the space for compromise (because a machine learning system must codify a loss or reward function) that is so important in addressing conflicting aims and values in democratic societies. The broader role of models as a means of both understanding and shaping societies is one of the themes of my most recent book, Cogs and Monsters, in the domain of economics. In particular, I wanted to expose the reflexivity involved in economic modelling: being a member of a society, analysing that society in order to try to change it – when its other members may well alter (in often-unanticipated ways) the behaviour that was analysed, because they are subjects, not objects.

Well, all of this is the subject of Erica Thompson’s excellent book Escape from Model Land: how mathematical models lead us astray and what we can do about it. It focuses on the use of algorithmic models, and has a wide range of examples, from health and epidemiological modelling to climate projections to financial markets. As well as reflexivity, it covers some familiar challenges such as performativity, non-linear dynamics, complex systems, structural breaks and risk vs ‘radical’ uncertainty. The ultimate conclusion is the need to be duly humble about what models can achieve. Alas, people – ‘experts’ – all too often seem to get caught up in the excitement about technical possibilities without the thoughtfulness needed to make decisions that will affect people’s lives in important ways.

So this is a much-needed and welcome book, and I’m looking forward to taking part in an event with the author at the LSE in January.


Our robot overlords?

I’ve chatted to Martin Ford about his new book Rule of the Robots for a Bristol Festival of Ideas event – the recording will be out on 6 October.

It’s a good read and quite a balanced perspective on both the benefits and costs of increasingly widespread use of AI, so a useful intro to the debates for anyone who wants an entry into the subject. There are lots of examples of applications with huge promise such as drug discovery. The book also looks at potential job losses from automation and issues such as data bias.

It doesn’t much address policy questions, with the exception of arguing in favour of UBI. Regular readers of this blog will know I’m not a fan, as UBI seems like the ultimate Silicon Valley, individualist solution to a Silicon Valley problem. I’d advocate policies to tilt the direction of automation, as there’s a pecuniary externality: individual firms don’t factor in the aggregate demand effects of their own cost-reduction investments. And also policies that address collective needs – public services, public transport, as well as a fair and sufficiently generous benefits system. No UBI in practice would ever be set high enough to address poverty and the lack of good jobs: if you wanted to pay everyone anything like average income, you’d have to collect taxes amounting to more than average income.
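The arithmetic behind that last claim is simple enough to sketch (with illustrative numbers of my own, not figures from the book): paying N people a UBI of b each costs N×b, while total income is N×m for mean income m, so the tax share needed just to fund the UBI is b/m – before funding anything else the state does.

```python
def ubi_tax_share(ubi_per_person: float, mean_income: float) -> float:
    """Share of total income needed just to fund the UBI.

    With N people, a UBI of b each costs N*b; total income is N*m;
    the required tax share is (N*b)/(N*m) = b/m, regardless of N.
    """
    return ubi_per_person / mean_income

# Illustrative: with mean income of 30,000, a UBI of 15,000 per person
# already absorbs half of all income in tax.
print(ubi_tax_share(15_000, 30_000))  # 0.5
# A UBI at mean income requires tax revenue equal to total income,
# before any other public spending.
print(ubi_tax_share(30_000, 30_000))  # 1.0
```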

But that debate is what the Bristol event is all about!


Are humans or computers more reasonable?

This essay, The Long History of Algorithmic Fairness, sent me to some of its references that were new to me, among them How Reason Almost Lost its Mind by Paul Erickson and five other authors. The book is the collective output of a six-week stint in 2010 at the Max Planck Institute for the History of Science in Berlin. That alone endeared it to me – just imagine being able to spend six weeks Abroad. And in Berlin, which was indeed my last trip Abroad in the brief period in September 2020 when travel was possible again. I started the book with some trepidation as collectives of academics aren’t known for crisp writing, but it’s actually very well written. I suspect this is a positive side-effect of interdisciplinarity: the way to learn each other’s disciplinary language is to be as clear as possible.

The book is very interesting, tracing the status of ‘rationality’ in the sense of logical or algorithmic reasoning, from the low status of human ‘computers’ (generally poorly-paid women) in the early part of the 20th century, to the high status of Cold War experts devising game theory and building ‘computers’, to the contestation about the meaning of rationality in more recent times: is it logical calculation, or is it what Herbert Simon called ‘procedural rationality’? This is a debate most recently manifested in the debate between the Kahneman/Tversky representation of human decision-making as ‘biased’ (as compared with the logical ideal) and the Gerd Gigerenzer argument that heuristics are a rational use of constrained mental resources.

How Reason… concludes, “The contemporary equivalents of Life and Business Week no longer feature admiring portraits of ‘action intellectuals’ or ‘Pentagon planners’, although these types are alive and well.” The arc of status is bending down again, although arguably it’s machine learning and AI – ur-rational calculators – rather than other types of humans gaining the top dog slot nowadays. As I’ve written in the economic methodology context, it’s odd that computers and also creatures from rats to pigeons to fungi are seen as rational calculators whereas humans are irrational.

Anyway, the book is mainly about the Cold War and how the technocrats reasoned about the existentially lethal game in which they were participants, and has lots of fascinating detail (and photos) about the period. From Schelling and Simon to the influence of operations research (my first micro textbook was Will Baumol’s Economic Theory and Operations Analysis) and shadow prices in economic allocation, the impact on economics was immense. (Philip Mirowski’s Machine Dreams covers some of that territory too, although I found it rather tendentious when I read it a while ago.) I’m interested in thinking about the implications of the use of AI for policy and in policy, and as it embeds a specific kind of calculating reason, thought How Reason Almost Lost its Mind was a very useful read.


Reading about AI

There have been quite a few general books about AI published recently. I’ve read a few and wrote about them here. The post also lists a number of books on the subject recommended by people on Twitter. Subsequently I also really enjoyed Janelle Shane’s hilarious You Look Like A Thing and I Love You – I literally cried with laughter. And actually there are many others that are less about AI itself and more about its consequences; Cathy O’Neil’s Weapons of Math Destruction, for instance, on AI bias.

This week I added to the roster The Road to Conscious Machines: The Story of AI by Michael Wooldridge, another very good introduction. It is for the most part a chronological account of how AI developed to where it is today, with clear explanations of the successive approaches (with some appendices setting out more detail). The final chapters turn to possible risks and consequences. Wooldridge is impatient with the obsession with ethics and the trolley problem, but alert to challenges like bias, the effect of automation on jobs, and the safety of autonomous vehicles. He’s rather positive about the potential for AI alongside humans to make great achievements in some areas such as healthcare.

The Road to Conscious Machines is probably closest to Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. Having read so many of these books now, quite a lot of it was pretty familiar. But I still enjoyed reading it. This is a clear and accessible history of AI, one that performs the useful service of debunking both the hype and the hysteria. If you’re only going to read one book about AI, this would be a good one to choose. But whichever you go for, you should read at least one book about AI: this is an important technology and it will still be there in the post-pandemic world.


Automation, the future of work and giraffes

Daniel Susskind’s A World Without Work: Technology, Automation and How We Should Respond is a very nice overview of the issues related to technological unemployment – will it happen, how will it affect people, and what policy responses might make sense. As the book notes, it is impossible to predict the number/proportion of jobs that might be affected, or how quickly, with detailed studies coming up with numbers ranging from about a tenth to about a half. But that there will be disruption, and that past policies have not dealt well with the consequences, is far less uncertain. Even if you believe that the economy will in time adjust the types and amount of work available – and so in that sense this time is *not* different from the past – the transition could be painful.

The book has three sections. The first looks at the history of technological unemployment and why we might expect AI to lead to a new wave. The second sets out the task-based analysis introduced by David Autor and others to sketch how the character of people’s work can change significantly. While dismissing the lump of labour fallacy, it argues that one of the main symptoms will be increased inequality. It predicts, gloomily, that this will get worse and that some people will be left with no capital and redundant human capital, “leaving them with nothing at all.” I’m not sure that will be politically viable, judging from current events, but the logic is straightforward.

The final section turns to potential policy responses: improved education – heaven knows, we need that; ‘Big State’ – “a new institution to take the labour market’s place” – in effect more tax and a UBI; and tackling Big Tech through competition policy – yep, I’m definitely up for that. Finally, Susskind argues that part of the role for the Big State is to ensure we all have meaning in our working lives, replacing the job as the source of people’s identity, though I wasn’t sure how this should happen.

It’s a clearly-written book, covering concisely ground that will be familiar to economists working on this territory, and providing a useful overview for those not familiar with the debate. Although I’m not a fan of UBI, the other policy prescriptions seem perfectly sensible – perhaps too sensible to be inspiring.

I must say that my other recent read has made me even more sceptical about the scope for AI to take over from humans. Recently I noted there has been a wave of terrific books on AI. Add to the list Janelle Shane’s You Look Like A Thing and I Love You. You’d be absolutely mad not to read this book. It had me in hysterics, while making it super-clear what’s hype and what’s realistic about current and near-future AI. And explaining why image recognition AI is so prone to seeing giraffes – many giraffes – where there are none. An absolute must-read.
