Calculating the economy

One of the books I’ve read on this trip to the AEA/ASSA meetings in San Diego is The People’s Republic of Walmart by Leigh Phillips and Michal Rozworski. This is a very entertaining projection of the socialist calculation debate onto modern capitalism.

The starting point is the Simon/Coase realisation that big firms are internally planned economies – if it works for Walmart, why wouldn’t it work at larger scale? The authors’ hypothesis is that economic planning might work better now that we have so much more powerful computers and better data.
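The nub of the calculation debate is whether an optimal allocation can actually be computed. As a minimal sketch of what ‘planning as optimisation’ means in practice – two goods, two resources, and numbers invented purely for illustration – here is the kind of linear programme a planner would solve, scaled up by many orders of magnitude:

```python
# Toy 'planning as optimisation': choose outputs of two goods to maximise the
# planners' valuation, subject to labour and steel limits. All numbers are
# invented for illustration.
from scipy.optimize import linprog

c = [-3, -5]            # planner values: 3 per unit of A, 5 per unit of B
                        # (negated because linprog minimises)
A_ub = [[2, 4],         # labour hours used per unit of A and B
        [3, 1]]         # steel tonnes used per unit of A and B
b_ub = [100, 60]        # available labour hours and steel tonnes

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)            # optimal output mix: 14 units of A, 18 of B
print(-res.fun)         # value of the plan: 132
```

The debate, of course, is about whether this scales: a real economy has millions of goods and constraints, and – as Hayek argued – the coefficients themselves are dispersed, tacit and constantly changing, which is exactly where the authors think modern data collection changes the calculus.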

I’d recommend the book as an introduction to the socialist calculation debate for those unfamiliar with it (ideal for students). It cites some of my favourite books including Francis Spufford’s Red Plenty and Eden Medina’s Cybernetic Revolutionaries. Some chilling lines – about Stalin’s purges, for instance: “Anyone with any expertise was placed under suspicion.” It’s a great read.

Am I persuaded? Not entirely. Technology clearly will change organisational configurations, but it has driven the decentralisation of firms and extended supply chains just as much as it has created giant Walmart-type firms. I’m also sceptical that the data available is actually the information needed to plan an economy, or that it’s easy to access and join up. Still, it’s the right question, and a reminder that the boundary between market, state and other forms of organisation is not set in stone but needs constant negotiation – in fact, I know a great book about this about to be published: Markets, State and People.

As seen at ASSA2020 in San Diego


Configuring the lumpy economy

Somebody at the University of Chicago Press has noticed how my mind works, and sent me Slices and Lumps: Division + Aggregation by Lee Anne Fennell. It’s about the implications of the reality that economic resources are, well, lumpy and variably slice-able. The book starts with the concept of configuration: how to divide up goods that exist in lumps to satisfy various claims on them, and how to put together ones that are separate to satisfy needs and demands. The interaction between law (especially property rights) and economics is obvious – the author is a law professor. So is the immediate implication that marginal analysis is not always useful.

This framing in terms of configuration allows the book to range widely over various economic problems. About two thirds of it consists of chapters looking at the issues of configuration in specific contexts such as financial decisions, urban planning and housing decisions. The latter, for example, encompasses some physical lumpiness or indivisibilities and some legal or regulatory ones. Airbnb – where allowed – enables transactions over the excess capacity due to lumpiness, as home owners can sell temporary use rights.

The book is topped and tailed by some general reflections on lumping and slicing. The challenges are symmetric. The commons is a tragedy because too many people can access resources (slicing is too easy), whereas the anti-commons is a tragedy because too many people can block the use of resources. Examples of the latter include the redevelopment of a brownfield site where there are too many owners who must all agree to sell their land, and patent thickets. Property rights can be both too fragmented and not fragmented enough. There are many examples of the way policy can shape choice sets by making them more or less chunky – changing tick sizes in financial markets, but also unbundling albums so people can stream individual songs. Fennell writes, “At the very least, the significance of resource segmentation and choice construction should be taken into account in thinking innovatively about how to address externalities.” Similarly, when it comes to personal choices, we can shape those by altering the units of choice – some are more or less binary (failing a test by a tiny margin is as bad as failing by a larger one), others involve smaller steps (writing a few paragraphs of a paper).
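The anti-commons logic is stark even in a toy calculation (my illustration, not the book’s): if assembling a site needs unanimous consent from n independent owners, each agreeing with probability p, the chance of assembly is p^n, which collapses as ownership fragments:

```python
# Toy anti-commons arithmetic (my illustration, not the book's): assembly
# requires unanimous consent, so P(assembly) = p ** n for n independent owners.
p = 0.9  # each owner is individually 90% likely to agree to sell
for n in (1, 5, 10, 20, 50):
    print(f"{n:>2} owners: P(all agree) = {p ** n:.3f}")
```

With fifty quite agreeable owners the probability of assembly is under one per cent – the brownfield and patent-thicket problem in miniature.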

Woven through the book, too, are examples of how digital technology is changing the size of lumps or making slicing more feasible – from Airbnb to Crowd Cow, “An intermediary that enables people to buy much smaller shares of a particular farm’s bovines (and to select desired cuts of meat as well),” whereas few of us can fit a quarter of a cow in the freezer. Fennell suggests renaming the ‘sharing economy’ as the ‘slicing economy’. Technology is enabling both finer physical and finer time slicing.

All in all, a very intriguing book.

Slices and Lumps


The gorilla problem

I was so keen to read Stuart Russell’s new book on AI, Human Compatible: AI and the Problem of Control, that I ordered it three times over. I’m not disappointed (although I returned two copies). It’s a very interesting and thoughtful book, and has some important implications for welfare economics – an area of the discipline in great need of being revisited, after years – decades – of a general lack of interest.

The book’s theme is how to engineer AI that can be guaranteed to serve human interests, rather than taking control and serving the specific interests programmed into its objective functions and rewards. The control problem is much-debated in the AI literature, in various forms. AI systems aim to achieve a specified objective given what they perceive from data inputs, including through sensors. How to control them is becoming an urgent challenge – as the book points out, by 2008 there were more objects than people connected to the internet, giving AI systems ever more extensive access, input and output, to the real world. Part of the potential of AI is machines’ scope to communicate: n machines can do better than n humans because their information isn’t split across n separate brains and communicated imperfectly between them. Humans have to spend a ton of time in meetings; machines don’t.

Russell argues that the AI community has been too slow to face up to the probability that machines as currently designed will gain control over humans – keep us at best as pets and at worst create a hostile environment for us, driving us slowly extinct, as we have done to gorillas (hence the gorilla problem). Some of the solutions proposed by those recognising the problem have been bizarre, such as a ‘neural lace’ that permanently connects the human cortex to machines. As the book comments: “If humans need brain surgery merely to survive the threat posed by their own technology, perhaps we’ve made a mistake somewhere along the line.”

He proposes instead three principles to be adopted by the AI community:

  • the machine’s only objective is to maximise the realisation of human preferences
  • the machine is initially uncertain about what those preferences are
  • the ultimate source of information about human preferences is human behaviour.

He notes that AI systems embed uncertainty everywhere except in the objective they are set to maximise: the utility function and the cost/reward/loss function are assumed to be perfectly known. This is an approach shared, of course, with economics. There is a great need, Russell argues, to study planning and decision making with partial and uncertain information about preferences.

There are also difficult social welfare questions. It’s one thing to think about an AI system deciding for an individual, but what about groups? Utilitarianism has some well-known issues, much chewed over in social choice theory. But here we are asking AI systems to (implicitly) aggregate over individuals and make interpersonal comparisons. As I noted in my inaugural lecture, we’ve created AI systems that are homo economicus on steroids, and it’s far from obvious this is a good idea. In a forthcoming paper with some of my favourite computer scientists, we look at the implications of the use of AI in public decisions for social choice and politics. The principles also require being able to teach machines (and ourselves) a lot about the links between human behaviour, the decision environment and underlying preferences. I’ll need to think about it some more, but these principles seem a good foundation for developing AI systems that serve human purposes in a fundamental way, rather than in a short-term instrumental one.
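To make the second and third principles concrete, here is a toy Bayesian sketch (my construction, not Russell’s formal framework, which in the research literature takes the form of assistance games / cooperative inverse reinforcement learning): a machine starts uncertain between two hypotheses about what the human prefers, and updates its belief from observed, somewhat noisy, choices:

```python
# Toy sketch of 'uncertain about preferences, learning from behaviour'
# (my construction, not Russell's formal framework).
# Hypothesis A: the human prefers coffee; hypothesis B: the human prefers tea.
# The human is assumed to pick their preferred drink 80% of the time.

def update(prior_a: float, chose_coffee: bool, noise: float = 0.2) -> float:
    """Posterior probability of hypothesis A after one observed choice."""
    like_a = (1 - noise) if chose_coffee else noise       # P(choice | A)
    like_b = noise if chose_coffee else (1 - noise)       # P(choice | B)
    evidence = like_a * prior_a + like_b * (1 - prior_a)
    return like_a * prior_a / evidence

belief_a = 0.5  # maximally uncertain prior, per the second principle
for chose_coffee in [True, True, False, True, True]:      # observed behaviour
    belief_a = update(belief_a, chose_coffee)
    print(f"P(human prefers coffee) = {belief_a:.3f}")
```

The hard versions of the problem start where this sketch stops: behaviour that only imperfectly reveals preferences, and many humans whose preferences conflict – which is where the social choice questions bite.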

Anyway, Human Compatible is a terrific book. It doesn’t need any technical knowledge and indeed has appendices that are good explainers of some of the technical stuff. I also like that the book is rather optimistic, even about the geopolitical AI arms race: “Human-level AI is not a zero sum game and nothing is lost by sharing it. On the other hand, competing to be the first to achieve human level AI without first solving the control problem is a negative sum game. The payoff for everyone is minus infinity.” It’s clear that to solve the control problem, cognitive and social scientists and the AI community need to start talking to each other a lot – and soon – if we’re to escape the fate gorillas suffered at our hands.


Human Compatible by Stuart Russell


Thinking strategically about platforms

Digital platforms have been very much a focus of policy attention of late, with reports on the problems and challenges they raise published by the UK (several, including the Furman Review), Australia, Germany, the European Commission and others. The platforms these various reports discuss are the big ones, and the concerns range from competition policy to employment practices to online harms.

A terrific new book by Michael Cusumano, Annabelle Gawer and David Yoffie, The Business of Platforms, points out, though, that most digital platforms that are not big are dead: four in five fail. The book is aimed at people running or starting platforms, offering advice on (as the subtitle puts it) “Strategy in the Age of Digital Competition, Innovation and Power”. The book very nicely links business strategy to the underlying economic characteristics of digital, and in this respect is probably the best tech business book since Shapiro and Varian’s (now old, 1998) Information Rules.

It starts by pointing out that there is nothing inevitable about network effects (direct and indirect) kicking in: they have to be nurtured: “Companies and governments have to make the right strategic and policy decisions in order to drive strong network effects.” These can include technical standards, for example, or ensuring competition thrives at the right times and points. The book also distinguishes between two types of platform, requiring different strategies (although there are a growing number of hybrids). Innovation platforms create value by enabling third parties to develop products or services on top of the platform, while transaction platforms create value by matching different sides of a market.

Key challenges for all, though, involve solving the ‘chicken and egg’ problem (because different sides of the platform depend on each other) by appropriate pricing and cross-subsidy, and figuring out a business model. (And in my view the dependence of so many on advertising is a major weakness and can’t be sustained.) The book uses the framework to explore the many platform failures. It also has a chapter on how non-platform incumbents can respond to the digital challenge (it’s tough…), and looks briefly at issues such as the use and governance of data, the importance of working with regulators rather than against them, and recognising the responsibilities that come with (market and other) power. “Every major company we cited in this book has been the subject of government investigations, local regulatory oversight, and intense media scrutiny.”
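As a stylised illustration of the chicken-and-egg problem and why cross-subsidy helps (my toy model, not the book’s), consider a transaction platform where each side joins in proportion to the other side’s presence, net of the price it is charged:

```python
# Toy chicken-and-egg dynamics on a two-sided platform (my illustration, not
# the book's model). Each side's participation (0 to 1) rises with the other
# side's size and falls with the price that side is charged.

def simulate(price_buyers: float, price_sellers: float, periods: int = 30):
    buyers, sellers = 0.05, 0.05  # a small seed of early adopters on each side
    for _ in range(periods):
        buyers = max(0.0, min(1.0, 1.2 * sellers - price_buyers))
        sellers = max(0.0, min(1.0, 1.2 * buyers - price_sellers))
    return buyers, sellers

print(simulate(0.1, 0.1))    # charge both sides: adoption fizzles to (0, 0)
print(simulate(-0.2, 0.1))   # subsidise buyers: both sides tip to (1, 1)
```

Get the subsidised side wrong, or withdraw the subsidy before the feedback loop is self-sustaining, and the dynamics run in reverse – which is one way to read many of the book’s failure cases.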

All in all, highly recommended. If you know the economics, the case studies and management literature covered will be informative, and if you know the business details, the economic framework should be useful. I very much enjoyed reading it.



The Technology Trap

Anybody interested in the economic impact of digital and AI, in particular on jobs, will want to read Carl Frey’s new book, The Technology Trap: Capital, Labor and Power in the Age of Automation. He is probably best known for his rather gloomy work with Michael Osborne (original pdf version here) highlighting the vulnerability of many jobs – almost half in the US – to automation in the next couple of decades. The book expands on the issues that will determine the actual outcomes, and is – as the title indicates – still quite pessimistic.

The structure of the book is historical, with sections on pre-industrial technologies, the Industrial Revolution (which saw widening inequalities), the mass production era (which reduced inequalities and created an affluent middle class), the recent polarisation in the era of globalisation and digital, and future prospects. The key distinction Frey draws is between technologies which substitute for labour and those which complement it. Whereas the 19th century and the present seem to involve the replacement of people with machines, the 20th century’s innovations needed increasingly skilled labour to work with them.

Although I am probably not as gloomy about future prospects for work and incomes, I really enjoyed reading the book, which covers a wide range of technological applications in addition to the well-known historical examples. It leaves open two questions. One is about the present conjuncture: what explains the combination of seemingly rapid technological change and adoption with – in at least some OECD economies – very low unemployment rates? The answer might just be ‘long and variable lags’ but the question surely needs addressing.

The broader question, or set of questions, is really about the interaction between technology and labour market and other economic institutions. Although automation is likely to have the same general effects everywhere, the outcomes for workers will be refracted through very different national job markets, education systems, tax systems and so on. How much can any individual country lean successfully against the wind? Frey is not (unlike Robert Gordon) US-centric but does not get into these issues.

And beyond the response to technological change, what is it that determines the direction of technical change in the first place? The book treats labour substitution or complementarity as exogenous. But why did electric unit drives in auto plants and internal combustion engines turn out to complement labour, while automation in today’s car industry seems set to substitute for it? It seems to me this must be an institutional story too, but I don’t think it’s been told yet.

The Technology Trap