Tech self-governance

The question of how to govern and regulate new technologies has long interested me, including in the context of a Bennett Institute and Open Data Institute report on the (social welfare) value of data, which we'll be publishing in a few days' time. One of the pressing issues in crystallising the positive spillovers from data (so much of the attention in public debate focuses only on the negative spillovers) is the development of trustworthy institutions to handle access rights. We're going to be doing more work on the governance of technologies, taking a historical perspective – more on that another time.

Anyway, this interest made me delighted to learn – chatting to him at the annual TSE digital conference – that Stephen Maurer had recently published Self-governance in Science: Community-Based Strategies for Managing Dangerous Knowledge. It’s terrifically interesting & I recommend it to anyone interested in this area.

The book looks at two areas, commerce and academic research, in two ways: historical case study examples; and economic theory. There are examples of success and of failure in both commercial and academic worlds, and the economic models summarise the characteristics that explain whether or not self-governance can be sustained.

So for instance in the commercial world, food safety and sustainable fisheries standards have been adopted and largely maintained through private governance initiatives and mechanisms; synthetic biology much less so, with an alphabet soup of competing standards. Competitive markets are not well able to sustain private standards, Maurer suggests: "Competitive markets can only address problems where society has previously attached some price tag to the issue." Externalities do not carry these price tags. Hence supply chains with anchor firms are better able to bear the costs of compliance with standards – the big purchasing firm can require its suppliers to adhere.

Similarly, in the case of academic science the issue is whether there are viable mechanisms to force dissenting minorities to adhere to standards such as moratoria on certain kinds of research. The case studies suggest it is actually harder to bring about self-governance in scientific research as there are weaker sanctions than the financial ones at play in the commercial world. Success hinges on the community having a high level of mutual trust, and sometimes on the threat of formal government regulation. The book offers some useful strategies for scientific self-governance such as building coalitions of the willing over time (small p politics), and co-opting the editors of significant journals – as the race to publish first is so often the reason for the failure of research moratoria to last.

The one element I thought was largely missing from the analytical sections was the extent to which the character of the technologies or goods themselves affects the likelihood of successful self-governance. This is one aspect that has come up in our preparatory work – the cost and accessibility of different production technologies. The analysis here focuses instead on the costs of implementing standards, and on monitoring and enforcement.

This is a fascinating book, including the case studies, which range from atomic physics to fair trade coffee. It isn’t intended to be a practical guide (& the title is hardly the airport bookstore variety) but anybody interested in raising standards in supply chains or finding ways to manage the deployment of new technologies will find a lot of useful insights here.

Humans in the machine

There’s some very interesting insight into the human workforce making the digital platforms work in Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary Gray and Siddharth Suri. The book as a whole doesn’t quite cohere, though, nor deliver on the promise of the subtitle. The bulk of the book draws on interviews and surveys of people who work via platforms like Amazon’s famous Mechanical Turk, but also the internal Microsoft equivalent, UHRS, and a smaller social enterprise version, Amara.

This is all extremely interesting, about how people work – in the US and Bangalore – their tactics for making money, dealing with stress, how many hours they have to work and when, how much or little agency they have, and so on. Not least, it reminds or informs readers that a lot of AI is based on the labelling done by humans to create training data sets. However, not all the ghost work described is of this kind and some, indeed, has little to do with Silicon Valley except that a digital platform mediates between the employer and the seeker of work. As the authors note, this latter type is a continuation of the history of automation, the role of new pools of cheap labour in industrial capitalism, and the division of labour markets into privileged insiders and contingent – badly paid, insecure – outsiders. The new global underclass is just one step up from the old global underclass; at least they have a smartphone or computer and internet access.

The survey results confirm that some of the digital ghost workers value the flexibility they get reasonably highly – although with quite a high variance in the distribution. Not surprisingly, those with the least pressing need for income value the flexibility most. Some of the women workers in India also valued the connection to the labour market when they were unable to work outside the home because of childcare or family expectations. Similarly, with the Amara platform, “Workers can make ghost work a navigable path out of challenging circumstances, meeting a basic need for autonomy and independence that is necessary for pursuing other interests, bigger than money.”

The book’s recommendations boil down to urging that platforms introduce double bottom line accounting – in other words, find a social conscience alongside their desire for profit. Without a discussion of their (lack of) incentives to do so, this is a bit thin. Still, the book is well worth reading for fascinating anthropological insights from the field work, and for the reminder about the humans in the machine.

Automation, the future of work and giraffes

Daniel Susskind’s A World Without Work: Technology, Automation and How We Should Respond is a very nice overview of the issues related to technological unemployment – will it happen, how will it affect people, and what policy responses might make sense. As the book notes, it is impossible to predict the number/proportion of jobs that might be affected, or how quickly, with detailed studies coming up with numbers ranging from about a tenth to about a half. But that there will be disruption, and that past policies have not dealt well with the consequences, is far less uncertain. Even if you believe that the economy will in time adjust the types and amount of work available – and so in that sense this time is *not* different from the past – the transition could be painful.

The book has three sections. The first looks at the history of technological unemployment and why we might expect AI to lead to a new wave. The second sets out the task-based analysis introduced by David Autor and others to sketch how the character of people’s work can change significantly. While dismissing the lump of labour fallacy, it argues that one of the main symptoms will be increased inequality. It predicts, gloomily, that this will get worse and that some people will be left with no capital and redundant human capital, “leaving them with nothing at all.” I’m not sure that will be politically viable, judging from current events, but the logic is straightforward.

The final section turns to potential policy responses: improved education – heaven knows, we need that; ‘Big State’ – “a new institution to take the labour market’s place” – in effect more tax and a UBI; and tackling Big Tech through competition policy – yep, I’m definitely up for that. Finally, Susskind argues that part of the role for the Big State is to ensure we all have meaning in our working lives, replacing the job as the source of people’s identity, though I wasn’t sure how this should happen.

It’s a clearly written book, concisely covering ground that will be familiar to economists working on this territory, and providing a useful overview for those not familiar with the debate. Although I’m not a fan of UBI, the other policy prescriptions seem perfectly sensible – perhaps too sensible to be inspiring.

I must say that my other recent read has made me even more sceptical about the scope for AI to take over from humans. Recently I noted there has been a wave of terrific books on AI. Add to the list Janelle Shane’s You Look Like A Thing and I Love You. You’d be absolutely mad not to read this book. It had me in hysterics, while making it super-clear what’s hype and what’s realistic about current and near-future AI. And explaining why image recognition AI is so prone to seeing giraffes – many giraffes – where there are none. An absolute must-read.

Calculating the economy

One of the books I’ve read on this trip to the AEA/ASSA meetings in San Diego is The People’s Republic of Walmart by Leigh Phillips and Michal Rozworski. This is a very entertaining projection of the socialist calculation debate onto modern capitalism.

The starting point is the Simon/Coase realisation that big firms are internally planned economies – if it works for Walmart, why wouldn’t it work at larger scale? The authors’ hypothesis is that economic planning might work better now that we have so much more powerful computers and better data.

I’d recommend the book as an introduction to the socialist calculation debate for those unfamiliar with it (ideal for students). It cites some of my favourite books including Francis Spufford’s Red Plenty and Eden Medina’s Cybernetic Revolutionaries. Some chilling lines – about Stalin’s purges, for instance: “Anyone with any expertise was placed under suspicion.” It’s a great read.

Am I persuaded? Not entirely. Technology clearly will change organisational configurations, but it has driven just as much decentralisation of firms and extension of supply chains as it has giant Walmart-type firms. I’m also sceptical that the data available is actually the information needed to plan an economy, or that it’s easy to access and join up. Still, it’s the right question, and a reminder that the boundary between market, state and other forms of organisation is not set in stone but needs constant negotiation – in fact, I know a great book about this about to be published: Markets, State and People.

As seen at ASSA2020 in San Diego

Configuring the lumpy economy

Somebody at the University of Chicago Press has noticed how my mind works, and sent me Slices and Lumps: Division + Aggregation by Lee Anne Fennell. It’s about the implications of the reality that economic resources are, well, lumpy and variably slice-able. The book starts with the concept of configuration: how to divide up goods that exist in lumps to satisfy various claims on them, and how to put together ones that are separate to satisfy needs and demands. The interaction between law (especially property rights) and economics is obvious – the author is a law professor. So is the immediate implication that marginal analysis is not always useful.

This framing in terms of configuration allows the book to range widely over various economic problems. About two thirds of it consists of chapters looking at the issues of configuration in specific contexts such as finance, urban planning, and housing. Housing, for example, encompasses some physical lumpiness or indivisibilities and some legal or regulatory ones. Airbnb – where allowed – enables transactions over the excess capacity due to lumpiness, as home owners can sell temporary use rights.

The book is topped and tailed by some general reflections on lumping and slicing. The challenges are symmetric. The commons is a tragedy because too many people can access resources (slicing is too easy), whereas the anti-commons is a tragedy because too many people can block the use of resources. Examples of the latter include the redevelopment of a brownfield site where too many owners must agree to sell their land, and also patent thickets. Property rights can be both too fragmented and not fragmented enough. There are many examples of the way policy can shape choice sets by making them more or less chunky – changing tick sizes in financial markets, but also unbundling albums so people can stream individual songs. Fennell writes, “At the very least, the significance of resource segmentation and choice construction should be taken into account in thinking innovatively about how to address externalities.” Similarly, when it comes to personal choices, we can shape those by altering the units of choice – some are more or less binary (failing a test by a tiny margin is as bad as failing by a larger one), others involve smaller steps (writing a few paragraphs of a paper).

Woven through the book, too, are examples of how digital technology is changing the size of lumps or making slicing more feasible – from Airbnb to Crowd Cow, “an intermediary that enables people to buy much smaller shares of a particular farm’s bovines (and to select desired cuts of meat as well)” – whereas few of us can fit a quarter of a cow in the freezer. Fennell suggests renaming the ‘sharing economy’ the ‘slicing economy’. Technology is enabling both finer physical and finer time slicing.

All in all, a very intriguing book.