Law and technology, power and truth

I’d been looking forward to reading Between Truth and Power: The Legal Constructions of Informational Capitalism by Julie Cohen. I start out sympathetic to its key argument that the legal system is the product of its society – of material economic and technological conditions, and of political ones too. Current legal contests therefore reflect underlying changes in these same forces, and the legal system for the digital era is still in the process of being shaped. Law and the implementation of technologies in society influence each other – the ‘rules of the game’ are not exogenous.

There are indeed some interesting insights in the book. I most enjoyed some of the earlier parts, which are more descriptive, covering the extension of the idea of “intellectual property” to intangibles and ideas. This is not new – James Boyle for one has written superbly about it. But the detail here is interesting, plenty of nuggets about the US legal system and how truly, gobsmackingly awful it can be.

I also appreciated the chapter on regulation, and its basic point that many regulators are now having to “move into the software auditing business”, and indeed may have to evaluate software controls designed to evade regulation. There are implications here for analysing regulation in the digital economy in terms of hyper-asymmetric information and algorithmic complexity.

On the whole though, I was disappointed. The book is almost entirely US focused – and is upfront about it – but the US system is distinctive even compared to other common law jurisdictions. Nowhere else, for instance, has its 1st Amendment fetishism. It would have been terrific to have some reflections on the extra-territoriality of US lawmaking and court judgements in the digital domain.

The book has some tantalising reflections about the limitations of law, based on concepts of individual rights, in the face of collective effects: as I’ve been arguing for a while (eg here), digital power spells the end of individualism, including what we now call neoliberalism, which gave birth to it. More on this would have been great.

There’s also just too much sub-Zuboffian rhetoric (rather than argument) about the ‘surveillance-industrial complex’. I’m all too willing to believe this exists, so all the more disappointed when the analysis is so vague. There’s also a lot of allusion to Foucault – “biopolitics”, “governmentality” – in the book without it ever, as far as I spotted, actually citing and deploying The Birth of Biopolitics.

All in all, there’s a lot of detail in this book that didn’t, at least for me, cohere. Too many trees, not enough overview of the wood. Perhaps it’s time for me to try the highly-praised The Code of Capital by Katharina Pistor.


Chaos, tools and thoughts

It has been unsurprisingly hard to concentrate this week, but I did finish Everyday Chaos: Technology, Complexity and How We’re Thriving in a New World of Possibility by David Weinberger. Publishers love these long subtitles, and with one so long you might think there was no need to read the book. This one does the author a disservice. The book is nothing like the giddy Silicon Valley techno-optimistic tract it seems to indicate (and how badly that would have dated in current circumstances). I’d bill it as a follow-up to two much earlier books – The Cluetrain Manifesto of 2000 (of which Weinberger was a co-author) and Kevin Kelly’s 1994 Out of Control.

Everyday Chaos is concerned with the implications of AI everywhere and the always-on internet. Its broad hypothesis is that the business environment and world more broadly need to take complexity seriously: “At last we are moving from Chaos Theory to chaos practice.” (Chaos Theory being the flapping butterfly in one place causing a hurricane across the world thanks to the non-linear complex dynamics of weather systems.) That means expecting small interventions to sometimes have huge consequences. It implies organisations need to be ‘agile’ (ie flexible), open, less hung up on causality and more willing to live with (shifting) correlations.

It’s particularly interesting on the flexibility of the concept of interoperability, which can be made to build bridges between organisations in different ways. The book advocates a “networked, permeable” view of business rather than the hard boundaries we are used to thinking about. “The knocking down of old walls that were definitional of a business is better understood as a strategic and purposive commitment to increasing a business’s interoperability with the rest of the environment.” Rather than narrowing down to a small number of strategic options, Weinberger’s advice is: “In an interoperable world in which everything affects everything else, the strategic path forward may be to open as many paths as possible.”

The book also reminded me about the very interesting work by Andy Clark and his argument that our tools – pen and paper, whiteboard, screen, spreadsheet – determine how we think (this is a terrific New Yorker profile and here’s a famous paper with Dave Chalmers on the ‘extended mind’). Knowledge is a function of what’s outside our heads. As Weinberger concludes, we are neither simply an effect of things (technodeterminism), nor do we straightforwardly cause things. New tools – machine learning – will end up with us understanding the world in a different way.

Everyday Chaos is also really well written and engaging, so it’s well worth ignoring the airport bookshop packaging. Not that there will be many chances to buy in airport bookstores for quite a while…


Tech self-governance

The question of how to govern and regulate new technologies has long interested me, including in the context of a Bennett Institute and Open Data Institute report on the (social welfare) value of data, which we’ll be publishing in a few days’ time. One of the pressing issues in order to crystallise the positive spillovers from data (and so much of the attention in public debate only focuses on the negative spillovers) is the development of trustworthy institutions to handle access rights. We’re going to be doing more work on the governance of technologies, taking a historical perspective – more on that another time.

Anyway, this interest made me delighted to learn – chatting to him at the annual TSE digital conference – that Stephen Maurer had recently published Self-governance in Science: Community-Based Strategies for Managing Dangerous Knowledge. It’s terrifically interesting & I recommend it to anyone interested in this area.

The book looks at two areas, commerce and academic research, in two ways: historical case study examples; and economic theory. There are examples of success and of failure in both commercial and academic worlds, and the economic models summarise the characteristics that explain whether or not self-governance can be sustained.

So for instance in the commercial world, food safety and sustainable fisheries standards have been adopted and largely maintained through private governance initiatives and mechanisms; synthetic biology much less so, with an alphabet soup of competing standards. Competitive markets are not well able to sustain private standards, Maurer suggests: “Competitive markets can only address problems where society has previously addressed some price tag to the issue.” Externalities do not carry these price tags. Hence supply chains with anchor firms are better able to bear the costs of compliance with standards – the big purchasing firm can require its suppliers to adhere.

Similarly, in the case of academic science the issue is whether there are viable mechanisms to force dissenting minorities to adhere to standards such as moratoria on certain kinds of research. The case studies suggest it is actually harder to bring about self-governance in scientific research as there are weaker sanctions than the financial ones at play in the commercial world. Success hinges on the community having a high level of mutual trust, and sometimes on the threat of formal government regulation. The book offers some useful strategies for scientific self-governance such as building coalitions of the willing over time (small p politics), and co-opting the editors of significant journals – as the race to publish first is so often the reason for the failure of research moratoria to last.

The one element I thought was largely missing from the analytical sections was the extent to which the character of the technologies or goods themselves affect the likelihood of successful self-governance. This is one aspect that has come up in our preparatory work – the cost and accessibility of different production technologies. The analysis here focuses on the costs of implementing standards, and on monitoring and enforcement.

This is a fascinating book, including the case studies, which range from atomic physics to fair trade coffee. It isn’t intended to be a practical guide (& the title is hardly the airport bookstore variety) but anybody interested in raising standards in supply chains or finding ways to manage the deployment of new technologies will find a lot of useful insights here.


Humans in the machine

There’s some very interesting insight into the human workforce making the digital platforms work in Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary Gray and Siddharth Suri. The book as a whole doesn’t quite cohere, though, nor deliver on the promise of the subtitle. The bulk of the book draws on interviews and surveys of people who work via platforms like Amazon’s famous Mechanical Turk, but also the internal Microsoft equivalent, UHRS, and a smaller social enterprise version, Amara.

This is all extremely interesting, about how people work – in the US and Bangalore – their tactics for making money, dealing with stress, how many hours they have to work and when, how much or little agency they have, and so on. Not least, it reminds or informs readers that a lot of AI is based on the labelling done by humans to create training data sets. However, not all the ghost work described is of this kind and some, indeed, has little to do with Silicon Valley except that a digital platform mediates between the employer and the seeker of work. As the authors note, this latter type is a continuation of the history of automation, the role of new pools of cheap labour in industrial capitalism, and the division of labour markets into privileged insiders and contingent – badly paid, insecure – outsiders. The new global underclass is just one step up from the old global underclass; at least they have a smartphone or computer and internet access.

The survey results confirm that some of the digital ghost workers value the flexibility they get reasonably highly – although with quite a high variance in the distribution. Not surprisingly, those with least pressing need for income most value the flexibility. Some of the women workers in India also valued the connection to the labour market when they were unable to work outside of their home because of childcare or family expectations. Similarly, with the Amara platform, “Workers can make ghost work a navigable path out of challenging circumstances, meeting a basic need for autonomy and independence that is necessary for pursuing other interests, bigger than money.”

The book’s recommendations boil down to recommending that platforms should introduce double bottom line accounting – in other words, find a social conscience alongside their desire for profit. Without a discussion of their (lack of) incentives to do so, this is a bit thin. Still, the book is well worth reading for fascinating anthropological insights from the field work, and for the reminder about the humans in the machine.

 

Automation, the future of work and giraffes

Daniel Susskind’s A World Without Work: Technology, Automation and How We Should Respond is a very nice overview of the issues related to technological unemployment – will it happen, how will it affect people, and what policy responses might make sense. As the book notes, it is impossible to predict the number/proportion of jobs that might be affected, or how quickly, with detailed studies coming up with numbers ranging from about a tenth to about a half. But that there will be disruption, and that past policies have not dealt well with the consequences, is far less uncertain. Even if you believe that the economy will in time adjust the types and amount of work available – and so in that sense this time is *not* different from the past – the transition could be painful.

The book has three sections. The first looks at the history of technological unemployment and why we might expect AI to lead to a new wave. The second sets out the task-based analysis introduced by David Autor and others to sketch how the character of people’s work can change significantly. While dismissing the lump of labour fallacy, it argues that one of the main symptoms will be increased inequality. It predicts, gloomily, that this will get worse and that some people will be left with no capital and redundant human capital, “leaving them with nothing at all.” I’m not sure that will be politically viable, judging from current events, but the logic is straightforward.

The final section turns to potential policy responses: improved education – heaven knows, we need that; ‘Big State’ – “a new institution to take the labour market’s place” – in effect more tax and a UBI; and tackling Big Tech through competition policy – yep, I’m definitely up for that. Finally, Susskind argues that part of the role for the Big State is to ensure we all have meaning in our working lives, replacing the job as the source of people’s identity, though I wasn’t sure how this should happen.

It’s a clearly-written book, covering concisely ground that will be familiar to economists working on this territory, and providing a useful overview for those not familiar with the debate. Although I’m not a fan of UBI, the other policy prescriptions seem perfectly sensible – perhaps too sensible to be inspiring.

I must say that my other recent read has made me even more sceptical about the scope for AI to take over from humans. Recently I noted there has been a wave of terrific books on AI. Add to the list Janelle Shane’s You Look Like A Thing and I Love You. You’d be absolutely mad not to read this book. It had me in hysterics, while making it super-clear what’s hype and what’s realistic about current and near-future AI. And explaining why image recognition AI is so prone to seeing giraffes – many giraffes – where there are none. An absolute must-read.
