Goliaths everywhere

James Bessen’s The New Goliaths is one of my books of the year so far (with a fashionably chatty subtitle). Indeed, I’d been looking forward to it because I liked his previous one, Learning By Doing, so much. Based on his impressive research on technology over a number of years, and on his prior experience as the founder of a successful early digital startup, the core of the argument is that a small number of (generally) large companies have built IT systems that can manage immense complexity in their operations. Sophisticated software and massive flows of data enable them to co-ordinate in previously unimaginable ways, delegating decisions to where the information can go. The complexity – say of a new model of software-laden car or a major retailer’s logistics system – increases the cost of entry for potential competitors. The Goliaths are to be found not just in ‘Big Tech’, but in many sectors of the economy.

What’s more, “The investment in software is only part of the total investment in these systems. The entire technology investment that firms make in these proprietary systems goes well beyond software code to include data, workforce skills and investments in alternative organizational structures.” An example used throughout the book is Walmart – which McKinsey found accounted for a substantial proportion of the US 1990s productivity boost. Somewhat counter-intuitively, at least for those who see Big Tech as the main competition problem, Bessen sees Walmart as the unassailable incumbent in US retail, whereas Amazon is the one example of successful entry, and one offering a platform to other retailers.

This dynamic, of superstar firms in many industries from retail to autos to finance with a widening productivity advantage, has consequences for income inequality: the workers in those firms are paid more because they gain invaluable experience simply by working in the superstar companies, so wages are dispersing within sectors. The skills are scarce precisely because you have to work for a big, sophisticated, complex firm to acquire them. It has also led to less dynamism – fewer entries and exits in many markets. Small firms simply can’t match the R&D spending of the big ones: one example given is voice recognition software, where the pioneer Nuance was a massive commercial success but still couldn’t keep up: Amazon (again) has more than 10,000 engineers working on Alexa products, more than ten times the number Nuance had at its peak. “Proprietary information technology is exacerbating economic and social divisions. It is widening the gaps between the pay of workers at different firms. It is leading to greater segregation of skill groups across firms and cities.”

The complexity dynamic has implications too for competition policy – which becomes challenging, because after all the superstars generally offer great services – and regulation more broadly – because the information asymmetry between company and regulator grows ever wider.

So what to do? The book advocates mandating open standards, more compulsory licensing, reforming IP law to tilt the incentives for big companies toward more voluntary unbundling of their services, and clamping down on worker non-compete agreements to spread skills. All excellent, and ultimately inevitable policies, as the inequalities are socially and politically unsustainable. But there’s much devil in the detail, and there will be massive lobbying against change. So this is a political struggle rather than a technocratic one.

But that’s to wander off into the future. I highly recommend The New Goliaths. It synthesizes a growing body of research into how firms use technology, how that interacts with organisational structures and markets, and what the consequences are. It’s also really well-written, with lots of examples and a grounded understanding of the realities and limits of technology policy.


Whose brain is it?

This weekend – despite having a small and demanding visitor – I polished off The Idea of the Brain: A History by Matthew Cobb. It has nothing to do with economics of course, but there are a number of things that resonated with me.

One was the way the Nobel-winning Cambridge brain scientist Edgar Adrian brought wartime experience of radio technology to thinking about how neurons respond to stimulus – and also how his commitment to public communication led him to “think about what nerves do in a rather different way from that expressed in his scientific papers”, hunting for terms that would help non-experts understand. As Cobb puts it: “These concepts – messages, codes and information – now form the basis of our fundamental scientific ideas about how the brain works.” I’ve always been deeply committed to (a) explaining things clearly and in ways people can understand and (b) the view that you can’t explain something like this if you don’t properly understand it yourself.

There’s an interesting but brief discussion of the contrast between reductionist approaches to understanding the brain (which seem dominant) and others pointing to the emergence of complex phenomena from a few simple neural networks. I don’t know what to think of it in this context, but the path of reductionism hasn’t served economics all that well.

Then there’s an extraordinary story about an Australian patient who had an electrode implanted in her brain to manage her severe epilepsy. When she grew used to the device, it transformed her life: “With the device I felt like I could do anything …. nothing could stop me.” But the manufacturer went bust and the device had to be removed. The woman said: “I lost myself.” Horrific. How can the law and economic arrangements enable such a thing to happen? Cobb writes: “In a future world where companies are funding interfaces with our brains, we may lose control over our identity.” This goes to the heart of debates about health data as well, and the legal construct that allows data to be alienated as a piece of economic property. How can an electrode in a woman’s brain be corporate property? Well, in the way General Motors claims it still owns the car you bought because your car sends data back to GM. Watch out for ‘smart’ hip replacements or pacemakers….

Finally, another section that spoke to me points out that ‘The Brain Has a Body’ (the title of a 1997 article) and the body has an environment – “but neither the body nor the environment feature in modelling approaches that seek to understand the brain.” The input from the world is part of the system in which brains operate.

The book’s history of thought approach is terrific, as is its linkage of technological innovations (watches, computers, radio….) with how scientists have thought about brains – I found this a really gripping and informative read.


Casting off the digital chains

The joy of travel to conferences – conversations with people you wouldn’t set up a Zoom to chat to, hearing new research you wouldn’t dial in to a webinar to hear, and the travel time for reading. It definitely (still) outweighs the aggravation of checking in. My recent Eurostar journeys saw me through Jamie Susskind’s The Digital Republic: On Freedom and Democracy in the 21st Century.

We held a workshop on a first draft of the book at the Bennett Institute, where Jamie is an affiliated researcher, so I suppose I’m bound to look kindly on it. It’s an ambitious and largely persuasive diagnosis of the political problems caused by the power of big tech, and a set of proposed solutions. It’s thoroughly argued – as you’d expect from a skilled lawyer – and avoids the sweeping generalisations and analytical woolliness of some well-known attacks on the digital giants.

The book has many parts. It starts by setting out the fact of digital power and why this is a significant political problem – and one created by law: “There is nothing natural about the power of modern business corporations. They accrue power because the law allows them to.” It follows this with an analysis of the more granular problems, such as dark patterns, the coding of significant ethical decisions, the pretence that ticking the T&C box is meaningful consent (“When we click ‘I agree’ we are usually surrendering, not agreeing”), and so on.

The book then moves on to set out four republican principles (NB not in the US party sense of the word, but in the engaged, free citizen sense):

  • the preservation principle: democracy must be protected
  • the domination principle: no power can be unaccountable
  • the democracy principle: powerful tech should reflect the moral and civic values of those it affects
  • the parsimony principle: governments shouldn’t constrain companies more than they have to.

All seem perfectly reasonable, at least in the abstract. Then, refreshingly for a book about the need for countervailing power to big tech, Jamie has a lot of proposals. Among them are:

  • deliberation and mini-publics to determine the limits of digital behaviours
  • tribunals to hear complaints about treatment by big tech
  • professional certification of software and ML engineers
  • an inspectorate of algorithmic decision making
  • a duty of openness on tech companies
  • more appropriate anti-trust policies
  • minimum standards for social media platforms eg to prevent harassment, foreign interference in elections etc

I don’t agree with all of the suggestions – some (eg compulsory licensing for anyone who codes software) seem to fall foul of the parsimony principle – but that isn’t the point; the point is that governments are not powerless in the face of digital power. Indeed there is an awful lot they could be doing to tackle the ills while recognising the great benefits the tech titans have brought.

Anyway, I enjoyed reading the book, and relished its can-do approach. I agree about the need for constraint on unaccountable digital power. Was I wholly persuaded? No, and mainly because there’s an assumption of beneficent governments doing all this on our behalf. It’s reasonable to doubt either their goodwill or their competence. More on this in my next post after I’ve finished reading Deirdre McCloskey’s latest offering.

Other reviews: John Naughton in The Guardian, Andrew Lilico in The Telegraph….

What comes after the Knowledge Economy?

Nick O’Donovan’s Pursuing the Knowledge Economy: A Sympathetic History of High-Skill, High-Wage Hubris is an interesting evaluation of the policy consensus of the 1990s and early 2000s concerning the opportunities afforded by digital technology and globalisation for a transition to better jobs in the western economies. My own 1997 book The Weightless World (25 years ago this year!) features as one of several capturing that economic policy zeitgeist, what the book terms (following Peter Hall) a ‘growth regime’ – “a set of economic policy ideas and assumptions that are underwritten politically by a distinctive coalition of supporters.” In other words, a mental model of how the economy can prosper that becomes the policy zeitgeist. (Although I’d challenge the focus of this concept on just firms and governments: it ignores the household and voluntary sectors, both of which are large and being affected by digital technology.)

The book traces the historical evolution of the policy ideas, mainly on the centre left (Blair & Brown in the UK, Clinton in the US), and the way the financial crisis torpedoed any optimism about new opportunities to upskill the workforce and create satisfying new jobs. It also politely critiques the knowledge economy analysis itself, essentially for unfounded optimism about the scope for new technologies to complement rather than substitute for labour, and for ignoring the transition costs associated with globalisation. O’Donovan also points out the policy blind spot – until very recently – about increasing market concentration and rent extraction. Just a few people have long pointed to this as a source of economic problems, among them Brett Christophers.

There certainly was plentiful hype – Thomas Friedman’s books leap to mind. I’d defend The Weightless World on the basis that it predicted greater spatial and income inequalities and paid attention to social tensions, and would argue that I was prescient (much too soon…) about gig employment patterns and new digital currencies. But I also overlooked the costs for some people of the reordering of global production, or at least assumed policy would be sensible enough to look after them. At the same time, the Knowledge Economy paradigm did encapsulate some significant changes under way in the structure of the economy. I think O’Donovan somewhat underplays the insights while rightly pointing out the irrational exuberance.

The book ends by linking the failure of the Knowledge Economy growth regime to post-2008 political polarisation, and hoping that the current period of disillusion and volatility will in fact prove an opportunity for a new, better “growth regime” to develop. Let us hope….


Decarbonising travel – room for optimism?

Our new Perspectives title, Good To Go: Decarbonising Travel After the Pandemic by David Metz is out. It looks at how the pandemic has affected pre-existing trends in travel – not as much as optimists might have hoped, is his conclusion, although he recognises that it is probably too early to know whether commuting patterns will change permanently. Nevertheless, improved neighbourhood planning and flexible working could capture some of the benefits of the pandemic years even if there is a significant reversion to the old normal.

However, one of David’s points is that people relish mobility, with faster travel always having translated into travelling further. That means that tackling transport’s contribution to the climate challenge will need technological contributions. The book holds out some hope for electric vehicles and digital tools contributing to decarbonising transport. But as David points out, the system is complex, involving economics, demography, technology, policy and human behaviour. There is a lot of wishful thinking about how easy it will be to change. Complexity means (as so often in policy) there is no easy solution. The book outlines a range of investments and policy interventions that will be needed to decarbonise travel.

Good To Go is a terrific addition to our Perspectives roster on transport: David’s previous book, Travel Fast or Smart?; Transport for Humans: Are we nearly there yet? by Pete Dyson and Rory Sutherland; and Are Trams Socialist? Why Britain Has No Transport Policy and Driverless Cars: On a Road to Nowhere by Christian Wolmar. They complement each other wonderfully – a great collection for transport nerds! As a special offer for readers of this blog, all five (£71 if bought individually) are available for £45 plus postage if you email sam@londonpublishingpartnership.co.uk.
