The future of the factory

I read my colleague Jostein Hauge's The Future of the Factory in proof and never got round to the finished book until now. It's a very nice synthesis of the impact of four 'megatrends' on manufacturing. These four are the rise of the service sector as a share of output, digital automation, globalisation and ecological crisis.

After an introductory chapter setting industrial policy in historical perspective – opening with Alexander Hamilton in life as well as on stage – each trend gets a chapter on how it is shaping industrial activity. One conclusion is that the phenomenon of 'servitisation' in manufacturing – including the outsourcing of associated high-value services – is significant and can lead people to underestimate the importance of manufacturing. The book also argues that the impact of digital automation is exaggerated: it will displace some activities and tasks but there is a lot of hype. It argues too that the retreat from globalisation is similarly overstated, and that the debate disguises power asymmetries between western multinationals and firms in their low- or middle-income supply chain countries. And the environmental crisis is a further source of this economic and political asymmetry.

The conclusion is that, "in a world of technological change and disruptions, industrialization and factory-based production remains a cornerstone of economic progress." Jostein welcomes the recent revival of industrial policies but calls for a focus now on the global South and the place these countries have in the network of production. The book ends with a call for a fairer kind of capitalism than the current model.

All of this is packed into a compact and very readable book. And I'm glad I'm not the only person who saw Hamilton The Musical and wished there had been more economic policy in the show…

State-sponsored cyber-crime

The Lazarus Heist by Geoff White is a fascinating exposition of North Korea's role in unleashing large-scale hacking and cybercrime on the world, in its efforts to bypass sanctions and bring in money to the benighted (literally) country. The subtitle spells it out: From Hollywood to High Finance: Inside North Korea's Global Cyber War. The author is an expert cybercrime journalist, which ensures the book is a cracking read. An early chapter describes the 2014 hack of Sony Pictures, which had started trailing a movie portraying Kim Jong Un in an unfavourable light. But the crimes reported are mainly about the intersection between hacking and the banking system, and organised crime, because the money has to be laundered and conveyed to North Korea or at least to places the regime can spend it. So as well as an elite computer hacking corps, the book describes the process of laundering cash through Macau casinos, or Sri Lankan charities, withdrawing notes from ATMs in central India, and trucking tonnes of cash around the Philippines. And then there's crypto, the land where grift meets large-scale crime.

Apart from the book being a terrific read, what conclusions to take away? That too few people have really internalised the advice not to open email attachments or click on links. That the mesh of banking regulation increases the burden on the honest without much hindering the criminals. That economists/finance folks pay far, far too little attention to the criminal economy (one consequence of the profession's laziness in studying only data that can easily be found online – looking at small questions with cool econometrics where the lamp happens to be shining, rather than the big, important questions). And that everybody should be very worried about cybersecurity. I learned so much from the book about the vulnerability of everyday life to online attacks from a hostile state like North Korea – and no doubt the other obvious potential attackers. The WannaCry impact on the NHS is a sobering example.

Finally, the book is co-published between Penguin and the BBC; the World Service hosted The Lazarus Heist podcast. In the maelstrom of misinformation we live in, the BBC is more important than it has ever been.


De-gilding the age

I’ve been reading Mordecai Kurz’s The Market Power of Technology: Understanding the Second Gilded Age, in between more summer-holiday type books (halfway through Paul Murray’s excellent The Bee Sting now). Kurz’s underlying argument is one I find plausible: technical innovation by corporations (on a platform of publicly-funded basic scientific research) drives growth, but corporations translate innovation into monopoly power and rents. Policy alternates between lax and tough competition enforcement, the latter limiting the period of monopoly power. In between, there have been gilded ages.

The book distinguishes the return to capital productively employed from wealth, the accumulation of those rents. It argues that “all intangible assets are just different forms of monopoly wealth” – clearest for IP assets that explicitly guarantee firms’ monopolies. The book argues for prevention of tech mergers, break-up of vertically integrated parts of big corporations, and limitations on the granting of patents and copyright. Tech-based market power cannot be avoided but it should be contained.

The book combines economic and business history with an extended formal model of Kurz’s approach (and this means it is probably not a book for the general reader). The formal modelling is actually the part I found least compelling – particularly in Chapter 5, which for example assumes the monopoly producer has a constant returns to scale production function. This chapter estimates that monopoly power led to delays of 12-15 years in the diffusion of electricity in the US, but – unless I missed a key step – the calculation seems not to take account of the impact of scale effects, which would shorten those estimates.

The previous chapter has an intriguing chart (4.9): the 1950s to late 1970s are reported as a period of high monopoly profits – like the 1920s and the 2000s onwards – yet were obviously a period of strong productivity growth and rising living standards. Kurz explains these decades as not being designated a gilded age because policy ensured rising real wages and high employment. But actually if monopoly wealth brings about rapid growth through self-reinforcing technological innovation, it would be nice to have more of that. The policy lesson seems to be more about redistribution and labour market policies than about competition enforcement to limit the monopoly rents. The periods of low wealth and low market power in this historical chart were periods of weak growth or worse.

I’d also like to have had more about countries other than the US, and indeed some other examples – is Walmart a tech monopoly? Or Nike? Few other countries span as much of the technology frontier as the US, so diffusion becomes the more important issue, and market power protected by IP and other tactics can be deployed anywhere. But wealth inequality is high in many countries – are all characterised by companies garnering monopoly rents and if so how?

Still, the book does provide a coherent theoretical framework for the many recent books that have addressed the issue of market concentration and particularly big tech. It’s an interesting framing of current growth challenges, and one I broadly agree with. And Kurz’s call for tougher competition policy echoes many others. We will see whether it will translate into tougher enforcement and an end to this second gilded age.


The path not taken in Silicon Valley

The Philosopher of Palo Alto: Mark Weiser, Xerox PARC, and the original Internet of Things by John Tinnell is a really interesting read in the context of the latest developments in AI. I do have a boundless appetite for books about the history of the industry, and was intrigued by this as I’d never heard of Mark Weiser. The reason for that gap, even though he ran the computer science lab at Xerox PARC, is probably that his philosophy of computing lost out. In a nutshell, he was strongly opposed to tech whose smartness involved making people superfluous.

Based on his reading of philosophers from Heidegger and (Michael) Polanyi to Merleau-Ponty, Weiser opposed the Cartesian mind-body dualism involved in Turing’s (1950) paper and the subsequent development of late 20th century digital technologies focused on ‘machines that think’, electronic brains. He aimed to develop computing embedded in the environment to support humans in their activities, rather than computing via screens that aimed to bring the world to people but through a barrier of processing. In one talk, he gave the analogy of what makes words useful. Libraries gather many words in a central location and are of course very useful. But words that ‘disappear’ into the environment are also useful, like street signs and labelling on packages in the supermarket. Nobody would be able to shop efficiently if there were no words on the soup cans, and they had to go to a library to refer to a directory of shelf locations to find the tomato flavour.

Weiser emphasised also the role of the human body in knowledge and communication: “The human body, whatever form it took, was a medium not a machine.” In a dualist conception of mind and body it seems to be reasonable to think about a machine substituting for the activities of the mind. But the body’s senses are not information processors, and cannot be substituted by digital sensors. Embodied human experience in the world is part of human knowledge. Weiser became highly sceptical of the industry’s trajectory whereby software more and more “dictated what could and could not happen in a place.” Rather than mediating between the physical world and humans, tech should be looking to augment the material world in useful ways (hence the subtitle about the original Internet of Things).

Weiser died young, another possible reason why he is not better known. One can imagine though what he would have thought of generative AI. The book’s Introduction ends with a quote from Lewis Mumford: “The machine is just as much a creature of thought as the poem.” These AI products have been imagined as disembodied brains that get in the way of our direct experience of the world and indeed increasingly limit our ability to shape the world we want. A really interesting read, and one that will send me off to read other things – including the work of a PARC ethnographer who is really the second hero of this book, Lucy Suchman.


Not engineering

This week I read Herbert Simon’s The Sciences of the Artificial, first published in 1969, and must confess to not knowing what to make of it. He contrasts ‘the artificial’ to ‘the natural’ but also more specifically applies it to complex systems designed to adapt to their environment, and compares arenas such as economics, management, and engineering systems. The point is then to analyse what properties they have in common, on the basis of information theory. The aim is evidently to present a common analysis of the various types of system considered. So far, so good. And as Professor Google tells me, the book was very influential in the design of computer systems and design theory more generally.

Beyond that? Well, I asked Bard, which said:

“The book argues that artificial systems can be studied scientifically, just like natural systems. Simon defines an artificial system as “a system that has been brought into being by human design.” He argues that artificial systems are not just imitations of natural systems, but are in fact a distinct class of systems with their own properties and laws.

Simon then goes on to discuss the various aspects of artificial systems that can be studied scientifically. He covers topics such as the design of algorithms, the nature of problem solving, and the role of heuristics in decision making. He also discusses the relationship between artificial systems and human intelligence.”

Very timely, then. But I’m hard pushed to summarise what the conclusions are, and allergic to the strong positivism of it. As readers of Cogs and Monsters will know, I think economic systems differ from engineering systems in important ways, and are not amenable to exactly the same kind of ‘scientific’ analysis. The ‘sciences of the artificial’ seem like they do well in analysis of algorithmic systems, but not so much – contrary to the claim in the book – for education, art and architecture, or indeed economics.
