State-sponsored cybercrime

The Lazarus Heist by Geoff White is a fascinating exposition of North Korea’s role in unleashing large-scale hacking and cybercrime on the world, in its efforts to bypass sanctions and bring in money to the benighted (literally) country. The subtitle spells it out: From Hollywood to High Finance: Inside North Korea’s Global Cyber War. The author is an expert cybercrime journalist, which ensures the book is a cracking read. An early chapter describes the 2014 hack of Sony Pictures, which had started trailing a movie portraying Kim Jong Un in an unfavourable light. But the crimes reported are mainly about the intersection between hacking, the banking system and organised crime, because the money has to be laundered and conveyed to North Korea, or at least to places the regime can spend it. So as well as an elite computer hacking corps, the book describes the laundering of cash through Macau casinos or Sri Lankan charities, the withdrawal of notes from ATMs in central India, and the trucking of tonnes of cash around the Philippines. And then there’s crypto, the land where grift meets large-scale crime.

Apart from the book being a terrific read, what conclusions to take away? That too few people have really internalised the advice not to open email attachments or click on links. That the mesh of banking regulation increases the burden on the honest without much hindering the criminals. That economists/finance folks pay far, far too little attention to the criminal economy (one consequence of the profession’s laziness in studying only data that can easily be found online – looking at small questions with cool econometrics where the lamp happens to be shining, rather than at the big, important questions). And that everybody should be very worried about cybersecurity. I learned so much from the book about the vulnerability of everyday life to online attacks from a hostile state like North Korea – and no doubt from the other obvious potential attackers. The WannaCry impact on the NHS is a sobering example.

Finally, the book is co-published by Penguin and the BBC; the World Service hosted The Lazarus Heist podcast. In the maelstrom of misinformation we live in, the BBC is more important than it has ever been.


De-gilding the age

I’ve been reading Mordecai Kurz’s The Market Power of Technology: Understanding the Second Gilded Age, in between more summer-holiday type books (halfway through Paul Murray’s excellent The Bee Sting now). Kurz’s underlying argument is one I find plausible: technical innovation by corporations (on a platform of publicly-funded basic scientific research) drives growth, but corporations translate innovation into monopoly power and rents. Policy alternates between lax and tough competition enforcement, the latter limiting the period of monopoly power; the lax intervals have produced gilded ages.

The book distinguishes the return to capital productively employed from wealth, the accumulation of those rents. It argues that “all intangible assets are just different forms of monopoly wealth” – clearest for IP assets that explicitly guarantee firms’ monopolies. The book argues for the prevention of tech mergers, the break-up of vertically integrated parts of big corporations, and limits on the granting of patents and copyright. Tech-based market power cannot be avoided, but it should be contained.

The book combines economic and business history with an extended formal model of Kurz’s approach (and this means it is probably not a book for the general reader). The formal modelling is actually the part I found least compelling – particularly in Chapter 5, which for example assumes the monopoly producer has a constant returns to scale production function. This chapter estimates that monopoly power led to delays of 12-15 years in the diffusion of electricity in the US, but – unless I missed a key step – the calculation seems not to take account of the impact of scale effects, which would shorten those estimates.

The previous chapter has an intriguing chart (4.9): the 1950s to late 1970s are reported as a period of high monopoly profits – like the 1920s and the 2000s onwards – yet they were obviously a period of strong productivity growth and rising living standards. Kurz explains these decades as not being designated a gilded age because policy ensured rising real wages and high employment. But actually, if monopoly wealth brings about rapid growth through self-reinforcing technological innovation, it would be nice to have more of that. The policy lesson seems to be more about redistribution and labour market policies than about competition enforcement to limit the monopoly rents. The periods of low wealth and low market power in this historical chart were periods of weak growth or worse.

I’d also like to have had more about countries other than the US, and indeed some other examples – is Walmart a tech monopoly? Or Nike? Few other countries span as much of the technology frontier as the US, so diffusion becomes the more important issue, and market power protected by IP and other tactics can be deployed anywhere. But wealth inequality is high in many countries – are all characterised by companies garnering monopoly rents and if so how?

Still, the book does set within a coherent theoretical framework the many recent books that have addressed the issue of market concentration, and particularly big tech. It’s an interesting framing of current growth challenges, and one I broadly agree with. And Kurz’s call for tougher competition policy echoes many others. We will see whether it will translate into tougher enforcement and an end to this second gilded age.


The path not taken in Silicon Valley

The Philosopher of Palo Alto: Mark Weiser, Xerox PARC, and the original Internet of Things by John Tinnell is a really interesting read in the context of the latest developments in AI. I do have a boundless appetite for books about the history of the industry, and was intrigued by this as I’d never heard of Mark Weiser. The reason for that gap, even though he ran the computer science lab at Xerox PARC, is probably that his philosophy of computing lost out. In a nutshell, he was strongly opposed to tech whose smartness involved making people superfluous.

Based on his reading of philosophers from Heidegger and (Michael) Polanyi to Merleau-Ponty, Weiser opposed the Cartesian mind-body dualism involved in Turing’s (1950) paper and the subsequent development of late 20th century digital technologies focused on ‘machines that think’, electronic brains. He aimed to develop computing embedded in the environment to support humans in their activities, rather than computing via screens that aimed to bring the world to people but through a barrier of processing. In one talk, he gave the analogy of what makes words useful. Libraries gather many words in a central location and are of course very useful. But words that ‘disappear’ into the environment are also useful, like street signs and labelling on packages in the supermarket. Nobody would be able to shop efficiently if there were no words on the soup cans, and they had to go to a library to refer to a directory of shelf locations to find the tomato flavour.

Weiser also emphasised the role of the human body in knowledge and communication: “The human body, whatever form it took, was a medium not a machine.” In a dualist conception of mind and body, it seems reasonable to think of a machine substituting for the activities of the mind. But the body’s senses are not information processors, and cannot be substituted by digital sensors. Embodied human experience in the world is part of human knowledge. Weiser became highly sceptical of the industry’s trajectory, whereby software more and more “dictated what could and could not happen in a place.” Rather than mediating between the physical world and humans, tech should be looking to augment the material world in useful ways (hence the subtitle about the original Internet of Things).

Weiser died young, another possible reason why he is not better known. One can imagine though what he would have thought of generative AI. The book’s Introduction ends with a quote from Lewis Mumford: “The machine is just as much a creature of thought as the poem.” These AI products have been imagined as disembodied brains that get in the way of our direct experience of the world and indeed increasingly limit our ability to shape the world we want. A really interesting read, and one that will send me off to read other things – including the work of a PARC ethnographer who is really the second hero of this book, Lucy Suchman.


Not engineering

This week I read Herbert Simon’s The Sciences of the Artificial, first published in 1969, and must confess to not knowing what to make of it. He contrasts ‘the artificial’ to ‘the natural’ but also more specifically applies it to complex systems designed to adapt to their environment, and compares arenas such as economics, management, and engineering systems. The point is then to analyse what properties they have in common, on the basis of information theory. The aim is evidently to present a common analysis of the various types of system considered. So far, so good. And as Professor Google tells me, the book was very influential in the design of computer systems and design theory more generally.

Beyond that? Well, I asked Bard, which said:

“The book argues that artificial systems can be studied scientifically, just like natural systems. Simon defines an artificial system as “a system that has been brought into being by human design.” He argues that artificial systems are not just imitations of natural systems, but are in fact a distinct class of systems with their own properties and laws.

Simon then goes on to discuss the various aspects of artificial systems that can be studied scientifically. He covers topics such as the design of algorithms, the nature of problem solving, and the role of heuristics in decision making. He also discusses the relationship between artificial systems and human intelligence.”

Very timely, then. But I’m hard pushed to summarise what the conclusions are, and allergic to the strong positivism of it. As readers of Cogs and Monsters will know, I think economic systems differ from engineering systems in important ways, and are not amenable to exactly the same kind of ‘scientific’ analysis. The ‘sciences of the artificial’ seem like they do well in analysis of algorithmic systems, but not so much – contrary to the claim in the book – for education, art and architecture, or indeed economics.


Internet empires – their rise and decline?

Has the American Empire simply moved online? That’s the argument made in an enjoyable polemic by Sean Ennis, Internet Empire: The Hidden Digital War. It’s a book with two strands. One is about wars and empires through history: what motivates conflict, how empires grab territory when the economic advantages outweigh the costs of maintaining the colonies, and why empires either collapse or survive.

This is braided with an account of how the US (and, thanks to protection of its domestic market, China) won near-global dominance of the internet and the money to be made from the internet for its own companies. Marvellous technology, an economic system favouring enterprise and investment, and active policy support from successive US governments have created the market-dominant players who shape modern life.

Hence, “The core thesis of this book is that the modern-day internet structure is economically equivalent to what, in prior times, would have been an empire acquired through aggression into new territories.” The aggression this time has involved weapons such as effective lobbying/political blackmail over tax and trade policies, control of domain names, non-enforcement of antitrust policies to enable the giants to grow, and so on. It’s an interesting analogy although I’m not persuaded that commercial and actual conflict/conquest are really similar.

The US succeeded where France’s earlier Minitel system did not, the book argues, because Minitel was a closed interface system run by a state-owned incumbent telco – whereas in the US, AOL tried this approach in the early internet days but dropped it when the attractions of the open internet to users became both evident and available through browsers and the web. “The unquestionable French lead in the release of digital technology was squandered by the country.”

This prompts two reflections. One is that the book – in asking why Europe has no internet giants – ignores the advantage of scale. When there are high fixed costs and network effects, the bigger the addressable market the better.

The other is that if open beats closed in the end, are the current internet giants undermining their own success? For what they are trying to do is tie users in ever more tightly, exploiting this captive market to degrade their services – just think how much search results have deteriorated on Google or Amazon. Meanwhile Mr Musk is similarly degrading the attractiveness of Twitter. The width of the open goal they are presenting to newcomers with their own great technology is increasing by the day. Regulators and competition authorities can help by mandating more, much more, open data and interoperability – as the Bundeskartellamt seems to be doing.

Still, as the book concludes, we can all do something. It ends with a list of To Dos: use multiple platforms – click away from Google. Buy direct from sellers even if it’s a bit less convenient. Go to the local high street to shop. Pay for a newspaper. In short, give up just a bit of the convenience and cost-saving to keep the digital giants on their toes and maybe open the way for new ones to come along. I’m sceptical that individual action will make a big difference, although I’m happy to encourage the use of Hirschman’s Exit and Voice disciplines. It’s going to take policy choices, and almost certainly by the EU, to reshape digital markets.
