The future is multiple, not singular.

I’ve long enjoyed the blog posts by Richard Jones on economic productivity and growth – his perspective from physics is always interesting. Having met him in real life for the first time this past week, I also downloaded his free e-book Against Transhumanism (download here) – a brief, compelling demolition of the idea that digital technology is hurtling us towards a ‘singularity’. The most famous transhumanist is Ray Kurzweil, I suppose, of [amazon_link id=”0715635611″ target=”_blank” ]The Singularity is Near[/amazon_link]. Prof Jones points out that:

a) exponential growth (as per Moore’s Law) cannot deliver a singularity: an exponential function remains finite at every finite time, so a true singularity would require the rate of technological improvement itself to keep accelerating without limit. That seems a stretch, looking at either current productivity figures or any history at all. (A quick mathematical sketch follows below these three points.)

b) transhumanism is an apocalyptic religion, not a scientific theory.

c) To quote the e-book: “The idea that history is destiny has proved to be an extremely bad one, and I don’t think the idea that technology is destiny will necessarily work out that well either. I do believe in progress, in the sense that I think it’s clear that the material conditions are much better now for a majority of people than they were two hundred years ago. But I don’t think the continuation of this trend is inevitable. I don’t think the progress we’ve achieved is irreversible, either, given the problems, like climate change and resource shortages, that we have been storing up for ourselves in the future. I think people who believe that further technological progress is inevitable actually make it less likely – why do the hard work to make the world a better place, if you think that these bigger impersonal forces make your efforts futile?”
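To make point a) concrete, here is a minimal mathematical sketch in my own notation (none of it is taken from the e-book): exponential growth never blows up in finite time, whereas a genuine singularity requires super-exponential growth, in which the growth rate itself rises with the level.

```latex
% Exponential (Moore's Law-style) growth: finite at every finite time t,
% so it cannot produce a mathematical singularity.
x(t) = x_0 e^{kt}, \qquad x(t) < \infty \quad \text{for every finite } t.

% A finite-time singularity needs the growth rate to accelerate with the
% level itself, e.g. hyperbolic growth:
\frac{dx}{dt} = k\,x^2
\quad\Longrightarrow\quad
x(t) = \frac{x_0}{1 - k\,x_0\,t},
% which diverges as t approaches t^* = 1/(k x_0) -- the "singularity".
```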

It’s well worth a read, along with the Soft Machines blog. There is a super-clear explanation of the implications of nanotechnology, as you might expect from the author of the [amazon_link id=”0199226628″ target=”_blank” ]Soft Machines[/amazon_link] book.

[amazon_image id=”0198528558″ link=”true” target=”_blank” size=”medium” ]Soft Machines: Nanotechnology and Life[/amazon_image]

There’s also a chapter on why it’s unlikely you’ll ever be able to upload your brain to the cloud. Above all, though, the book explains why transhumanism is a Dangerous Idea. The idea of a Singularity has been described as the ‘Rapture of the Nerds’ (attributed to [amazon_link id=”1857238338″ target=”_blank” ]Ken MacLeod[/amazon_link]), which makes it sound like the lunatic fringe. But as Prof Jones points out, the Silicon Valley crowd are seriously influential; and their view that technology has its own irresistible dynamic – this techno-determinism – elbows aside the truth that the results of technological discovery are socially determined: “Why would you want to think of technology, not as something that is shaped by human choices, but as an autonomous force with a logic and direction of its own? Although people who think this way may like to think of themselves as progressive and futuristic, it’s actually a rather conservative position, which finds it easy to assume that the way things will be in the future is inevitable and always for the best.”

Written by a physicist, but thinking like a true social scientist. The future is multiple, not singular.

Robots, humans and other animals

John Markoff’s [amazon_link id=”0062266683″ target=”_blank” ]Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots[/amazon_link] ends with a reference to Thorstein Veblen’s [amazon_link id=”123033128X″ target=”_blank” ]The Engineers and the Price System[/amazon_link] (not a book I’ve read – I’ve always found Veblen really heavy going). Apparently Veblen argued that the increasing technological complexity of society would give political power to the engineers. Markoff draws the analogy with the central role of algorithms in modern life: “Today the engineers who are designing the artificial intelligence-based programs and robots will have tremendous influence over how we use them.”

[amazon_image id=”0062266683″ link=”true” target=”_blank” size=”medium” ]Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots[/amazon_image]  [amazon_image id=”1614273707″ link=”true” target=”_blank” size=”medium” ]The Engineers and the Price System[/amazon_image]

[amazon_link id=”0062266683″ target=”_blank” ]Machines of Loving Grace[/amazon_link] is a history of the tension between artificial intelligence (AI) research, which substitutes robots for human activity, and ‘intelligence augmentation’ (IA), which complements human skills. It is also a call for those engineers to ensure their work is human-centred. It’s all about the humans, not about the machines, Markoff concludes. The book dismisses what he calls the ‘Apocalyptic AI’ tradition embraced by people like Ray Kurzweil and Hans Moravec, who look forward to the Singularity, the [amazon_link id=”1503262421″ target=”_blank” ]Frankenstein[/amazon_link] moment when our machine intelligence creation becomes conscious and alive. Yet Markoff worries about the failure of the ‘AI’ (rather than ‘IA’) researchers to stay alert to the dangers of not writing people into the algorithmic script.

[amazon_image id=”0141439475″ link=”true” target=”_blank” size=”medium” ]Frankenstein: Or, the Modern Prometheus (Penguin Classics)[/amazon_image]  [amazon_image id=”1614275025″ link=”true” target=”_blank” size=”medium” ]Cybernetics: Second Edition: Or the Control and Communication in the Animal and the Machine[/amazon_image]  [amazon_image id=”0691168423″ link=”true” target=”_blank” size=”medium” ]The Butterfly Defect: How Globalization Creates Systemic Risks, and What to Do about It[/amazon_image]

The danger has always been apparent. Norbert Wiener’s [amazon_link id=”1614275025″ target=”_blank” ]Cybernetics[/amazon_link] “posed an early critique of the arrival of machine intelligence: the danger of passing decisions on to systems that, incapable of thinking abstractly, would make decisions in purely utilitarian terms rather than in consideration of richer human values.” (A comment that struck me because economics is of course purely utilitarian and notorious for setting the ‘richer human values’ aside.) Another danger is pointed out later in the book, attributed here to Alan Kay: that relying on machines “might only recapitulate the problem the Romans faced by letting their Greek slaves do their thinking for them. Before long, those in power were no longer able to think independently.” Markoff cites evidence that reliance on GPS is eroding memory and spatial reasoning. There is also, surely, the problem Ian Goldin underlines in his book [amazon_link id=”B00SLUBSJ8″ target=”_blank” ]The Butterfly Defect[/amazon_link]: that greater reliance on complex networks means greater vulnerability when they go wrong, or are attacked.

[amazon_image id=”B00IIB2CUY” link=”true” target=”_blank” size=”medium” ]The Coming Of Post-industrial Society (Harper Colophon Books) by Bell, Daniel (1976) Paperback[/amazon_image]

To go back to the Veblen point, his was a political argument in the Progressive era. Accumulations of political power, via ownership of assets including technology and skills, always trigger political struggles. Daniel Bell made a similar point in [amazon_link id=”B00IIB2CUY” target=”_blank” ]The Coming of Post-Industrial Society[/amazon_link] – that the political faultline of the post-industrial age would be technocratic expertise versus populist demands. Perhaps he was too early: we seem to be deep into a populist backlash against the technologists right now. But for me the question isn’t so much whether the robots are human-friendly as whether the political and economic structures within which technological advance occurs are human-friendly. It isn’t looking promising.

Anyone prompted to mull over the question of what makes a silicon-based non-human being intelligent should read this wonderful article about carbon-based non-human intelligence. If it’s ever a case of us against the machines, we’ll have the dogs, dolphins and chimpanzees on our side.

Of robots and dogs

I’m part way through [amazon_link id=”0062266683″ target=”_blank” ]Machines of Loving Grace[/amazon_link] by John Markoff, which is about whether ‘robots’ spell automation (substitutes for humans) or augmentation (complements to humans), and the history of the tension within the field of AI between these strands. A review will follow in a couple of days. But one sentence early in the book stopped me short:

“Humans appear to want to believe they are interacting with humans even when they are conversing with machines. We are hardwired for social interaction.” [my italics]

[amazon_image id=”0062266683″ link=”true” target=”_blank” size=”medium” ]Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots[/amazon_image]

So given that we are building the machines, why can’t we hardwire them for social interaction too? I want Siri to love me as much as my dog loves me – and is inclined to love other humans apart from the postman.

Dog of loving grace

The stuff beneath the cloud

The physical location of the Internet has always fascinated me, so I’ve been enjoying reading [amazon_link id=”0262029510″ target=”_blank” ]A Prehistory of the Cloud[/amazon_link] by Tung-Hui Hu (a network engineer turned English professor). The first half of the book is pretty much entirely about the physical infrastructure and the mismatch between our idea of ‘the cloud’ as something dematerialised operating to new social/political rules and its reality in cables and buildings in specific places. The introduction starts: “A multi-billion dollar industry that claims 99.999 percent reliability breaks far more often than you’d think because it sits on top of a few brittle fibres the width of a few hairs.” It goes on to say that the idea of the cloud as a metaphor for society or organising principle for the economy is sometimes uncomfortably at odds with the material, technological platform. Indeed, he points out, the cloud was responsible for 2 per cent of the world’s greenhouse gas emissions in 2008, since when there has been a rapid increase in the number of data centres.

[amazon_image id=”0262029510″ link=”true” target=”_blank” size=”medium” ]A Prehistory of the Cloud[/amazon_image]

The book then turns to the geography of US fibre optic cables, laid under old railroad tracks: an old-economy, centralised network beneath the cloud as “a vision of globalization that follows the dictates of a multinational corporation – a coalition of geographic areas that move capital and resources through the most efficient path.” We have the impression of the internet as a decentralized network, but it is not – the idealised distributed network described in a famous 1962 article by Paul Baran was never built. The idea that the network got its shape because of the threat of nuclear war is a kind of ‘how the leopard got its spots’ [amazon_link id=”1405279613″ target=”_blank” ]Just So[/amazon_link] story. “Virtually all traffic on the US Internet runs across the same routes established in the 19th century.” (The same is true in the UK, with the east and west coast spines.) Interoperability via IP has only increased the concentration of power, the book argues. Six telcos control the US internet; fewer still in the UK. Yet there is “a collective desire to keep the myth alive despite evidence to the contrary.” This chapter ends with an intriguing section on the inherent paranoia of seeing the world through a network lens. If the system is a logical construction overlaid on a physical network, anything and everything can become part of it: the cloud has nebulous edges. Therefore anything and everything – or everyone – can cause breaks or errors.

The second chapter discusses time sharing and virtualization in the cloud – the creation of the illusion that we each have our own private part of it. The book presents this as part of a shift away from waged labour toward a flexible economy with a nebulous boundary between paid and unpaid work: “By positioning users as intimate partners of the computer, time sharing yoked users to a political economy that made users synonymous with their usage and allowed them (or their advertising sponsors) to be tracked, rented or billed.” The concept of multi-tasking developed out of time-sharing computers, and now refers to flexible working more generally. “Real time actually functions as an ideology of economic productivity.” I am intrigued by the link between time and productivity – not new of course; think of EP Thompson’s famous article about industrial time keeping. Still, time spent using the computer is the work time now to be tracked. “The underlying logic of freeware capitalism is consumption – of time.” The chapter goes on to discuss the privacy debate as the result of the transformation of what had been envisioned as a public utility into a set of private ones, or gated communities, albeit ones created only by virtualization software. The book argues that concerns about privacy contribute to the logic of (dread word) “neoliberalism.”

The n-word put me off quite a lot, as it seems a pretty empty concept, and the third chapter vanishes down the rabbit hole of critical theory, although I like (again) its preoccupation with the built structures of the internet, the data centres – some in old military bunkers. “Computers, like horses, overheat when worked hard.” However, the chapter is about the links between cloud computing and the surveillance state. The basic point is the re-emergence (as if it ever went away) of the claims of sovereign territories and state power over the internet.

With the final chapter the book re-emerges from its rabbit hole, opening with a section on the use of big data as a tool of power. He notes: “Targeted marketing came out of the Eisenhower era science of geodemography … GIS was a by-product of the military’s need to convert populations into targetable spaces.” The chapter argues that opting out of the connected world is nearly impossible, so we ought to start by acknowledging its structural inequalities of power. “The cloud is a subtle weapon that translates the body into usable information.” Not just the body: everything, perhaps. Airbnb turns housing into a housing cloud; we have car clouds drifting around major cities, and labour markets that deliver “humans as a service.” Although I certainly do not see the world through the prism of neoliberalism (which no doubt confirms me as a neoliberal, as if being an economist wasn’t enough to do so), I found this book a very thought-provoking essay about the economic and political underpinnings of our connected lives. We surely need to have more discussion about the ownership of ‘the cloud’, its physical reality and energy consumption, and the political power that flows from its roots in those old railroads.

Thinking, learning and doing

James Bessen’s book [amazon_link id=”0300195664″ target=”_blank” ]Learning By Doing: The Real Connection between Innovation, Wages and Wealth[/amazon_link] is excellent. It strikes a balance between meaty analysis and description of historical episodes of technical change, and is at the same time very accessible.

[amazon_image id=”0300195664″ link=”true” target=”_blank” size=”medium” ]Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth[/amazon_image]

The book argues that it is important to distinguish between ideas, which can be codified and transmitted, and know-how, which is attached to workers and takes experience to accumulate. This is familiar – Paul Romer recently blogged about the role played by this distinction in his famous model. But Bessen adds that the distinction makes it important to consider the incentives for workers to invest in new skills so that new technologies can be implemented – the part played by these incentives is usually overlooked, yet it is crucial for forming views about the “future of work” when robots are ubiquitous.

He uses the historical examples to demonstrate that in the early stages of implementation of a technology, returns to workers with generally high skills will rise. They are able to make the adjustments and minor additional innovations that get the big innovation to work. During that period, the wages of ordinary workers stagnate – as in the famous Engels’ Pause (pdf). However, when the technology is thoroughly bedded in and the technical knowledge needed to work with it is standardised, ordinary workers have the incentive to invest in gaining skills and experience. A thick labour market develops. Workers are able to threaten credibly to switch jobs. Their real wages rise and the high-level skill premium narrows.

“The specific skills associated with a major new technology are not standardized at first, which limits the market. Initially, these skills are always limited to specific employers.”

He emphasises that the necessary technical knowledge needs to be standardised too – the example he uses is the invention of the periodic table, which standardised the chemical knowledge workers in the growing industry needed and made it easier to teach.

Bessen then uses more recent examples to demonstrate that, with digital technologies, this standardisation of technology, tasks and skills has not yet happened. His main example is digital publishing, where the specifics of the technology are still changing, and with them the specific technical knowledge and experience needed.

In addition to the development of a standardised know-how labour market on both the demand and supply sides, Bessen points out that new technologies can also raise demand and employment in existing work. His example here is the increase in the number of bank tellers (still going on) even as the number of ATMs grew rapidly, with the humans’ tasks changing to focus on customer relations, and the number of bank branches and transactions increasing. This is not the case with all technologies – the job market for people making oil lanterns is tiny – but the book suggests it happens more often than one would think.

The book ends with policy reflections, of which the most interesting concerns education. The reasoning about standardised knowledge, and about the importance of experience as a technology matures, points to the need for skills-focused education rather than piling as many young people as possible through conventional academic tertiary education. Bessen argues that demands to require a university degree for vocational jobs such as nursing or medical assistance represent a form of job protectionism. He also – along with many other scholars – points to the dysfunctional nature of the patent/copyright system as it operates now, especially in the US.

This is a very US-focused book, but none the less interesting for that. This review has skimmed over the top of the argument; I’d strongly recommend the book to anyone interested in the automation/inequality/employment issues. It is a broadly optimistic perspective but does underline the length of the transition and the likely impact on individuals. All the more reason to pay attention to the policy implications.