John Markoff’s [amazon_link id=”0062266683″ target=”_blank” ]Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots[/amazon_link] ends with a reference to Thorstein Veblen’s [amazon_link id=”123033128X” target=”_blank” ]The Engineers and the Price System[/amazon_link] (not a book I’ve read – I’ve always found Veblen really heavy going). Apparently Veblen argued that the increasing technological complexity of society would give political power to the engineers. Markoff draws the analogy with the central role of algorithms in modern life: “Today the engineers who are designing the artificial intelligence-based programs and robots will have tremendous influence over how we use them.”
[amazon_image id=”0062266683″ link=”true” target=”_blank” size=”medium” ]Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots[/amazon_image] [amazon_image id=”1614273707″ link=”true” target=”_blank” size=”medium” ]The Engineers and the Price System[/amazon_image]
[amazon_link id=”0062266683″ target=”_blank” ]Machines of Loving Grace[/amazon_link] is a history of the tension between artificial intelligence (AI) research, which substitutes robots for human activity, and ‘intelligence augmentation’ (IA), which complements human skills. It is also a call for those engineers to ensure their work is human-centred. It’s all about the humans, not about the machines, Markoff concludes. The book dismisses what he calls the ‘Apocalyptic AI’ tradition embraced by people like Ray Kurzweil and Hans Moravec, which looks forward to the Singularity, the [amazon_link id=”1503262421″ target=”_blank” ]Frankenstein[/amazon_link] moment when our machine intelligence creation becomes conscious and alive. Yet Markoff worries about the failure of the ‘AI’ (rather than ‘IA’) researchers to stay alert to the dangers of not writing people into the algorithmic script.
[amazon_image id=”0141439475″ link=”true” target=”_blank” size=”medium” ]Frankenstein: Or, the Modern Prometheus (Penguin Classics)[/amazon_image] [amazon_image id=”1614275025″ link=”true” target=”_blank” size=”medium” ]Cybernetics: Second Edition: Or the Control and Communication in the Animal and the Machine[/amazon_image] [amazon_image id=”0691168423″ link=”true” target=”_blank” size=”medium” ]The Butterfly Defect: How Globalization Creates Systemic Risks, and What to Do about It[/amazon_image]
The danger has always been apparent. Norbert Wiener’s [amazon_link id=”1614275025″ target=”_blank” ]Cybernetics[/amazon_link] “posed an early critique of the arrival of machine intelligence: the danger of passing decisions on to systems that, incapable of thinking abstractly, would make decisions in purely utilitarian terms rather than in consideration of richer human values.” (A comment that struck me because economics is of course purely utilitarian and notorious for setting the ‘richer human values’ aside.) Another danger is pointed out later in the book, attributed here to Alan Kay: that relying on machines “might only recapitulate the problem the Romans faced by letting their Greek slaves do their thinking for them. Before long, those in power were no longer able to think independently.” Markoff cites evidence that reliance on GPS is eroding memory and spatial reasoning. There is also, surely, the problem Ian Goldin underlines in his book [amazon_link id=”B00SLUBSJ8″ target=”_blank” ]The Butterfly Defect[/amazon_link]: that greater reliance on complex networks means greater vulnerability when they go wrong, or are attacked.
[amazon_image id=”B00IIB2CUY” link=”true” target=”_blank” size=”medium” ]The Coming Of Post-industrial Society (Harper Colophon Books) by Bell, Daniel (1976) Paperback[/amazon_image]
To go back to the Veblen point: his was a political argument in the Progressive era. Accumulations of political power, via ownership of assets including technology and skills, always trigger political struggles. Daniel Bell made a similar point in [amazon_link id=”B00IIB2CUY” target=”_blank” ]The Coming of Post-Industrial Society[/amazon_link] – that the political faultline of the post-industrial age would be technocratic expertise versus populist demands. Perhaps he was too early: we seem to be deep into a populist backlash against the technologists right now. But for me the question isn’t so much whether the robots are human-friendly as whether the political and economic structures within which technological advance occurs are human-friendly. It isn’t looking promising.
Anyone prompted to mull over the question of what makes a silicon-based non-human being intelligent should read this wonderful article about carbon-based non-human intelligence. If it ever comes to us against the machines, we’ll have the dogs, dolphins and chimpanzees on our side.