Another of the many books out on AI is As If Human: Ethics and Artificial Intelligence by Nigel Shadbolt and Roger Hampson. I found this a very accessible book on AI ethics, possibly because neither author is an academic philosopher (sorry, philosophers…). Generally I'm a bit impatient with AI ethics, partly because it has dominated debate about AI at the expense of thinking about incentives and politics, and partly because of my low tolerance for the kind of bizarre thought experiments that seem to characterise the subject. Nevertheless, I found this book clear and pretty persuasive, with the damn trolley problem appearing only a handful of times.
The key point is reflected in the title: “AIs should be judged morally as if they were humans” (although of course they are not). This implies that any decisions made by machines affecting humans should be transparent, accountable and open to appeal and redress; we should treat AI systems as if they were humans taking the decisions. There may be contested accountability beyond that among other groups of humans – the car manufacturer and the insurer, say – but ‘the system’ can’t get away with no moral accountability. (Which touches on a fantastic new book by Dan Davies, The Unaccountability Machine, which I will review here soon.)
Shadbolt and Hampson end with a set of seven principles for AIs, including ‘A thing should say what it is and be what it says’ and ‘Artificial intelligences are only ethical if they embody the best human values’. They also argue that private mega-corps should not be determining the future of humanity. As they say, “Legal pushback by individuals and consumer groups against the large digital corporations is surely coming.”
As If Human is well worth a read. It’s short and high level, but it does have examples (and includes a point that seems obvious but that I have too rarely seen made: that the insurance industry is steadily putting itself out of business by progressively reducing risk pooling through its use of data).
“This implies that any decisions made by machines affecting humans should be transparent, accountable and open to appeal and redress”
But how do you hold a machine accountable? It can’t be punished in any normal sense of the word. Jailing or executing an AI makes no sense. Or does it?