Thinking about AI

There are several good introductions to AI; the three I’ve read complement each other well. Hannah Fry’s Hello World is probably the best place for a complete beginner to start. As I said in reviewing it, it’s a very balanced introduction. Another is Pedro Domingos’s The Master Algorithm, which is more about how machine learning systems work, with a historical perspective covering different approaches, and a hypothesis that in the end they will merge into one unified approach. I liked it too, but it’s a denser read.

Now I’ve read a third, Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. It gives a somewhat different perspective by describing wonderfully clearly how different AI applications actually work, and hence helps the reader understand their strengths and limitations. I would say these are the most illuminating simple yet meaningful explanations I’ve read of – for example – reinforcement learning, convolutional neural networks, word vectors etc. I wish I’d had this book when I first started reading some of the AI literature.

One thing that jumps out from the crystal clear explanations is how dependent machine learning systems are on humans – from the many who spend hours tagging images to the super-skilled ‘alchemists’ who are able to build and tune sophisticated applications: “Often it takes a kind of cabbalistic knowledge that students of machine learning gain both from their apprenticeships and from hard-won experience.”

The book starts with AI history and background, then covers image recognition and similar applications. It moves on to issues of ethics and trust, and then natural language processing and translation. The final section addresses the question of whether artificial general intelligence will ever be possible, and how AI relates to knowledge and to consciousness. These are open questions, though I lean toward the view – as Mitchell does – that there is something important about embodiment for understanding in the sense that we humans mean it. Mitchell argues that deep learning is currently hitting a ‘barrier of meaning’, while being superb at narrowly defined tasks of a certain kind. “Only the right kind of machine – one that is embodied and active in the world – would have human level intelligence in its reach. … after grappling with AI for many years, I am finding the embodiment hypothesis increasingly compelling.”

The book then ends with brief reflections on a series of questions – when will self-driving cars be common, will robots take all the jobs, what are the big problems left to solve in AI.

Together, the three books complement each other and stand as an excellent introduction to AI, cutting through both the hype and the scary myths, explaining what the term covers and how the different approaches work, and raising some key questions we will all need to be thinking about in the years ahead. The AI community is well-served by these thoughtful communicators. A newcomer to the literature could read these three and be more than well-enough informed; my recommended order would be Fry, Mitchell, Domingos. Others may know of better books of course – if so, please do comment & I’ll add an update.

UPDATE

The good folks on Twitter have recommended the following:

Human Compatible by Stuart Russell (I briefly forgot how much I enjoyed this.)

Rebooting AI by Gary Marcus

Parallel Distributed Processing by David Rumelhart & others (looks technical….)

The Creativity Code by Marcus du Sautoy

Machine, Platform, Crowd by Andrew McAfee & Erik Brynjolfsson (also reviewed on this blog)

Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (a key textbook)

The gorilla problem

I was so keen to read Stuart Russell’s new book on AI, Human Compatible: AI and the Problem of Control, that I ordered it three times over. I’m not disappointed (although I returned two copies). It’s a very interesting and thoughtful book, and has some important implications for welfare economics – an area of the discipline in great need of being revisited, after years – decades – of a general lack of interest.

The book’s theme is how to engineer AI that can be guaranteed to serve human interests, rather than taking control and serving the specific interests programmed into its objective functions and rewards. The control problem is much-debated in the AI literature, in various forms. AI systems aim to achieve a specified objective given what they perceive from data inputs, including through sensors. How to control them is becoming an urgent challenge – as the book points out, by 2008 there were more objects than people connected to the internet, giving AI systems ever more extensive access to the real world, for both input and output. The potential of AI lies in machines’ scope to communicate – machines can do better than any number n of humans because they access information that isn’t kept in n separate brains and communicated imperfectly between them. Humans have to spend a ton of time in meetings; machines don’t.

Russell argues that the AI community has been too slow to face up to the probability that machines as currently designed will gain control over humans – keep us at best as pets and at worst create a hostile environment for us, driving us slowly extinct, as we have gorillas (hence the gorilla problem). Some of the solutions proposed by those recognising the problem have been bizarre, such as ‘neural lace’ that permanently connects the human cortex to machines. As the book comments: “If humans need brain surgery merely to survive the threat posed by their own technology, perhaps we’ve made a mistake somewhere along the line.”

He proposes instead three principles to be adopted by the AI community:

  • the machine’s only objective is to maximise the realisation of human preferences
  • the machine is initially uncertain about what those preferences are
  • the ultimate source of information about human preferences is human behaviour.

He notes that AI systems embed uncertainty except in the objective they are set to maximise. The utility function and cost/reward/loss function are assumed to be perfectly known. This is an approach shared of course with economics. There is a great need to study planning and decision making with partial and uncertain information about preferences, Russell argues. There are also difficult social welfare questions. It’s one thing to think about an AI system deciding for an individual but what about groups? Utilitarianism has some well-known issues, much chewed over in social choice theory. But here we are asking AI systems to (implicitly) aggregate over individuals and make interpersonal comparisons. As I noted in my inaugural lecture, we’ve created AI systems that are homo economicus on steroids and it’s far from obvious this is a good idea. In a forthcoming paper with some of my favourite computer scientists, we look at the implications of the use of AI in public decisions for social choice and politics. The principles also require being able to teach machines (and ourselves) a lot about the links between human behaviour, the decision environment and underlying preferences. I’ll need to think about it some more, but these principles seem a good foundation for developing AI systems that serve human purposes in a fundamental way, rather than in a short-term instrumental one.

Anyway, Human Compatible is a terrific book. It doesn’t need any technical knowledge and indeed has appendices that are good explainers of some of the technical stuff. I also like that the book is rather optimistic, even about the geopolitical AI arms race: “Human-level AI is not a zero sum game and nothing is lost by sharing it. On the other hand, competing to be the first to achieve human level AI without first solving the control problem is a negative sum game. The payoff for everyone is minus infinity.” It’s clear that to solve the control problem cognitive and social scientists and the AI community need to start talking to each other a lot – and soon – if we’re to escape the fate gorillas suffered at our hands.


Made by Humans: the AI condition is very human indeed

A guest review by Benjamin Mitra-Kahn, Chief Economist, IP Australia

There is a lot of press about the coming – or going – of artificial intelligence, and in Made by Humans: The AI Condition, Ellen Broad has written a short but comprehensive account of the state-of-play which deserves to be read by anyone wanting to know what is happening in AI today, certainly if you want to get in on the conversation.

The book is very contemporary, and if you haven’t had the time to attend every conference and workshop on AI since 2015, then you’re in luck. Broad has been to them all, and this book will catch you up on all the developments. The book also offers a series of insights into the challenges that AI and big data present – because it is about both – and the questions we should ask ourselves. These are not the humdrum questions such as who a self-driving car should choose to crash into (although a randomized element is suggested), but some bigger and much more interesting questions about whether we need to be able to inspect the algorithm that made the decision. Does the algorithm need to be open source, or does it need to be exposed to expert review to ensure best practice? And should the data that trained the algorithm be openly accessible or available for peer review? Using every example about data and AI from the last three years, Broad steps through the issues under the hood that are only now being thought about.

This naturally brings up the question of government regulation. This is something Broad has changed her mind about, which she discusses openly in a book that moves between a technology story, personal discovery and ethical discussions. There is a role for regulation, says Broad, and the fact that we don’t yet know what that regulation could be, or should be, is handled with some elegance. Technology is not a nirvana: computer code is sometimes held together with “peanut butter and goblins” and written by people who are busy, under-funded or just average. Simply aiming to ‘regulate AI’, however, is akin to wanting to regulate medicine: it is complex, dependent on whom you impact, their ability to engage, and the risks as well as the situation. It is ultimately a human-to-human decision. Not perhaps the argument one expects in a book on AI by the ex-director of policy for the Open Data Institute and previous head of the Australian Digital Alliance. But it is about humans, and the AI condition is about humanity – about fairness, intelligibility, openness and diversity, according to Broad.

The book finishes with US Senators questioning Facebook about Cambridge Analytica, and the recent implementation of the GDPR (data governance, not a new measure of GDP), which quickly dates the book, but that is a choice the author makes explicitly. This book is about the current conversation on big data and AI, and it is about participating in that conversation. It is not about the last 50 years of ethics and the history of computers. There is an urgency to the writing, and as someone interested in this, I found myself updated in places, and challenged in others. Reading this book will allow anyone to participate in the AI debate, knowing what Rahimi’s warning about alchemy and AI is, being able to discuss the problems around the COMPAS sentencing software, or seeing why Volkswagen’s pollution scandal was a data and software scandal first. If this is a conversation you want to engage with, Broad’s book is an excellent starting point and update.
