Science fiction authors have been warning us for decades of the dangers technology poses. Recently, advances in artificial intelligence and robotics seem to suggest that they have been right all along. Computers are now driving cars and piloting automated weapons systems. They are beating grandmasters at chess and Go, recognising faces and writing articles. They are even in our homes and on our phones, answering our queries, sending texts and setting alarms and reminders, and generally helping us to organise our lives. Surely then it won’t be long before they can do everything – and what then is to stop them from taking over?
But, as Steven Shwartz argues in this fine book, the dangers of AI are vastly overstated. Despite the impressive advances, there are numerous reasons why we should not worry about “evil robots” or “killer computers”. Chief among these is that the recent developments in AI are all “narrow”. This means that, astounding as it is that IBM’s Watson DeepQA program was able to beat two top quizzing champions on the TV programme Jeopardy!, that is about all it can do. Its expertise is confined to the type of pattern matching that allowed it to trounce its human opponents at this specific activity. The same program could not drive a car. But Watson DeepQA is not only limited to its specific field (language processing), it is limited within that field: it cannot, for instance, answer simple comprehension questions. The same may be said of facial recognition systems, which can pick out an individual from among thousands of others (mostly…), but cannot tell a cat from a dog. And if such systems lack full competence even within their specific fields, what hope is there that they will develop the sort of artificial “general” intelligence (AGI) that would allow them to function at a human level across a broad range of tasks? It is AGI, Shwartz argues, that the sci-fi authors worried about – but it is something we are nowhere near developing, and which we have good reason to think may in fact never arrive.
As a former academic specialising in AI, and the founder of several companies developing AI-driven applications, Shwartz is well placed to make such a claim. He walks the reader through each of the new developments – self-driving cars, natural language processing, facial recognition, and so on – explaining how each works and noting both its achievements and its limitations. All, he argues, lack the sort of “common sense” possessed by the average young child, which would be a prerequisite for AGI. This common sense is something we often take for granted. We’re not even talking, here, about the sort of general wisdom often embodied in proverbs – “a stitch in time saves nine”, “many hands make light work” – but a much more basic grasp of the world and the way it works. A young child will at some point learn “object permanence” and “intuitive physics”, which allow it to make presumptions and predictions: when its parent hides a ball, that the ball still exists; when the parent drops the ball, that it will bounce. It is this vast stock of basic knowledge that we pick up almost unthinkingly, but that a computer would have to have deliberately programmed in or acquire in some other way – and it is this problem that has so far defeated AI researchers. GPT-2, the much-vaunted AI program considered “too dangerous to release”, can produce articles and answer natural language questions, but will nonetheless flounder when asked to apply the most basic reasoning and inference to the texts it deals with. This is because it has not been designed to reason or infer, but merely to compile and parrot out phrases based on how often such word combinations appear together in its training data. In short, it has no power of reasoning, no knowledge, and certainly no common sense.
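The kind of statistical “parroting” described here can be illustrated with a toy sketch. The following is my own illustration, not anything from the book and nothing like GPT-2’s actual architecture: a bigram model that, given a word, simply emits whichever word most often followed it in its training text. It tracks co-occurrence counts only – there is no reasoning and no grasp of what the words mean.

```python
from collections import defaultdict, Counter

# A tiny "training corpus" (invented for illustration).
corpus = (
    "the parent drops the ball and the ball bounces "
    "the parent hides the ball and the ball still exists"
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(next_word("ball"))  # whichever word most often followed "ball"
```

A model like this can produce fluent-looking fragments, but ask it whether the hidden ball still exists and it has nothing to consult but word frequencies – which is precisely the limitation Shwartz presses against GPT-2.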
This is not to say that there are no dangers associated with AI, and Shwartz lays these out, too. For example, we need to better understand the potential biases inherent in the data we use to train AI when using it to suggest criminal sentences or assess loan applications (both of which are current practices). But the real danger of AI is not that it will one day outperform humans in all fields, let alone become sentient; it is that humans will come to rely on it without fully understanding the processes it embodies, or grant it autonomy that may lead to unforeseen harms. Such things are not inevitable, however, and it is ultimately down to people and governments to regulate and limit the applications of AI.
However, Evil Robots, Killer Computers, and Other Myths does more than debunk prevalent myths. It is a clear and concise account of recent developments in artificial intelligence, and as such serves as an excellent lay primer to the field. Given the complexity of the subject matter, technical explanations cannot be completely avoided, but these are conveyed with a minimum of jargon, and Shwartz does an excellent job of introducing the central concepts with admirable clarity, making the book an enjoyable and informative read. Highly recommended.
[Disclaimer: The above review was based on a complimentary review copy provided by the author]