It seems that everyone with a huge IQ has been busy recently warning about the dire consequences of unleashing Artificial Intelligence (AI) software onto an unsuspecting world. Stephen Hawking says it will be the end of mankind (ref), Elon Musk says that AI is an existential risk (ref) and Bill Gates simply calls it a threat (ref). Who am I to argue?

During the early 1980s I was lucky enough to be working at the Turing Institute in Glasgow (ref). We won a contract to solve a problem for NASA: building an autopilot for the Space Shuttle. We were told that it had 'the aerodynamics of a brick', and the space engineers were concerned about it getting bent on landing. We ended up solving the problem relatively easily, using an approach originally devised to get a computer to play noughts and crosses (ref).
This automatic rule induction may sound complicated, but the way it works is remarkably simple. What was good for us was that NASA thought it was fabulous, which gave us a great excuse to increase the size of the bill we gave them. It's worth noting that while other systems associated with the Space Shuttle may have failed, to the best of our knowledge the code written in Glasgow never made a mistake. An important point is that we didn't actually write the code. The code was written automatically by a computer. All we did was provide the computer with the training examples.

To understand how simple the approach is, imagine an industrial sorting problem where three types of vegetable (peas, carrots and sprouts) are coming down a conveyor and you want a machine that will spot which vegetable is which. Peas and sprouts differ in size but both are round, while carrots tend to be long and thin. So the first step might be to backlight the vegetables and capture a silhouette image of each. Next, use a digital camera and a computer to calculate the roundness of each object: try fitting a circle to the image of the object and see how well it fits. If it fits without much error then the object is round; otherwise it isn't. Then compute 'elongatedness', which might be calculated as the ratio of the longest diameter to the shortest diameter. For each example vegetable you end up with one measure of roundness, one of elongatedness and one of size. If you then tell the computer which example vegetable is which, the rule induction algorithm will crunch the data and automatically generate a rule set such as:

If elongated then it's a carrot
If round and small then it’s a pea
Else it’s a sprout
… and then use that to sort unknown vegetables.
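To make the idea concrete, here is a minimal sketch in Python of how such a rule set might be induced and then used. Everything in it is invented for illustration (the feature names, thresholds and measurements are made up, not taken from the NASA work or any real vision system): it greedily learns a list of single-feature threshold rules from labelled examples, which is only a toy cousin of real rule-induction algorithms.

```python
# Minimal rule-induction sketch: learn a decision list of single-feature
# threshold rules from labelled examples. All numbers are invented.

# Each example: (features, label); features are (elongatedness, size).
training = [
    ((4.0, 3.0), "carrot"), ((4.5, 2.5), "carrot"), ((3.8, 2.8), "carrot"),
    ((1.1, 1.0), "pea"),    ((1.0, 0.9), "pea"),    ((1.2, 1.1), "pea"),
    ((1.1, 3.0), "sprout"), ((1.0, 3.2), "sprout"), ((1.2, 2.9), "sprout"),
]
FEATURES = ["elongatedness", "size"]

def best_rule(examples):
    """Find the (feature, threshold, side) test that purely isolates
    the largest number of examples of a single class."""
    best = None
    for f in range(len(FEATURES)):
        values = sorted({feats[f] for feats, _ in examples})
        # Candidate thresholds: midpoints between consecutive observed values.
        for t in [(a + b) / 2 for a, b in zip(values, values[1:])]:
            for side in (">", "<="):
                covered = [lab for feats, lab in examples
                           if (feats[f] > t) == (side == ">")]
                if covered and len(set(covered)) == 1:   # pure split only
                    cand = (len(covered), f, t, side, covered[0])
                    if best is None or cand[0] > best[0]:
                        best = cand
    return best

def induce(examples):
    """Greedily build rules until only one class remains; it gets 'Else'."""
    rules = []
    while len({lab for _, lab in examples}) > 1:
        _, f, t, side, label = best_rule(examples)
        rules.append((f, t, side, label))
        examples = [(feats, lab) for feats, lab in examples
                    if not ((feats[f] > t) == (side == ">"))]
    rules.append((None, None, None, examples[0][1]))     # default class
    return rules

def classify(rules, feats):
    for f, t, side, label in rules:
        if f is None or (feats[f] > t) == (side == ">"):
            return label

rules = induce(training)
for f, t, side, label in rules:
    if f is None:
        print(f"Else it's a {label}")
    else:
        print(f"If {FEATURES[f]} {side} {t:.2f} then it's a {label}")

print(classify(rules, (4.2, 2.7)))   # an unseen, elongated vegetable -> carrot
```

The induced rules are readable, which is the point: unlike many learning methods, rule induction produces something a human can inspect, so nothing about the result looks like magic once you see it.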
This basic principle underlies most industrial vision systems, which do everything from screening for cancer cells to classifying fish or... sorting vegetables. It's really nothing more than very simple technology.

Rule induction is just one approach to Machine Learning, the topic that lies at the heart of many 'AI' systems. Other methods include Neural Networks, Reinforcement Learning, Logistic Regression and so on (ref). What every technique has in common is that they are all based on simple Bayesian statistics, where the key idea is that the world isn't random: its content is clumped and clustered. It's possible to give each cluster a name, and each cluster is then a 'generalisation' of all the training examples that contributed to it. If you have a pile of shoes in the bottom of a wardrobe and some item gets added then, statistically speaking, it's most likely to be yet another shoe on top of the existing shoe pile. This principle of similar stuff naturally clustering together can be applied to anything from advanced robotics to self-driving cars or predicting the weather. Gillian Mowforth (one of the owners at INDEZ) used machine learning technology to score credit cards in the first example of automated credit card scoring for banks (ref: Fintech before it was called Fintech).

The problem occurs when humans observe impressive results that appear to ape some aspect of human behaviour. Without appreciating exactly how the results were achieved, they invariably imagine that some 'intelligent' magic is at work. This happened to me. I remember being introduced to UNIX in the early 1980s and being fascinated to discover a programme called 'Doctor'. You could ask it questions and it would come back with remarkable, seemingly intelligent replies. I was somewhat surprised to find that the programme was only a hundred lines or so long. Importantly, once I had read and understood it, the replies no longer seemed 'intelligent'.
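'Doctor' was a descendant of Joseph Weizenbaum's ELIZA, and the whole trick fits in a few lines. The sketch below (in Python, with a made-up script of patterns; the original used its own, richer keyword script) shows the entire mechanism: match a keyword pattern, reflect the pronouns, and drop the rest of the sentence into a canned reply.

```python
import re

# Pronoun reflections so "my job" comes back as "your job".
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you",
           "you": "I", "your": "my", "are": "am"}

# A tiny script of (pattern, reply-template) pairs, invented for
# illustration; rules are tried in order, first match wins.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)\?",      "Why do you ask that?"),
]
DEFAULT = "Please, go on."

def reflect(text):
    return " ".join(REFLECT.get(w, w) for w in text.lower().split())

def respond(sentence):
    s = sentence.lower().strip(".!")
    for pattern, template in RULES:
        m = re.match(pattern, s)
        if m:
            return template.format(*[reflect(g) for g in m.groups()])
    return DEFAULT

print(respond("I am worried about my exams"))
# -> How long have you been worried about your exams?
```

There is no understanding anywhere in it, yet the replies feel uncannily attentive until you read the code, which is exactly the experience described above.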
The one characteristic shared by Hawking, Musk and Gates is that, while each may well be a hugely talented expert within his own field, it's worrying when each uses the platform his achievements have given him to pronounce on a subject whose technical implementation is not his bag. Perhaps it's all just a fear of the unknown. From my own viewpoint, so-called 'AI' is just part of the technology mix that we use to solve practical problems, where it's invariably the simplest of solutions that work best (ref). It's likely that much of the problem lies not with the technology but with the name we have given it.