5 Pupils vs Machine
5.6 More on this topic
Nim is a relatively simple game with an obvious winning strategy. If the number of pens used to play the game is a multiple of 4, the second player has a winning strategy. The mathematical theorem would be as follows:
If you play the game Nim with n pens, where n is a multiple of 4, the second player has a winning strategy.
This is not very difficult to prove. Let’s assume you play Nim with only 4 pens. The first player must take away at least 1 pen, so that 1, 2 or 3 pens remain. The second player can now take the remaining pens and win the game. So, if there are only 4 pens on the table and it is the first player’s turn, the second player wins.
Let’s now assume that there are 8 pens on the table. The first player must take away at least 1 pen, so there are 5, 6 or 7 pens left. The second player now takes exactly enough pens (1, 2 or 3) so that only 4 remain. With 4 pens on the table, it is the first player’s turn. As seen before, this gives the second player a winning strategy.
The proof can now be continued in this way indefinitely. With 12 pens, the second player can always bring the game back to 8 pens with the first player to move; with 8 pens, we already know that the second player has a winning strategy. With 16 pens, the second player can reach 12 pens, and so on. This type of proof is known in mathematics as proof by induction.
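To make the strategy concrete, here is a short Python sketch (not part of the original activity; the function name is ours). For any number of pens it returns the move the player to move should make, using the rule from the proof above: always leave a multiple of 4 for the opponent.

```python
def winning_move(pens):
    """Return how many pens (1, 2 or 3) the player to move should take,
    or None if the position is lost against perfect play."""
    remainder = pens % 4
    if remainder == 0:
        # A multiple of 4: whatever we take, the opponent can restore
        # a multiple of 4, so the player to move loses with perfect play.
        return None
    # Otherwise, taking `remainder` pens leaves a multiple of 4 behind.
    return remainder

for pens in range(1, 13):
    move = winning_move(pens)
    if move is None:
        print(f"{pens:2d} pens: the player to move loses against perfect play")
    else:
        print(f"{pens:2d} pens: take {move} pen(s), leaving {pens - move}")
```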
The same kind of artificial intelligence also works for other games. The basic method stays the same, except that Nim is a very simple and small game. It is simple in the sense that there is a clear winning strategy that can be stated and proven mathematically (see previous section). By ‘small game’, we mean that there aren’t many possibilities: if you only play with 8 pens, you have 8 states (the cups) with at most three different moves each (the tokens). Compared to other games, this is not much.
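The cup-and-token machine can also be simulated in a few lines of code. The sketch below is only one possible reading of the activity, with some assumptions of our own: the machine plays second, its opponent plays completely at random, and after a lost game every token the machine used is removed, keeping at least one token per cup so that it can always move.

```python
import random

PENS = 8            # starting number of pens
MOVES = (1, 2, 3)   # a player may take 1, 2 or 3 pens per turn

# One "cup" per number of pens the machine might face on its turn,
# initially holding one token for each legal move.
cups = {n: [m for m in MOVES if m <= n] for n in range(1, PENS)}

def random_opponent(n):
    """The opponent in this sketch simply plays at random."""
    return random.choice([m for m in MOVES if m <= n])

def play_one_game():
    """The opponent moves first; the machine answers by drawing a token from the
    cup for the current number of pens. Whoever takes the last pen wins."""
    n, history = PENS, []
    while True:
        n -= random_opponent(n)
        if n == 0:
            return False, history       # opponent took the last pen: machine lost
        move = random.choice(cups[n])   # machine's turn: draw a token
        history.append((n, move))
        n -= move
        if n == 0:
            return True, history        # machine took the last pen: machine won

def train(games=1000):
    results = []
    for _ in range(games):
        won, history = play_one_game()
        if not won:
            # Learning step: remove the tokens used in the lost game,
            # but keep at least one token per cup so the machine can still play.
            for pens, move in history:
                if len(cups[pens]) > 1 and move in cups[pens]:
                    cups[pens].remove(move)
        results.append(won)
    return results

results = train()
print("wins in the first 100 games:", sum(results[:100]))
print("wins in the last 100 games:", sum(results[-100:]))
print("tokens left in each cup:", {n: cups[n] for n in sorted(cups)})
```

Typically, the machine loses far more often at the start of training than at the end: the tokens that stood for losing moves have gradually been removed.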
In 1961, AI researcher Donald Michie built an AI from 304 matchboxes that learned how to play noughts and crosses. He called it MENACE (Machine Educable Noughts and Crosses Engine). The matchboxes are stuck together to form what looks like a set of drawers, as shown in the picture below:
[Picture of MENACE (source: https://www.mscroggs.co.uk/blog/19)]
Each box shows a noughts and crosses configuration (there are of course many more than 304 noughts and crosses configurations, but symmetries make it possible to limit yourself to 304), and each box contains beads of different colours. Unlike with the Nim AI, beads are not only removed: beads that had a positive effect can also be added, to make those successful strategies more likely. The CNRS webpage Images des mathématiques has published an excellent article about MENACE. The American mathematics writer Martin Gardner has also written a very interesting and reader-friendly article about MENACE and similar AI machines (Gardner, 1962).
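In code, the only difference to the Nim machine is the learning step. The sketch below shows MENACE’s bookkeeping only (the game logic is omitted); the reinforcement values of three extra beads for a win, one for a draw and removal of the played bead after a loss are the ones usually reported for Michie’s machine, and the constant number of starting beads per move is a simplification of ours.

```python
import random
from collections import Counter

boxes = {}  # board configuration -> Counter of beads, one colour per possible move

def get_box(config, legal_moves, start_beads=4):
    """Create the matchbox for a configuration the first time it is encountered."""
    if config not in boxes:
        boxes[config] = Counter({move: start_beads for move in legal_moves})
    return boxes[config]

def choose(config, legal_moves):
    """Draw a bead at random: moves with more beads are proportionally more likely.
    (If a box ever runs out of beads entirely, the real MENACE simply resigns.)"""
    box = get_box(config, legal_moves)
    moves, beads = zip(*box.items())
    return random.choices(moves, weights=beads)[0]

def learn(history, outcome):
    """Reinforce every (configuration, move) pair MENACE played in one game."""
    delta = {"win": 3, "draw": 1, "loss": -1}[outcome]
    for config, move in history:
        box = boxes[config]
        box[move] = max(box[move] + delta, 0)   # a box never holds negative beads
```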
In 2017, MENACE was recreated at the Manchester Science Festival. After just a short time, it managed to win or at least draw against the visitors.
The French mathematician Aline Parreau has written a book that plays noughts and crosses against its reader and never loses.
AI can also be used in a similar way to program more complex games such as chess or Go. These games are more complex for two reasons. First, the number of possible states and moves is much higher than in Nim or noughts and crosses: if you were to program chess with a cup system, you would need approximately 10⁵⁰ cups, and Go is even more complex than chess. Computer scientist Claude Shannon calculated that there are around 10¹²⁰ possible games of chess; for Go, this number rises to about 10³⁰⁰ (du Sautoy, 2019). Second, no simple winning strategy is known for chess or Go, unlike for Nim or noughts and crosses, so an AI cannot be taught to play these games in exactly the same way. This calls for more complex AI methods that find patterns and rules without running through all of the possibilities, such as neural networks.
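A neural network, at its core, is just a function with many adjustable parameters that maps a position to a score. The toy sketch below (our own illustration, with made-up sizes and random, untrained weights) shows the shape of such an evaluation function for noughts and crosses; in systems like AlphaGo, networks of this kind have millions of parameters and are trained on huge numbers of positions rather than left random.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny evaluation network: 9 board cells -> 16 hidden units -> 1 score.
# The weights are random here; a real system would learn them from play.
W1, b1 = rng.normal(size=(16, 9)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def evaluate(board):
    """board: length-9 array with +1 for our marks, -1 for the opponent's, 0 for empty.
    Returns a score in (-1, 1); higher would mean the position looks better for us."""
    hidden = np.tanh(W1 @ board + b1)
    score = np.tanh(W2 @ hidden + b2)
    return score.item()

# Example: we hold the centre, the opponent holds one corner.
board = np.array([-1, 0, 0, 0, 1, 0, 0, 0, 0], dtype=float)
print(evaluate(board))   # meaningless until the weights have been trained
```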
Back in 1997, the computer ‘Deep Blue’ beat the then world chess champion, Garry Kasparov. Although this was a breakthrough, it was Go that presented the real challenge: many scientists were convinced that no computer would ever beat a human at Go. The British company DeepMind developed an AI, AlphaGo, that was capable of learning Go. It beat a professional player, Fan Hui, in 2015. Then in 2016, AlphaGo won against Lee Sedol, one of the best Go players in the world, before going on to beat the world’s top-ranked player, Ke Jie, in 2017. A particularly exciting description of this journey, from the idea for AlphaGo through to the victory in 2017, can be found in ‘The Creativity Code: Art and Innovation in the Age of AI’ by Marcus du Sautoy (du Sautoy, 2019). This book provides a good introduction to the world of AI. A more detailed course on AI is offered by the Inria MOOC (Massive Open Online Course) L’Intelligence Artificielle… avec intelligence !. For a more detailed understanding of neural networks and deep learning, we recommend the first chapter of the online book Neural Networks and Deep Learning (Nielsen, 2015).
Advances in artificial intelligence bring many benefits, but unfortunately they also have negative aspects. One problem with artificial intelligence is bias, which, in the worst case, leads to discrimination on the basis of ethnicity, gender, education, etc. Examples of this can be found in du Sautoy (2019). In the financial sector, AIs are used for tasks such as assessing the creditworthiness of loan applicants: an algorithm identifies patterns associated with past loan defaults. If customers match these patterns only by coincidence, the AI may arrive at the wrong conclusion. Artificial intelligence is also used in the legal field. In the US, AI systems make recommendations about the early release of prisoners. To learn the criteria for assessing eligibility, past decisions by judges are fed into the AI. However, those decisions may already be tainted by bias, since Black people tend to receive harsher sentences on average. The AI therefore learns from data that is unfair, and this bias is then reflected in its own recommendations.
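The mechanism behind such bias can be shown with a deliberately crude toy example (all numbers invented): if a ‘model’ does nothing more than reproduce the approval rates found in historical, biased decisions, the unfairness in the data turns directly into unfairness in its recommendations.

```python
import random
random.seed(1)

# Invented historical loan decisions: applicants in groups A and B are equally
# creditworthy here, but group B was approved far less often in the past.
history = ([("A", random.random() < 0.8) for _ in range(1000)]
           + [("B", random.random() < 0.5) for _ in range(1000)])

def approval_rate(group):
    """A naive 'model': score applicants by their group's historical approval rate."""
    decisions = [approved for g, approved in history if g == group]
    return sum(decisions) / len(decisions)

model = {group: approval_rate(group) for group in ("A", "B")}
print(model)   # roughly {'A': 0.8, 'B': 0.5}: the historical bias reappears in the model
```

Real credit-scoring or sentencing systems are of course far more sophisticated, but the underlying problem is the same: a model trained on biased decisions will reproduce that bias.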
These issues are currently a hot topic in the media and are making some people afraid of artificial intelligence. However, despite the omnipresence of AI in our lives and the media, there is still very little understanding of what exactly artificial intelligence is and does. Irrational fear comes from not understanding something and can only be eliminated through education.
It is not about using AI indiscriminately or about glorifying it and quashing all criticism. Rather, it means developing a critical understanding of AI. Only by truly understanding AI will we be able to distinguish between the negative (or even dangerous) consequences and the positive aspects of AI programs.
As the article ‘Why it is best not to let the computer decide’ in PITT context(s) explains, you should certainly not let AI manage your entire life. But a critical understanding, rather than outright rejection born of irrational panic, also helps us to recognise its positive aspects.
This is a goal being pursued by Rediet Abebe, computer scientist and co-founder of Black in AI, who is using artificial intelligence to solve real-world problems (Crowell, 2021). She is also a co-founder of Mechanism Design for Social Good, an organisation that brings together researchers from different disciplines, policy-makers and business representatives to promote justice and social well-being for marginalised groups, particularly in education, employment and housing.
AI is also playing a bigger and bigger role in research itself. AI-driven research is producing some very positive results, especially in the areas of pattern recognition and medical diagnoses. In the following interview, Dr Laurent Mombaerts, a former PhD student at the University of Luxembourg, describes how he uses AI in his research.
References
Alexandre, Frédéric, Becker, Jade, Comte, Marie-Hélène, Lagarrigue, Aurélie, Liblau, Romain, Romero, Margarida & Viéville, Thierry. (2021). Why, What and How to help each Citizen to Understand Artificial Intelligence? KI – Künstliche Intelligenz, 2, 1610–1987.
Crowell, Rachel. (2021). A Computer Scientist Who Tackles Inequality Through Algorithms. Quanta Magazine. https://www.quantamagazine.org/rediet-abebe-tackles-inequality-with-computer-science-20210401/
Gardner, Martin. (1962). A Matchbox Game Learning-Machine. Scientific American, March 1962, 138–144.
Nielsen, Michael. (2015). Neural Networks and Deep Learning, Determination Press, http://neuralnetworksanddeeplearning.com/
du Sautoy, Marcus. (2019). The Creativity Code: Art and Innovation in the Age of AI. Cambridge, MA: Belknap Press of Harvard University Press.