What was the first AI system in the world? The answer depends on how AI is defined; but if computer programs that play games count as AI, then the history of AI began almost as soon as the computer itself was invented. The world's first computer, ENIAC, was completed in 1946, and by 1950 a research paper on chess programming had already been published. I almost feel sorry for the pioneering computer scientists, eagerly programming chess opponents for themselves because they had no friends to play against. The goal of surpassing human performance was arguably accomplished in 2016, when the 9-dan Go player Lee Sedol, one of the best in the world, was defeated by AlphaGo, developed by Google DeepMind.
Do game-playing AI systems have anything left to prove? In fact, games such as chess, Shogi, and Go belong to a category known as "complete information games," in which every player has full access to all of the information. There are also "incomplete information games," in which players do not. At these, computers have yet to completely outmatch humans.
One such game is Werewolf, the card-based party game discussed in this book. Every college student in Japan is familiar with the Werewolf game; therefore, I believe little explanation is required here. Although the rules have local variations, in general, Werewolf is a party game in which the villagers (players) try to uncover who among them is secretly a bloodthirsty werewolf looking to kill them.
Why is it so hard to teach an AI system to play Werewolf? Because Werewolf is more than just an incomplete information game: it is a game of communication, played out through conversation, and a game of cooperation among multiple players grouped into teams. In that sense, Werewolf poses more challenges than Go or Shogi, and among them, the art of convincing other people is particularly important.
This book describes how the AI Werewolf Project is using Werewolf as a platform to tackle problems such as these that AI has hitherto not been able to solve.
Werewolf is often called a "lying game"; in reality, however, it is a game of judging whether or not to trust other players and of earning their trust. Once we accept that humans and AI may lie to each other, it becomes necessary to convince the other party that the information one holds is correct: even a player who wants to tell the truth must give the opponent grounds to believe that he or she is not lying. In that sense, the day when AI defeats humans at Werewolf will be the day humans come to trust the AI while knowing that it could be lying. One chapter of the book introduces the concept of "possible worlds" as a technique for simulating the emergence of interpersonal trust on a computer.
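To give a flavor of what possible-worlds reasoning might look like, here is a minimal, purely illustrative sketch (not the project's actual model): in a simplified four-player game with one werewolf and one seer, we enumerate every possible assignment of roles and keep only the "worlds" consistent with what has been said, given that a werewolf is allowed to lie.

```python
from itertools import permutations

def possible_worlds(players, n_wolves=1):
    """Enumerate every assignment of roles to players.
    Roles: n_wolves werewolves, one seer, the rest villagers."""
    roles = (["werewolf"] * n_wolves + ["seer"]
             + ["villager"] * (len(players) - n_wolves - 1))
    # set() removes duplicate orderings of identical roles
    return [dict(zip(players, perm)) for perm in set(permutations(roles))]

def consistent(world, claims):
    """A world survives if every claim made by a non-werewolf holds in it.
    A claim is (speaker, target, asserted_role); werewolves may lie,
    so their claims place no constraint on the world."""
    for speaker, target, asserted in claims:
        if world[speaker] != "werewolf" and world[target] != asserted:
            return False
    return True

players = ["A", "B", "C", "D"]
# Player A claims to have divined that B is the werewolf.
claims = [("A", "B", "werewolf")]

surviving = [w for w in possible_worlds(players) if consistent(w, claims)]
```

In the surviving worlds, either B really is the werewolf or A is the werewolf lying about B; everyone else can rule out the remaining worlds. Real AI Werewolf systems must of course go far beyond this toy enumeration, weighing how much to trust each speaker rather than treating claims as simply true or false.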
The book also describes how the AI Werewolf Project, in addition to conducting its own AI research, provides a platform for the collective development of AI systems that play Werewolf. In 2019, the International Joint Conference on Artificial Intelligence (IJCAI) even hosted an international workshop on AI Werewolf. IJCAI 2020 is scheduled to be held in Kyoto, Japan (or online), and if you are a programmer who loves Werewolf, I encourage you to read this book and take part in the development of Werewolf intelligence.
(Written by TORIUMI Fujio, Associate Professor, School of Engineering / 2019)