“Machines are becoming ever more sophisticated in how they interact with us.”

The most important questions and answers about The Face Game

June 12, 2023

With their new multidisciplinary project The Face Game, researchers aim to better understand the interplay between humans and AI. Iyad Rahwan from the Max Planck Institute for Human Development, Jean-François Bonnefon from the Toulouse School of Economics, and Aythami Morales Moreno from the Universidad Autonoma de Madrid talk about the motivation for their project, the challenges they are facing, and the most pressing questions raised by the rapid development of AI.

You recently launched the large-scale online experiment "The Face Game". What is it about?

Jean-François Bonnefon: With The Face Game, we want to know how Artificial Intelligences will decide to appear to humans — and more specifically, how they will build a human face for themselves, depending on what goal they have and which humans they interact with.

Iyad Rahwan: Machines are becoming ever more sophisticated in how they interact with us. The recent meteoric rise of chatbots like ChatGPT shows that machines can now converse with us fluently in natural language. The next frontier for machines will be non-verbal communication. We humans communicate with one another using all kinds of non-verbal signals, from what our clothes say about us, to our facial expressions and body language. The Face Game is about the ‘first impression’ that AI will try to make when interacting with us.

What is the motivation for the project? What is new?

Jean-François Bonnefon: Profile pictures of human faces are everywhere online, and they play a crucial role in shaping the first impression we make on others. We all play a ‘face game’ with each other, deciding how we want to appear in order to produce specific impressions on others. Artificial Intelligences are watching us play this game, which means that they will learn the kind of face that produces a specific impression on one human or another. As a result, they will learn to give themselves a human face that is appropriate to the goal they pursue and the humans they interact with. We need to understand how they will achieve this, and with what results. There are growing concerns about the capacity of AI to manipulate us through personalized communication strategies, and growing concerns that AI may try to give itself human traits to bypass our mistrust of machines. The Face Game sits at the intersection of these concerns.

What challenges are you facing?

Jean-François Bonnefon: In order for AI to learn the face game, that is, to give itself a face that best achieves its goals when interacting with a specific human, it needs to observe a lot of humans playing this game among themselves. That means we face not only technical challenges in this project but also logistical ones, because we need a great many people to participate in the experiment, ideally from all over the world — we would not want our results to be narrowly focused on European faces, for example. Accordingly, we had to make the experiment fun and rewarding for people and to make our best effort to spread the word about it in as many countries as possible! Because we will make our data available to all scientists once we complete the project, we hope that the whole scientific community will help us spread the word in order to create a dataset that will benefit everyone. At the same time, user privacy is very important to the researchers behind The Face Game. Following regulations such as the GDPR and HIPAA, sensitive data such as faces will not be shared. The information shared for research purposes will be fully anonymized.

Aythami Morales: Regarding the development of AI in the project, as Jean-François suggests, we want the project to have a global impact, and for that, we need to develop AI capable of adapting to different cultural contexts worldwide. In the project, we are studying how to employ new generative image models in a responsible manner. We are analyzing different machine learning strategies and how they can affect people's behavior. An important objective of the project is to advance the development of Responsible AI and of methods that allow for the safe integration of AI into our society.

What technology is behind The Face Game?

Aythami Morales: The AIs used in The Face Game are based on neural networks trained to produce predictions that maximize their rewards in each game. Neural networks were chosen for their excellent performance in supervised learning tasks and their widespread use in most AI technologies today. Although we use neural networks specifically trained for the context of The Face Game, we believe that the models used serve as good examples of automatic decision-making systems trained with machine learning algorithms.
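As an illustration only (none of this is The Face Game's actual code, data, or architecture), the kind of system described here, a network trained to predict the reward of showing a given face and then pick the candidate that scores highest, can be sketched with a minimal one-layer network on synthetic features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each candidate "face" is a feature vector, and the
# reward is whether a simulated human reacts positively to it. A small
# network learns to predict that reward from the face features.
n_features = 8
true_w = rng.normal(size=n_features)  # hidden preference the model must learn


def true_reward_prob(face):
    """Synthetic ground truth: probability of a positive reaction."""
    return 1.0 / (1.0 + np.exp(-face @ true_w))


# Observed training data: faces shown in past games and sampled reactions.
faces = rng.normal(size=(2000, n_features))
reactions = (rng.random(2000) < true_reward_prob(faces)).astype(float)

# One-layer network (logistic regression) trained by gradient descent
# to predict the expected reward of showing a given face.
w = np.zeros(n_features)
for _ in range(500):
    pred = 1.0 / (1.0 + np.exp(-faces @ w))
    grad = faces.T @ (pred - reactions) / len(faces)
    w -= 0.5 * grad

# At decision time, the AI scores candidate faces and presents the one
# with the highest predicted reward.
candidates = rng.normal(size=(10, n_features))
scores = 1.0 / (1.0 + np.exp(-candidates @ w))
best_face = candidates[np.argmax(scores)]
```

The real project presumably works with face images and far richer generative models, so `true_reward_prob`, the feature dimensions, and the training loop above are placeholder assumptions; the sketch only shows the predict-reward-then-choose pattern the answer describes.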

What do you think are the most pressing questions for researchers regarding the rapid development of AI?

Iyad Rahwan: There is no simple answer to this question. But there are two broad challenges. On the one hand, there are short- and medium-term questions about how AI will disrupt human society, from replacing tasks performed by human workers today to speeding up the spread of fake news to perpetuating biases in decision-making systems in business and government.

On the other hand, there are questions about longer-term existential risks. These questions explore the possibility of machines becoming more competent than humans in all domains and possibly coming into existential conflict with humans.

It is possible to pay attention to both short- and long-term challenges, as they are not mutually exclusive. But it is also important to recognize that AI offers tremendous opportunities to improve our lives, from increasing access to high-quality health care and education to accelerating scientific and medical advances.

Aythami Morales: AI has the potential to change our society in many fields with significant impact, such as health, sustainability, or the economy, but it also has its risks. In a very short time, this technology has transitioned from laboratories into our society. It is necessary to develop new regulations, processes, and technology that ensure the safe integration of AI into our daily lives. Just as the safety of passengers and pedestrians is key when developing a new car model, the safety and rights of citizens should be at the forefront of new AI-based developments.

You have formulated a scientific research agenda on Machine Behavior in a highly regarded Nature paper. What does this agenda look like, and why is it necessary?

Iyad Rahwan: The Machine Behavior research agenda, outlined in our Nature paper, aims to scientifically study Artificial Intelligence (AI) and Machine Learning (ML) systems as agents with observable behaviors. Our approach views these systems akin to organisms in an ecosystem, with behaviors that can be analyzed and understood.

Given the increasing impact of AI and ML on society, we believe it's essential to understand not just their technical underpinnings but also their real-world behavior, interactions, and effects. The agenda calls for an interdisciplinary approach, combining insights from computer science, psychology, sociology, and economics to examine how these systems act in varied conditions, interact with humans and other machines, and influence their environment.

This research is critical for several reasons. Firstly, it addresses the growing autonomy and complexity of these systems, which often leads to unpredictable and impactful behaviors. Secondly, it acknowledges the profound influence these systems can have on human decisions, social dynamics, and societal structures. Lastly, it recognizes that these systems can interact in complex ways with each other and their environment, potentially leading to emergent phenomena that are not easily predictable from the behavior of individual systems.

By adopting the Machine Behavior framework, we aim to foster a deeper, more nuanced understanding of the role of AI and ML in society. Such knowledge can inform better design, regulation, and governance of these systems, ensuring they are used ethically and beneficially. This is not just a technical challenge but also a social and behavioral science challenge, requiring us to bridge the gap between disciplines to fully understand and navigate the AI-infused future.

About the interviewees

Iyad Rahwan is the managing director of the Max Planck Institute for Human Development in Berlin, where he founded and directs the Center for Humans & Machines. He is an honorary professor of Electrical Engineering and Computer Science at the Technical University of Berlin. Until June 2020, he was an Associate Professor of Media Arts & Sciences at the Massachusetts Institute of Technology (MIT). Rahwan holds a Ph.D. from the University of Melbourne, Australia.

Jean-François Bonnefon is a Research Director at the Toulouse School of Economics and holds the Moral AI chair at the Artificial and Natural Intelligence Toulouse Institute. He conducts behavioral research on machine ethics and human-AI cooperation.

Aythami Morales is an Associate Professor at Universidad Autonoma de Madrid and a member of the BiDA-Lab research group. He conducts research on machine learning applications with a special interest in responsible AI.
