Cooperative Artificial Intelligence
What mechanisms facilitate successful human–machine cooperation and machine-mediated human cooperation?
Zero-sum interactions such as board and computer games have attracted much attention in Artificial Intelligence (AI) research, especially once AI systems surpassed human performance. Yet much of human sociality consists of non-zero-sum interactions that involve cooperation and coordination. Throughout its evolutionary past, the success of the human species has depended largely on its unique abilities to cooperate. Introducing AI agents as cooperation partners into social life holds immense potential, but it also poses the challenge of equipping AI systems with capabilities compatible with human cooperation. Such optimistic visions go back to early thinkers like Norbert Wiener, who envisioned a symbiosis between humans and machines.
This research area therefore studies cooperative human–machine interactions, or Cooperative AI for short.
Developing and measuring key Cooperative AI concepts relies on machine behavior research. Indeed, recent behavioral studies show that dynamic reinforcement learning algorithms can establish and sustain cooperation with humans across a range of economic games. Interest in settings where people and machines cooperate is growing across disciplines such as behavioral economics, human–computer interaction, and psychology; a recent review counts more than 160 behavioral studies. A closer look at these studies, however, reveals a fundamental disagreement about a key methodological feature: how should the payoffs for the machine be implemented?
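To make the methodological question concrete, the sketch below (not taken from any of the reviewed studies) sets up a repeated Prisoner's Dilemma between a simple Q-learning agent and a stand-in human strategy. The payoff matrix, the tit-for-tat stand-in, and the two hypothetical reward schemes ("own" vs. "joint") are illustrative assumptions; the point is only that the machine's payoff implementation is an explicit design choice that a study must fix.

```python
# Hypothetical sketch: how a study might implement "the payoffs for the machine"
# in a repeated Prisoner's Dilemma with a tabular Q-learning agent.
import random

# Standard Prisoner's Dilemma payoffs: (machine payoff, human payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def machine_reward(machine_payoff, human_payoff, scheme="own"):
    """Two illustrative ways a study could define the machine's reward."""
    if scheme == "own":    # machine is rewarded with its own game payoff
        return machine_payoff
    if scheme == "joint":  # machine is rewarded with the joint payoff
        return machine_payoff + human_payoff
    raise ValueError(scheme)

class QLearner:
    """Tabular Q-learning; the state is the human's previous action."""
    def __init__(self, alpha=0.1, epsilon=0.1, gamma=0.9):
        self.q = {(s, a): 0.0 for s in ("start", "C", "D") for a in ("C", "D")}
        self.alpha, self.epsilon, self.gamma = alpha, epsilon, gamma

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(("C", "D"))
        return max(("C", "D"), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ("C", "D"))
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

def play(rounds=200, scheme="own"):
    agent = QLearner()
    state = "start"        # human's previous action, as seen by the machine
    prev_machine = "C"     # seed for the tit-for-tat stand-in
    for _ in range(rounds):
        machine_action = agent.act(state)
        human_action = prev_machine          # stand-in "human": tit-for-tat
        m_pay, h_pay = PAYOFFS[(machine_action, human_action)]
        reward = machine_reward(m_pay, h_pay, scheme)
        agent.update(state, machine_action, reward, human_action)
        state, prev_machine = human_action, machine_action
    return agent.q

if __name__ == "__main__":
    print(play(scheme="own"))    # machine optimizes its own payoff
    print(play(scheme="joint"))  # machine optimizes the joint payoff
```

Under the "own" scheme the machine treats the game payoffs as its reward; under the "joint" scheme it is rewarded for mutual gains. Which of these (or other) implementations a study chooses can change both the machine's learned behavior and how participants interpret the interaction, which is precisely the disagreement noted above.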