Behavioral AI Safety & Ethics

What risks emerge out of the interaction between humans and intelligent machines, and how can they be mitigated?

Today, we increasingly interact with machines powered by artificial intelligence. AI is embedded even in children’s toys. It powers home assistants like Amazon’s Alexa, which helps manage the lives of more than 100 million users. AI also performs a growing range of tasks on behalf of humans, from setting prices in online markets to interrogating suspects.

Key questions that the research area Behavioral AI Safety & Ethics tackles are: Could machines be bad apples that corrupt human behavior? How should we design AI systems to avoid ethical and safety risks? And how do people around the world perceive these risks?

Sample Projects

While people disregard AI advice that promotes honesty, they willingly follow dishonesty-promoting advice, even when they know it comes from an AI.
The Moral Machine experiment was a crowdsourced online survey designed to understand how people resolve ethical dilemmas involving autonomous vehicles.
This project seeks to identify the causes of aversion to AI products, to explore when consumer fears are warranted, and to determine how to overcome consumers’ irrational aversion to AI products and services.
