Humans and Machines

The Center for Humans & Machines conducts interdisciplinary science to understand, anticipate, and shape major disruptions from digital media and artificial intelligence to the way we think, learn, work, play, cooperate, and govern.


Future of Work

As advances in robotics and artificial intelligence revive concerns about the impact of automation on jobs, a question looms: How will automation affect employment in different cities and economies?

The Nightmare Machine

Creating a visceral emotion such as fear remains one of the cornerstones of human creativity. The challenge is especially relevant in an age when we wonder about the limits of AI: can machines learn to scare us?

DARPA Balloon Challenge

What role can the Internet and social networks play in timely communication, wide-area team-building, and urgent mobilization? Exploring these questions, an MIT team took part in the 2009 Balloon Challenge and won.

Machine Behaviour

Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms.


Beeme

Beeme is a massive social experiment using augmented reality. An agent gives up their free will to save humanity from an evil AI called Zookd: the person agrees to let the Internet pilot their every action, with the public controlling the human avatar by suggesting actions.

Moral Machine

Adoption of self-driving cars promises to reduce the number of traffic accidents. But what happens when an accident is inevitable? If an autonomous vehicle must weigh potential risk to pedestrians on the road against risk to the passenger in the car, how should it decide?


Shelley

Shelley is a deep-learning-powered AI trained on 140,000 eerie stories from r/nosleep. Shelley takes a bit of inspiration in the form of a random seed, or a short snippet of text, and starts creating stories. But what Shelley truly enjoys is working collaboratively with humans.


A central idea in machine learning is that the data we use to teach an algorithm can significantly influence its behavior. But what happens if an algorithm is fed biased data? What would an image-captioning algorithm see in an inkblot image if it were trained on the wrong data?
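The premise can be illustrated with a toy sketch (the data, function names, and word-frequency "captioner" here are hypothetical illustrations, not the project's actual model): two identical models trained on differently biased corpora describe inputs in very different terms.

```python
from collections import Counter

def train(captions):
    """Build a unigram frequency model from a list of caption strings."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

def describe(model, k=2):
    """Return the k most frequent words: a crude stand-in for 'what the model sees'."""
    return [word for word, _ in model.most_common(k)]

# Two made-up training sets with different biases (illustrative data only).
neutral = ["bird on branch", "bird in flight", "bird near water"]
skewed  = ["dark shadow looming", "dark figure looming", "dark shape looming"]

# The same architecture, fed different data, yields different behavior.
print(describe(train(neutral)))  # favors words like "bird"
print(describe(train(skewed)))   # favors words like "dark", "looming"
```

The point is not the trivial model but the asymmetry: nothing about the algorithm changed, only the data, yet the two models "see" entirely different things.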
