Sample Project: Human Detection of Machine Manipulated Media

Recent advances in neural networks for content generation enable artificial intelligence (AI) models to generate high-quality media manipulations. Here we report on a randomized experiment designed to study the effect of exposure to media manipulations on over 15,000 individuals' ability to discern machine-manipulated media. We engineer a neural network to plausibly and automatically remove objects from images, and we deploy this neural network online with a randomized experiment where participants can guess which image out of a pair of images has been manipulated. The system provides participants feedback on the accuracy of each guess. In the experiment, we randomize the order in which images are presented, allowing causal identification of the learning curve surrounding participants' ability to detect fake content. We find sizable and robust evidence that individuals learn to detect fake content through exposure to manipulated media when provided iterative feedback on their detection attempts. Over a succession of only ten images, participants increase their rating accuracy by over ten percentage points. Our study provides initial evidence that human ability to detect fake, machine-generated content may increase alongside the prevalence of such media online.
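The learning-curve measurement described above can be sketched with a small simulation. This is an illustrative toy, not the paper's analysis: the participant count, the baseline accuracy, and the per-image improvement rate below are assumed values chosen only to show how mean accuracy per sequence position yields a learning curve.

```python
import random

random.seed(0)

# Hypothetical simulation: each participant rates a sequence of ten image
# pairs, and we estimate the learning curve as the mean accuracy at each
# position in the sequence. The improving probability of a correct guess
# is an assumption for illustration, not the study's fitted model.
N_PARTICIPANTS = 5000
N_IMAGES = 10

def simulate_guess(position):
    """Return 1 for a correct guess; accuracy rises with exposure (assumed)."""
    p_correct = 0.55 + 0.012 * position  # assumed baseline and slope
    return 1 if random.random() < p_correct else 0

# correct[i] counts correct guesses at sequence position i
correct = [0] * N_IMAGES
for _ in range(N_PARTICIPANTS):
    for pos in range(N_IMAGES):
        correct[pos] += simulate_guess(pos)

learning_curve = [c / N_PARTICIPANTS for c in correct]
improvement = learning_curve[-1] - learning_curve[0]
print(f"accuracy at image 1:  {learning_curve[0]:.3f}")
print(f"accuracy at image 10: {learning_curve[-1]:.3f}")
print(f"improvement:          {improvement:.3f}")
```

Because the image order is randomized in the real experiment, position in the sequence is independent of image difficulty, which is what lets a per-position accuracy curve like this one be read causally.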

Web site: http://deepangel.media.mit.edu/


Scientific writings

Groh, M., Epstein, Z., Obradovich, N., Cebrian, M., & Rahwan, I. (2021). Human detection of machine-manipulated media. Communications of the ACM, 64(10), 40–47.


Media

Article: New York Times
September 2016

How Tech Giants Are Devising Real Ethics for Artificial Intelligence, by John Markoff.
Video: TEDx Talk
January 2018

TEDxCambridgeSalon
Talk given by Iyad Rahwan on Why We Need a New, Algorithmic Social Contract.
