Artificial Intelligence in Art: A Simple Tool or Creative Genius?

Study Shows How Language Humanizes AI

September 30, 2020

Intelligent algorithms are used to create paintings, write poems, and compose music. According to a study by an international team of researchers from the Massachusetts Institute of Technology (MIT) and the Center for Humans and Machines at the Max Planck Institute for Human Development, whether people perceive artificial intelligence (AI) as the ingenious creator of a work of art or simply as another tool used by artists depends on how information about AI art is presented to them. The results were published in the journal iScience.

In October 2018, the portrait Edmond de Belamy, created with the help of an intelligent algorithm, was auctioned for USD 432,500 at Christie’s auction house. According to Christie's auction advertisement, the portrait was created by artificial intelligence (AI). The media often described it as the first work of art created not by a human but autonomously by a machine. The proceeds went not to the machine but to the French artists’ collective Obvious. The collective had fed an algorithm pictures of real paintings by human painters and trained it to generate images on its own. They then selected one of the resulting images, printed it, gave it a name, and marketed it. However, the programmers who developed the underlying artificial neural networks and algorithms were neither mentioned nor given any share of the proceeds from the sale of the painting.
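
Obvious reportedly trained a generative adversarial network (GAN) on a collection of digitized portraits to produce the image. The following is a minimal, illustrative sketch of such a training loop in PyTorch – the architecture sizes, variable names, and placeholder data are assumptions for illustration only, not a reconstruction of the collective’s actual code:

```python
# Minimal sketch of a GAN training loop (PyTorch): a generator learns to
# produce images that a discriminator cannot tell apart from a training set
# of real paintings. All sizes and names here are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 100, 64 * 64 * 3  # noise size, flattened 64x64 RGB image

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_DIM), nn.Tanh(),        # outputs a fake "painting"
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),           # probability the image is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

# Placeholder for a real dataset of digitized paintings; random tensors in
# [-1, 1] keep the sketch self-contained and runnable.
real_batches = [torch.rand(32, IMG_DIM) * 2 - 1 for _ in range(100)]

for real_images in real_batches:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real paintings from generated ones.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into labeling fakes as real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In a real project the placeholder tensors would be replaced by a dataset of digitized paintings and a convolutional architecture, but the adversarial structure – generator versus discriminator – is the core of the technique.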

“Many people are involved in AI art: artists, curators, and programmers alike. At the same time, there is a tendency – especially in the media – to endow AI with humanlike characteristics. According to the reports you read, creative AI autonomously creates ingenious works of art. We wanted to know whether there is a connection between this humanization of AI and the question of who gets credit for AI art,” explains Ziv Epstein, PhD student at the MIT Media Lab and first author of the study.

To this end, the researchers informed almost 600 participants about how AI art is created and asked them who should receive recognition for the resulting work of art. At the same time, they measured the extent to which each participant humanizes AI. The individual answers varied greatly. But on average, people who humanized AI rather than perceiving it merely as a tool also felt that the AI, and not the people involved in the creation process, should receive recognition for the art.

When asked which people deserve the most recognition in the process of creating AI art, respondents gave the most credit to the artists who provided the learning algorithms with data and trained them. They were followed by the curators, then by the technicians who programmed the algorithms, and finally by the “crowd” (i.e., the mass of Internet users who produce the data with which AIs are often trained). Respondents who humanized the AI gave more recognition to the technicians and the crowd, but proportionally less to the artists. A similar picture emerged when respondents were asked who is responsible when, for example, an AI artwork violates copyright: here, too, those who humanized the AI placed more of the responsibility on the AI itself.

A key finding of the study is that whether people humanize AI can be actively manipulated by changing the language used to report on AI systems in art. The creative process can be described as one in which the AI, supported only by an artistic collaborator, conceives and creates new works of art; alternatively, it can be described as one in which an artist conceives the artwork and the AI merely executes simple commands given by the artist. These different descriptions changed the degree of humanization and, with it, which of the human actors the participants credited with and held responsible for the AI art.

“Because AI is increasingly penetrating our society, we will have to pay more attention to who is responsible for what is created with it. In the end, there are humans behind every AI. This is particularly relevant when an AI malfunctions and causes damage – for example, in an accident involving an autonomous vehicle. It is therefore important to understand that language shapes our view of AI and that humanizing AI leads to problems in assigning responsibility,” says Iyad Rahwan, director of the Center for Humans and Machines at the Max Planck Institute for Human Development and co-author of the study.

Original Publication:

Epstein, Z., Levine, S., Rand, D. G., & Rahwan, I. (2020). Who gets credit for AI-generated art? iScience, 23(9), Article 101515. https://doi.org/10.1016/j.isci.2020.101515
