Interdisciplinary Perspectives on Human and Machine Creativity
2-4 July, 2025 | Max Planck Institute for Human Development
About the Event
Join us in Berlin this summer for an event exploring the evolving relationship between human creativity and artificial intelligence.
Over the course of three days, Interdisciplinary Perspectives on Human and Machine Creativity will bring together researchers from behavioural and computer science, philosophy and ethics, arts, communication, and media studies. The event will offer a space to exchange ideas, challenge assumptions, and envision new directions for understanding how both humans and machines shape creative expression.
The program will feature short presentations, moderated panel discussions, and opportunities for cross-disciplinary conversation. Beyond the formal sessions, participants will also be invited to take part in afternoon and evening visits to the Berlin Biennale for Contemporary Art and other art events, bridging research with cultural practice in meaningful ways.
The event is free to attend, and refreshments will be provided. Unfortunately, we are unable to cover travel or accommodation costs at this stage.

Program
Wednesday, July 2
This keynote will explore how artificial intelligence is forging a new sphere of cultural expression that transcends traditional boundaries. It will introduce the concept of "machine culture," culture generated and mediated by machines, and will uncover the implications and opportunities it presents for creative industries, public policy, and society at large.
Theorists of creativity often treat intention as a necessary condition for creativity. On this view, an agent must intentionally produce outputs that are both novel and useful in order to count as creative. When such outputs arise accidentally, these theorists claim that only pseudo-creativity has occurred. Since generative AI produces novel and useful outputs without intention, some argue it is, at most, pseudo-creative. In this presentation, we challenge that view and argue that the intention criterion should be abandoned. We offer three arguments in support of this claim. First, even in the case of human creativity, the intention requirement is unacceptably vague. Second, creativity has long been attributed to sources that lack intentional agency. And third, both general and expert usage of the term creativity (and related terms) has evolved in response to the rise of generative AI. Rather than dismissing this usage for failing to match existing definitions, we suggest it indicates that we need to revise the concept of creativity itself—specifically, by dropping intentional agency as a necessary condition.
Novelty is a key component of creativity, and it appears that (at least superficially) generative AI systems can produce novel outputs. However, critics of generative AI have noted the ‘generic’ or ‘samey’ nature of AI art. This raises the question of whether generative AI systems are really capable of producing adequate novelty to be considered truly creative. This paper argues that popular generative AI systems do indeed have a novelty problem. The perceived banal nature of AI outputs can be explained by a lack of ‘originality’. Originality is often seen as synonymous with novelty, but it can be distinguished as salient novelty (see Gaut). This paper puts forward an account of what would count as salient newness in AI images. Given the vagueness of ‘salience’ in this definition, we can utilise work by Sibley on originality to uncover three criteria for salience in novelty: it must be relevant; it must be attributable to the creator; and it must exhibit a moderate to significant degree of variation from prior works. In considering the future of machine creativity, I suggest an increased focus on this third criterion: mechanisms of variation. The requirement for variation in Sibley’s account not only thickens the account of originality but also meshes with the evolutionary account of creativity; however, variation in evolutionary processes is often minimal in the short term. For the more extreme or transformative kinds of creativity, we need larger variances. This paper therefore discusses the need for significant variation for originality in computational creativity.
Contemporary discourse on the ethics of AI-powered creativity focuses on demonstrable harm: technological deskilling, job displacement, authorship controversies. These concerns share a common structure: identifying clear winners and losers where groups suffer concrete disadvantages.
However, this harm-centric framework cannot capture subtler ethical challenges in ostensibly mutually beneficial ‘win-win’ arrangements. Consider FN Meka, the AI-generated rapper created using voice synthesis trained on human vocal data. While the source artist received compensation, this exemplifies wrongful exploitation despite mutual benefit. His creative labor—vocal patterns, stylistic elements, expressive nuances—generated millions for technology companies and record labels, while he received a fraction of that value.
Cases like this involve unfair advantage-taking as understood in the philosophical literature on exploitation. A popular, albeit not uncontroversial, philosophical position defines exploitation as one agent's disproportionate extraction of value from another's productive (or creative) labor. This problem has interesting, important implications beyond individual cases, thereby helping us adopt a more nuanced conceptualization of AI’s structural societal impact, where Big Tech companies leverage concentrated wealth, power, and information resources to secure disproportionately favorable terms with creators. Companies may point to ‘win-win’ benefit distributions to justify such arrangements. This concentration simultaneously results from exploitation, and could enable further exploitative arrangements, possibly creating self-reinforcing inequality cycles if left unchecked.
In this paper, I build on and extend the sophisticated yet underexplored theoretical resources that contemporary analytic political philosophy can offer for analyzing the subtle problem of wrongful ‘win-win exploitation’ cases. Recognizing exploitation as a distinct ethical category provides crucial analytical tools for evaluating AI creativity's moral landscape, revealing that win-win scenarios may nonetheless involve (at least pro tanto) ethical wrongs requiring normative scrutiny beyond simple harm-based metrics. My argument does not imply that AI creativity is ethically doomed; just that it requires more nuanced, and possibly surprising, guardrails.
Standard theories of creativity possess two features. First, they are monistic: for them, questions of the form “Is x creative?” or “Does creativity matter?” are to be answered by appealing to a unique, fundamental conception of creativity. By this, I don’t mean that they fail to recognize several notions of creativity; most of them do. Rather, I mean that they are committed to the claim that there cannot be more than one fundamental notion of creativity. Second, their conception of creativity is kainotic (from the Greek kainos, ‘new’). That is, they assume that the fundamental notion of creativity, the one in virtue of which all the others can be defined, must be understood in terms of unprecedented novelty. Margaret Boden’s canonical definition of creativity, for instance, clearly displays these two commitments: her notion of psychological creativity is both fundamental and defined in terms of psychological processes that, as a type, are responsible for the generation of unprecedented novelty. In my presentation, I will argue that we have both conceptual and empirical reasons to reject these two aspects of standard theories of creativity. A better theory of creativity should be a pluralistic one, in which different fundamental notions of creativity collaborate with each other. In particular, I will distinguish between kainotic and non-kainotic fundamental conceptions of creativity and will sketch an account of how these could be combined to offer a better account of what creativity is and why it matters.
Through her work, Nora Al-Badri will show how it is possible to subvert the dangers of AI into emancipatory moments, moments that can help us envision and create more anticolonial, positive, and just futures. She will focus on a tradition of institutional critique in the arts, as well as on spaces such as museums and their digital collections, where our collective (cultural) knowledge and histories are narrated and preserved in highly contested ways.
As creative sectors increasingly adopt generative AI for tasks ranging from ideation to full-scale multimedia production, concerns about the long-term impact on human expertise and system reliability have intensified. Generative AI’s rapid integration into creative workflows poses two interrelated challenges: the erosion of specialized human skills (deskilling) and the progressive degradation of model performance (entropic collapse). Huang, Jin, and Li (2024) demonstrate that AI-assisted tools can substantially enhance novice outputs while marginalizing expert practitioners, leading to a narrowing of creative expression. Shumailov et al. (2024) reveal that models retrained on their own synthetic data suffer marked declines in output diversity and predictive accuracy, undermining trust in generative quality. Although these crises have been examined separately, there is a pressing need for unified operational guidance. Building on empirical evidence, this paper outlines targeted Skill-Retention Strategies, including staged human oversight during ideation, structured iterative feedback loops, and modular competency-building exercises, to preserve domain expertise alongside AI fluency. It also formulates Data Purity Guidelines that mandate regular provenance audits, transparent labeling distinguishing human from synthetic inputs, and minimum thresholds of human-generated data in training corpora. By translating these research-based insights into actionable protocols for creative professionals, AI vendors, and policymakers, our work seeks to sustain the vitality of human artistry while safeguarding model robustness. Widespread adoption of these measures promises to foster a balanced creative ecosystem in which generative AI amplifies rather than eclipses human ingenuity, ensuring sustainable innovation and cultural diversity in the AI era.
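A minimum-human-data threshold of the kind the Data Purity Guidelines mandate can be sketched as a simple provenance check. The sketch below is illustrative only: the record schema, the "human"/"synthetic" labels, and the 0.8 threshold are assumptions, not values from the paper.

```python
def human_data_fraction(corpus):
    """Fraction of records labeled as human-generated.

    corpus: list of records carrying a provenance label,
    e.g. {"text": ..., "source": "human"} or {"source": "synthetic"}.
    """
    if not corpus:
        return 0.0
    human = sum(1 for record in corpus if record["source"] == "human")
    return human / len(corpus)


def passes_purity_guideline(corpus, min_human_fraction=0.8):
    """Flag corpora that fall below a minimum share of
    human-generated data before they are used for retraining."""
    return human_data_fraction(corpus) >= min_human_fraction


corpus = [{"source": "human"}] * 9 + [{"source": "synthetic"}]
print(human_data_fraction(corpus))      # 0.9
print(passes_purity_guideline(corpus))  # True
```

In practice such a check would sit downstream of the provenance audits the paper calls for, since the labels themselves are only as trustworthy as the auditing process that assigns them.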
Thursday, July 3
Generative machine learning models' increasing prevalence and capacities are transforming creative processes. We identify two commonly voiced threats to professional artists: i) Barrier Reduction, i.e. AI enabling laypeople to engage in creative work; and ii) Autonomous Creativity, i.e. a reduced need for human input in (semi-)autonomous AI systems. Has AI already leveled the playing field between professionals and laypeople? We address this question by experimentally comparing 50 professional artists and a demographically matched sample of laypeople. To this end, we designed two tasks that approximate artistic practice in both faithful and creative image creation: replicating a reference image, and moving as far away as possible from it. We developed a bespoke platform where participants used a modern text-to-image model to complete both tasks. Artists, on average, produced more faithful and creative outputs than their lay counterparts, highlighting the continued value of professional expertise --- even within the confined space of generative AI itself. We also explored how well an exemplary vision-capable large language model (GPT-4o) would complete the same tasks. It performed on par in copying and slightly better on average than artists in the creative task, although not above the top performers in either human group. These results highlight the importance of integrating artistic skills with AI training to prepare artists and other professionals for a technologically evolving landscape. We see a potential in collaborative synergy with generative AI, which could reshape creative industries and education in the arts --- just as other technologies like photography have done before.
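One plausible way to operationalize the two tasks, assuming each output and the reference image are compared in a shared embedding space, is cosine similarity for the copy task and its complement for the divergence task. The talk does not specify its scoring method; the vectors below are invented stand-ins for image embeddings.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: a reference image and two submissions.
ref = np.array([1.0, 0.0, 0.0])
close = np.array([0.9, 0.1, 0.0])   # a faithful replication attempt
far = np.array([-0.2, 0.9, 0.4])    # an attempt to move away

faithfulness = cosine(ref, close)    # higher is better for the copy task
divergence = 1.0 - cosine(ref, far)  # higher is better for the "move away" task
```

Any such metric captures only embedding-space distance; judging whether a distant output is also creative (rather than merely different) still requires human or model-based evaluation.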
Generative AI has swiftly and profoundly transformed the creative art landscape, rendering traditional tools seemingly obsolete and causing significant shifts in how artists approach their craft. This rapid evolution has led to widespread misconceptions, particularly among those who closely associate artistic value with the tools used, leading some to believe that AI might replace aspects traditionally considered exclusive to human creativity. However, generative AI fundamentally represents a radically different category of artistic tool—one capable of significantly expanding an artist's expressive potential rather than replacing creative talent itself. The confusion and anxiety around AI's role in art primarily arise from the current limitations in interacting with generative models, which predominantly rely on simplistic text prompts. The limited expressive capacity of text alone underscores the urgent necessity for more sophisticated interaction mechanisms, especially for detailed creative vision communication in image and video generation. Future developments must focus on creating interactive, multimodal tools that facilitate deeper and richer communication between artists and AI models. These tools will need to incorporate a thorough understanding of physical principles, human contexts, and three-dimensional spatial awareness. Additionally, artists will require innovative methods for capturing and integrating real-world contexts with AI-generated content, ensuring seamless blending between generative outputs and captured reality. This talk will explore recent advancements in interactive generative AI tools, highlighting promising directions toward enhanced, intuitive, and artistically empowering human-AI collaboration.
Generative AI is entering contemporary artistic practices. This talk explores how artists are harnessing these generative neural networks for creative expression across diverse media practices, including image, video, text, and form generation. Drawing on practice-based research, we examine both the potential and limitations of current generative AI technologies. We argue that the value of these technologies hinges on the artist's creation of assemblages: unique configurations of AI components that realize a distinct voice and artistic vision. We also highlight the key importance of open, small AI models for artistic exploration in media art.
In this talk, we will present findings from our interdisciplinary study on how people collaborate with artificial agents that display social behavior during non-routine analytical tasks. Unlike routine tasks, these require key ingredients of creative teamwork such as flexibility, adaptation, and cooperative problem-solving. To investigate this, we developed an escape room game where participants solved a variety of challenges requiring different skills, either with other human partners or in a mixed team that included a socially responsive AI agent. We analyzed how team composition affected performance, measured by escape time and number of errors. We also examined the dynamics of their conversations to identify patterns of effective collaboration. In a follow-up phase, we extended the study by collecting physiological data, including heart rate and skin conductance, to better understand participants' unconscious emotional responses during collaboration. This multimodal approach allowed us to explore not only what people do in human-AI teams, but also how they feel. We will discuss how these insights contribute to the design of emotionally responsive AI partners that can adapt in real time to human cognitive and affective states. Our findings highlight the central role of emotional and social interaction in creative processes and point toward the development of AI systems that support and foster human creativity.
In an era dominated by simulated AI, the tangible and embodied dimensions of intelligence—rooted in the physical body, time, and materiality—are increasingly overlooked or forgotten. This talk focuses on the intersection of painting, computer graphics, and robotics as tools for exploring embodied knowledge and creativity in the age of AI. Through the presentation of selected art projects that merge biologically inspired robotic systems, deterministic robotic processes, and human-machine interactive painting methods, the talk will investigate new forms of hybrid human-machine creativity and the practical implications of this convergence. Emphasizing the growing divide between digital (simulated) and analog (physical) realms, it will question how this shift impacts a society becoming increasingly disconnected from embodied experience. The talk will feature artworks from the Embodied Agent in Contemporary Art project as a case study, showcasing how these emerging technologies challenge and redefine the very nature of creativity.
AI has become an umbrella term for a myriad of data-driven technologies and forms of generative media production. These technologies have revealed forms of human exceptionalism to be, more than anything, narratives, framings, and constructions, which have real, material implications. Notions of creativity, intelligence, art, science, and so on are thus prone to be reconfigured and performatively brought forth by human-machine collaboration. Drawing on the experiences and collaborations of AI: Ancestral Immediacies, my contribution will reflect on artistic practice as media theory in times of AI.
Can generative AI be genuinely creative, or does it merely remix existing human ideas? While AlphaGo revolutionized the game of Go with strategies previously unimagined—and now adopted by expert players—the question remains open for more open-ended domains like art. In this talk, we present a framework for studying creativity and cultural evolution in the visual arts through generative AI. We introduce a large-scale dataset of 1,114 artists spanning the 1400s to 2000s, from which we derive “style embeddings” using textual inversion on Stable Diffusion’s CLIP component. Our analysis reveals that artists tend to cluster by historical era, while the convex hull of styles expands over time, suggesting the continual emergence of new stylistic directions. We further model each artist’s style as a linear combination of their predecessors, showing that many styles can be explained by a small subset of influential forerunners. However, we argue that numerous potentially compatible artistic concepts remain unexplored, not because they are fundamentally incompatible, but because of historical, social, and cognitive biases. Generative AI, despite its sweeping training data, inherits these biases from human culture. Inspired by research in algorithmic scientific discovery, we propose a system designed to counteract these biases in order to unveil “culturally inaccessible” concept combinations—such as Renaissance-style airplanes—that lie outside the historical record. Our results highlight the potential for AI to transcend cultural constraints, offering new avenues for examining and shaping the future of artistic innovation.
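The idea that many styles can be explained by a small set of forerunners can be illustrated with an ordinary least-squares decomposition. This is a toy sketch: random vectors stand in for the CLIP-derived style embeddings, and the dimensions and mixing weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for style embeddings (the real ones would come from
# textual inversion on Stable Diffusion's CLIP encoder).
predecessors = rng.normal(size=(5, 16))  # 5 earlier artists, 16-dim styles

# A later artist whose style is, by construction, a blend of two forerunners.
target = 0.6 * predecessors[0] + 0.4 * predecessors[3]

# Least-squares weights expressing the target style as a linear
# combination of predecessor styles; sparse weights would indicate
# that a small subset of forerunners suffices.
weights, residuals, rank, sv = np.linalg.lstsq(
    predecessors.T, target, rcond=None
)
print(np.round(weights, 2))  # weights concentrate on artists 0 and 3
```

With real embeddings one would typically add a sparsity penalty (e.g. lasso regression) rather than plain least squares, so that the recovered influences stay interpretable.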
Is human creativity the product of exceptional individuals, or does it emerge from the decentralized, cumulative processes of cultural evolution? The way we answer this question has deep implications for how we design school curricula—especially in an era where generative AI challenges traditional notions of originality and expertise. In this talk, I draw from the emerging field of Computational Curriculum Studies (CCS) to examine competing causal models of creativity and their influence on educational design. I begin by contrasting agential and selectionist accounts of creative innovation, positioning them within a broader evolutionary framework. To reconcile these perspectives, I introduce the concept of Integrated Causal Reasoning (ICR)—a systems-level approach that models creativity as arising from the dynamic interaction of cognitive, social, and technological processes. From this perspective, CCS serves as a framework for both designing curriculum to foster computational thinking—including pattern recognition, abstraction, generalization, and modeling—and for critically applying computational tools to analyze and iteratively improve curriculum structures themselves. These metaconceptual competencies are not only essential to programming or AI literacy, but lie at the heart of human-machine co-creative capacities. I explore how students can engage with creativity as a cultural and epistemic phenomenon by building causal models and curricular resources that trace the emergence of scientific and technological innovation across species and societies. This dual focus of CCS—on fostering creativity through computational thinking, metaconceptual competencies, and reimagining curriculum as a participatory design space—offers a powerful yet complex landscape for cultivating creativity at individual, collective, and global scales.
As artificial intelligence continues to reshape creative industries, the relationship between human and machine creativity demands deeper exploration. This presentation will examine key questions surrounding AI’s role in augmenting creative processes, drawing on insights from my research and artistic practice. My work spans AI-human collaboration in solo and team creativity, the neuroscience of creativity, and neurodesign—an approach that integrates cognitive science into AI system design. A central focus will be on how AI can enhance human creativity, not simply by automating tasks but by facilitating cognitive states such as Flow, a critical condition for peak creative performance. Through performance-based neuro-art, I use wearables to stream neurophysiological data, which is classified by AI and transformed into audio-visual outputs designed to encourage and sustain Flow states. This approach highlights how computational systems can dynamically interact with human cognition to expand creative potential. Beyond individual creativity, the talk will briefly explore how AI is reshaping artistic practice and cultural transmission, along with key challenges like deskilling, model collapse, and shifts in creative expertise. Finally, I will discuss how human behavior and cognitive insights can inform the development of future AI tools, ensuring that they amplify rather than diminish the richness of human creative expression. Through an interdisciplinary lens, this presentation will offer new perspectives on the evolving synergy between human ingenuity and machine intelligence.
After twenty years as a landscape painter, I encountered a profound creative crisis in 2015. Seeking new modes of abstract expression, I began experimenting with artificial intelligence, ultimately co-creating, together with a data scientist, the AI Muse, a custom algorithm trained on my paintings. This partnership transformed my practice: from treating AI-generated imagery as mere outputs to using it as a catalyst for new, human-led compositions. By engaging in iterative cycles of painting, digitization, and machine feedback, I discovered how algorithmic processes could expand my visual vocabulary and invite collaborative decision-making. The new technology also opened other doors, especially one to IBM Research–Zurich, which enabled me to start new visual collaborations based on quantum computing. During the informal encounter at my Berlin studio, I will offer firsthand insight into how long-term engagement with AI and quantum tools can evolve an artistic practice beyond traditional boundaries. I will set up an exhibition featuring AI-inspired paintings and demonstrate how the AI Muse works, illuminating the dynamic interplay between human intuition and machine-generated suggestion. Together, we will explore how technology can both challenge and enrich the creative process.
Friday, July 4
Creativity is a political act. By taking creative action, we assert the possibility of change, we verify the possibility of participating in the construction of the (social) world, we challenge received views and practices, and we create new forms of common life. To realise its political potential, creativity requires that persons have a direct, embodied experience of change, of their potential role in it, and of the challenges that come with it. All democratic education, claims the French philosopher Jacques Rancière, is an aesthetic experience. It is an open question to what extent AI-mediated art and aesthetic experience can enhance, support, or rather endanger the political potential of art and creativity. That crucially depends on how much AI will support the embodied experience of injustice and the possibility of achieving direct participation in building new social practices and social change. In this philosophical talk, I will draw on literature at the intersection of radical pedagogy, democratic theory, and aesthetics (Dewey, Freire, Rancière) to begin sketching a normative framework for assessing the impact of AI on aesthetic experience and creativity. At the center of the framework is the question of the extent to which AI can improve our democratic skills or rather produce “political deskilling”, since political skills crucially depend on our ability to have meaningful embodied experiences.
In Autonomous Technology: Technics-out-of-Control … (1977), Langdon Winner notes that ‘technology is a source of concern because it changes in itself and because its development brings other kinds of changes in its wake’. Around the time that Winner was writing on autonomous technologies as ‘engines of change’ (1977: 44-100), Herbert A. Simon won the 1978 Nobel Prize in Economics for his theory of bounded rationality, a keystone concept in AI that includes the decision strategy known as ‘satisficing’. The term is a portmanteau of what is satisfactory and what will suffice. It describes a decision-making strategy that involves searching through the available alternatives until an acceptability threshold is met. The notion of a quantifiable acceptability threshold, albeit within a fragile, risk-laden environment, underwrites the logic of automated, data-driven creative technologies. This paper examines the socio-technical tradeoffs embedded in the creation, consumption, and interpretation of machine-made creative works and develops an analytical framework for thinking through the dialectically instrumental and political dimensions of human-machine creativity, taking into account elements including nomenclature, categorisation, (de)formation, purpose (versus play), design (versus chance), and pataphysics (versus realism and objectivity). The overall argument of the paper is that once the logic of a visual system is established it is hard to work (and think) outside of it, and this has deep epistemo-political consequences. The paper offers some pathways out of this that are attentive to aesthetic encounters caught between the embodied and instinctual on one side and the flow-based and computational on the other.
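Simon's satisficing strategy, as described above, can be sketched in a few lines: scan the alternatives in order and accept the first that clears the acceptability threshold. This is a hypothetical minimal rendering of the concept, not an implementation from the paper.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def satisfice(options: Iterable[T],
              utility: Callable[[T], float],
              threshold: float) -> Optional[T]:
    """Return the first option whose utility meets the acceptability
    threshold, rather than searching for the global optimum."""
    for option in options:
        if utility(option) >= threshold:
            return option
    return None  # no acceptable alternative was found

# A maximizer would scan every option and return 9;
# the satisficer stops at the first acceptable one.
print(satisfice([3, 7, 2, 9], utility=lambda x: x, threshold=5))  # 7
```

The contrast with optimization is exactly what makes the threshold quantifiable: everything hinges on who sets the threshold and how, which is the political question the paper pursues.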
As artificial intelligence technologies reshape how language is structured, circulated, and commodified, they underscore that language is not merely a medium of communication but a primary instrument through which power is manifested, exercised, and reinforced. This paper, SolidGoldMagikarp: Artistic Interventions in the Political Latent Space of Large Language Models, presents artistic research into the political, epistemological, and ideological dimensions of Large Language Models (LLMs) through experimental misuse, critical coding, and speculative hacking. I approach LLMs as more than tools: they are complex, opaque, techno-social systems that encode worldviews, political ideologies, and sociotechnical imaginaries, often invisibly, subtly, and seamlessly. The project, rooted in a critical art practice, engages with models like ChatGPT, Claude, Gemini, DeepSeek, and YandexGPT as both media and material. I also work with text-to-image and text-to-3D diffusion models built on LLMs. By pushing these systems beyond their intended use – generating images from abstract or politically charged prompts, forcing ideological consistency through recursive questioning, and testing censorship boundaries – I aim to map the latent space not only as a technical construct but as a field of power. In this way, I examine how language is fragmented, tokenized, and reformatted for profit, control, and ideology.
As generative computational systems such as Generative Adversarial Networks (GANs) become increasingly integrated into creative artistic workflows, longstanding assumptions about authorship, originality, and creative control are being challenged. While current debates around intellectual property and AI-generated art often centre on input–output analysis, artistic traditions, theories of human action or legal attribution, this paper proposes a shift in methodological focus toward the practice of making generative art, centred on algorithm–artist interaction. Using “Follow the Artist”, an autoethnographic method combining aspects of traditional ethnography, qualitative inquiry, and digital ethnography, we document the experience of re-creating a GAN-based artwork by following the practice of an existing GAN artist, capturing embodied interactions involving the generation of datasets, model training, and the use of interfaces. Through the analysis of autoethnographic memos, we identify three areas of algorithmic opaqueness that shape the generation of GAN art: conception (human mental models of the algorithm), interaction (gaps in human understanding of algorithmic logic), and workflow (step-by-step technical dependencies that structure decision-making). Our findings suggest that authorship in GAN art is governed not by a singular, static intent or purpose but by situated practices, interface mediation, and the negotiation of opaqueness. By recentring the art-making process, this paper contributes a method for understanding authorship, originality, and creative control in AI art, and offers an alternative foundation for future debates on the nature of agency.
Philosophical traditions linking vision and knowledge have long shaped our conceptions of humanity. Yet these are now challenged by neuroscience’s revelations about the bodily underpinnings of imagination, as well as by AI’s capacity to generate synthetic images from textual prompts. If vision once epitomized a direct route to knowledge, present-day technologies entangle sight with computation, prompting a thorough re-examination of originality, authorship, and the nature of understanding. This research investigates how the interplay between human cognition and AI-driven imagery reshapes both personal and collective imagination, thereby reframing centuries-old assumptions about visual perception as a privileged conduit to meaning. By bridging “individual” tastes with vast collective archives, generative AI unsettles how cultural artifacts are created, shared, and interpreted, fuelling philosophical inquiry into how our imaginative faculties may be subtly reconfigured through machine-driven “cognitive hacking.” Indeed, text-based interactions with algorithmic latent spaces challenge conventional boundaries of novelty, often blurring the line between genuinely new vision and recycled pattern-based recombinations. In this evolving synergy between human and machine, biases embedded in training data become potent creative constraints, raising ethical and epistemic questions around shifting norms of authorship and artistic intent. Ultimately, as neural models encapsulate and re-model culture, AI-driven imagery emerges not merely as a practical tool but as a transformative force that alters how we see, think, and know. By reordering the transmission of cultural meaning and recasting vision’s once-straightforward link to truth, generative AI compels us to reconsider the conditions under which knowledge is produced, unveiling fresh possibilities and unforeseen perils for creativity and perception alike.
Confirmed Speakers
View Bio
Prof. Iyad Rahwan is director of the Max Planck Institute for Human Development in Berlin, where he founded and directs the Center for Humans & Machines. He is also an honorary professor of Electrical Engineering and Computer Science at the Technical University of Berlin. Prior to moving to Berlin, he was an Associate Professor of Media Arts & Sciences at the Massachusetts Institute of Technology (MIT). A native of Aleppo, Syria, Rahwan holds a PhD from the University of Melbourne, Australia.
Rahwan's work lies at the intersection of computer science and human behavior, with a focus on the impact of Artificial Intelligence and digital media on the way we think, learn, work, play, cooperate and govern. His work has appeared in the world’s leading academic journals, including Science and Nature, and features regularly in major media outlets, including the New York Times, The Economist, and the Wall Street Journal. His artistic and scientific work has also been featured at some of the world’s leading cultural institutions, such as Ars Electronica, the Science Museum London and the Cooper Hewitt Smithsonian Design Museum.
View Abstract
This keynote will explore how artificial intelligence is forging a new sphere of cultural expression that transcends traditional boundaries. It will introduce the concept of "machine culture," culture generated and mediated by machines, and will uncover the implications and opportunities it presents for creative industries, public policy, and society at large.
View Bio
James S. Pearson is a Marie Curie Fellow in Political Science at the University of Amsterdam. He primarily works in political theory, and his recent research examines how digital technologies impact democratic politics. James is also the PI of a research project on the social value of creativity at the Centre of Philosophy at the University of Lisbon. He has published in journals including Philosophy and Technology, Inquiry, and the Canadian Journal of Philosophy, and is the author of Nietzsche on Conflict, Struggle and War (Cambridge University Press, 2022).
View Abstract
Theorists of creativity often treat intention as a necessary condition for creativity. On this view, an agent must intentionally produce outputs that are both novel and useful in order to count as creative. When such outputs arise accidentally, these theorists claim that only pseudo-creativity has occurred. Since generative AI produces novel and useful outputs without intention, some argue it is, at most, pseudo-creative. In this presentation, we challenge that view and argue that the intention criterion should be abandoned. We offer three arguments in support of this claim. First, even in the case of human creativity, the intention requirement is unacceptably vague. Second, creativity has long been attributed to sources that lack intentional agency. And third, both general and expert usage of the term creativity (and related terms) has evolved in response to the rise of generative AI. Rather than dismissing this usage for failing to match existing definitions, we suggest it indicates that we need to revise the concept of creativity itself—specifically, by dropping intentional agency as a necessary condition.
View Abstract
Novelty is a key component of creativity, and it appears that (at least superficially) generative AI systems can produce novel outputs. However, critics of generative AI have noted the ‘generic’ or ‘samey’ nature of AI art. This raises the question of whether generative AI systems are really capable of producing adequate novelty to be considered truly creative. This paper argues that popular generative AI systems do indeed have a novelty problem. The perceived banal nature of AI outputs can be explained by a lack of ‘originality’. Originality is often seen as synonymous with novelty, but it can be distinguished as salient novelty (see Gaut). This paper puts forward an account of what would count as salient newness in AI images. Given the vagueness of ‘salience’ in this definition, we can utilise work by Sibley on originality to uncover three criteria for salience in novelty: it must be relevant; it must be attributable to the creator; and it must exhibit a moderate to significant degree of variation from prior works. In considering the future of machine creativity, I suggest an increased focus on this third criterion: mechanisms of variation. The requirement for variation in Sibley’s account not only thickens the account of originality but also meshes with the evolutionary account of creativity; however, variation in evolutionary processes is often minimal in the short term. For the more extreme or transformative kinds of creativity, we need larger variations. This paper therefore discusses the need for significant variation for originality in computational creativity.
View Bio
Annette Zimmermann’s research interests cover a range of topics within the philosophy of AI and machine learning, political philosophy, moral philosophy, social and moral epistemology, philosophy of law and philosophy of science. As an Assistant Professor of Philosophy at UW-Madison, Zimmermann serves as a RISE-AI Thought Leader, is a member of the University’s interdisciplinary cluster in the ethics of computing, data, and information, and an Affiliate Professor at the Department of Statistics. Zimmermann also co-leads the Uncertainty & AI group at the Institute for Research in the Humanities.
Before joining UW-Madison’s Department of Philosophy, Zimmermann was a 2020-2023 Technology and Human Rights Fellow at the Carr Center for Human Rights Policy at Harvard University. In addition, Zimmermann was a permanent Lecturer (US equivalent: Assistant Professor) at the Department of Philosophy at the University of York in the United Kingdom (2020-22), and a postdoctoral fellow at Princeton University (2018-20), with a joint appointment at the Center for Human Values and the Center for Information Technology Policy. Zimmermann holds a DPhil (PhD) from the University of Oxford (2018).
Zimmermann's first book is titled Democratizing AI (forthcoming 2025). Zimmermann’s research has been published in the Canadian Journal of Philosophy and in Philosophy and Public Affairs, and their recent public writing has appeared in the New Statesman and in the Boston Review. Zimmermann frequently advises policy-makers and technologists working on contemporary ethical and political issues surrounding AI and other forms of technology, including UNESCO, the OECD, the Australian Human Rights Commissioner, the German Aerospace Center, the German Federal Ministry for Economic Affairs and Energy, the UK Parliament, and the UK government’s Centre for Data Ethics and Innovation. Zimmermann has previously held visiting fellowships at the ANU, Yale University, and the Weizenbaum Institute in Berlin, and has received the American Philosophical Association's Public Philosophy Award as well as the Hastings Center's Science, Ethics, and Society Essay Prize.
https://www.annette-zimmermann.com/
View Abstract
Contemporary discourse on the ethics of AI-powered creativity focuses on demonstrable harm: technological deskilling, job displacement, authorship controversies. These concerns share a common structure: they identify clear winners and losers, groups that suffer concrete disadvantages.
However, this harm-centric framework cannot capture subtler ethical challenges in ostensibly mutually beneficial ‘win-win’ arrangements. Consider FN Meka, the AI-generated rapper created using voice synthesis trained on human vocal data. While the source artist received compensation, this exemplifies wrongful exploitation despite mutual benefit. His creative labor—vocal patterns, stylistic elements, expressive nuances—generated millions for technology companies and record labels, while he received a fraction of that value.
Cases like this involve unfair advantage-taking as understood in the philosophical literature on exploitation. A popular, albeit not uncontroversial, philosophical position defines exploitation as one agent's disproportionate extraction of value from another's productive (or creative) labor. This problem has interesting and important implications beyond individual cases, and it helps us adopt a more nuanced conceptualization of AI’s structural societal impact, in which Big Tech companies leverage concentrated wealth, power, and information resources to secure disproportionately favorable terms with creators. Companies may point to ‘win-win’ benefit distributions to justify such arrangements. This concentration simultaneously results from exploitation and could enable further exploitative arrangements, possibly creating self-reinforcing inequality cycles if left unchecked.
In this paper, I build on and extend the sophisticated yet underexplored theoretical resources that contemporary analytic political philosophy can offer for analyzing the subtle problem of wrongful ‘win-win exploitation’ cases. Recognizing exploitation as a distinct ethical category provides crucial analytical tools for evaluating AI creativity's moral landscape, revealing that win-win scenarios may nonetheless involve (at least pro tanto) ethical wrongs requiring normative scrutiny beyond simple harm-based metrics. My argument does not imply that AI creativity is ethically doomed; just that it requires more nuanced, and possibly surprising, guardrails.
View Bio
Patrik Engisch is a post-doctoral researcher at the University of Geneva. His main research interests are the philosophy of mind, aesthetics, and the philosophy of food. He has published papers on creativity, fiction, empathy, and food. He is the co-editor of two volumes, A Philosophy of Recipes: Making, Experiencing, and Valuing (Bloomsbury, 2022) and The Philosophy of Fiction: Imagination and Cognition (Routledge, 2023). He is also the current director of the Association for the Philosophical Study of Creativity (www.aps-creativity.com).
View Abstract
Standard theories of creativity possess two features. First, they are monistic: for them, questions of the form “Is x creative?” or “Does creativity matter?” are to be answered by appealing to a unique, fundamental conception of creativity. By this, I don’t mean that they don’t recognize several notions of creativity; most of them do. Rather, I mean that they are committed to the claim that there cannot be more than one fundamental notion of creativity. Second, their conception of creativity is kainotic (from the Greek kainos, “new”). That is, they assume that the fundamental notion of creativity, the one in virtue of which all the others can be defined, must be understood in terms of unprecedented novelty. Margaret Boden’s canonical definition of creativity, for instance, clearly displays these two commitments: her notion of psychological creativity is both fundamental and defined in terms of psychological processes that, as a type, are responsible for the generation of unprecedented novelty. In my presentation, I will argue that we have both conceptual and empirical reasons to reject these two aspects of standard theories of creativity. A better theory of creativity should be a pluralistic one, where different fundamental notions of creativity collaborate with each other. In particular, I will distinguish between kainotic and non-kainotic fundamental conceptions of creativity and will sketch an account of how these could be combined to offer us a better account of what creativity is and of why it matters.
View Bio
Nora Al-Badri is a multi-disciplinary and conceptual media artist with a German-Iraqi background. Her works are research-based as well as paradisciplinary, and as much post-colonial as post-digital. She lives and works in Berlin. She graduated in political science from Johann Wolfgang Goethe University in Frankfurt/Main and is now a lecturer at ETH Zurich. Her practice focuses on the politics and the emancipatory potential of new technologies such as machine intelligence or data sculpting. Al-Badri’s artistic material is a speculative archaeology, from fossils to artefacts, as well as performative interventions in museums and other public spaces that respond to their inherent power structures.
View Abstract
Through her work Nora Al-Badri will try to show how it is possible to subvert the dangers of AI into emancipatory moments - moments that can help us envision and create more anticolonial, positive and just futures. She will focus on a tradition of institutional critique in the arts as well as spaces such as museums and their digital collections where our collective (cultural) knowledge and histories are narrated and preserved in very contested ways.
View Bio
Shruti Kakade is a Master of Data Science student at the Hertie School. With a background in computer engineering, she is interested in AI ethics, digital governance, responsible AI, and related regulations.
View Abstract
As creative sectors increasingly adopt generative AI for tasks ranging from ideation to full-scale multimedia production, concerns about the long-term impact on human expertise and system reliability have intensified. Generative AI’s rapid integration into creative workflows poses two interrelated challenges: the erosion of specialized human skills (deskilling) and the progressive degradation of model performance (entropic collapse). Huang, Jin, and Li (2024) demonstrate that AI-assisted tools can substantially enhance novice outputs while marginalizing expert practitioners, leading to a narrowing of creative expression. Shumailov et al. (2024) reveal that models retrained on their own synthetic data suffer marked declines in output diversity and predictive accuracy, undermining trust in generative quality. Although these challenges have been examined separately, there is a pressing need for unified operational guidance. Building on empirical evidence, this paper outlines targeted Skill-Retention Strategies, including staged human oversight during ideation, structured iterative feedback loops, and modular competency-building exercises, to preserve domain expertise alongside AI fluency. It also formulates Data Purity Guidelines that mandate regular provenance audits, transparent labeling distinguishing human from synthetic inputs, and minimum thresholds of human-generated data in training corpora. By translating these research-based insights into actionable protocols for creative professionals, AI vendors, and policymakers, our work seeks to sustain the vitality of human artistry while safeguarding model robustness. Widespread adoption of these measures promises to foster a balanced creative ecosystem in which generative AI amplifies rather than eclipses human ingenuity, ensuring sustainable innovation and cultural diversity in the AI era.
View Bio
With a background in cognitive psychology, Thomas completed his PhD studying the evolution of language via behavioral experiments. Currently, he is a Research Scientist at the Center for Humans and Machines at MPIB, where he is investigating how artificial intelligence might affect human cultural evolution.
View Abstract
Generative machine learning models' increasing prevalence and capacities are transforming creative processes. We identify two commonly voiced threats to professional artists: i) Barrier Reduction, i.e. AI enabling laypeople to engage in creative work; and ii) Autonomous Creativity, i.e. a reduced need for human input in (semi-)autonomous AI systems. Has AI already leveled the playing field between professionals and laypeople? We address this question by experimentally comparing 50 professional artists and a demographically matched sample of laypeople. To this end, we designed two tasks that approximate artistic practice in both faithful and creative image creation: replicating a reference image, and moving as far away as possible from it. We developed a bespoke platform where participants used a modern text-to-image model to complete both tasks. Artists, on average, produced more faithful and creative outputs than their lay counterparts, highlighting the continued value of professional expertise, even within the confined space of generative AI itself. We also explored how well an exemplary vision-capable large language model (GPT-4o) would complete the same tasks. It performed on par in copying and slightly better on average than artists in the creative task, although not above the top performers in either human group. These results highlight the importance of integrating artistic skills with AI training to prepare artists and other professionals for a technologically evolving landscape. We see potential in collaborative synergy with generative AI, which could reshape creative industries and education in the arts, just as other technologies like photography have done before.
View Bio
Hassan Abu Alhaija is a Senior Research Engineer at NVIDIA Toronto AI Lab based in Heidelberg, Germany. He earned his doctorate in Machine Learning and Computer Vision from Heidelberg University. His research intersects Machine Learning and Graphics, primarily focusing on the development of ML tools to streamline 3D creation and rendering and making these technologies more accessible and intuitive for users in various fields.
View Abstract
Generative AI has swiftly and profoundly transformed the creative art landscape, rendering traditional tools seemingly obsolete and causing significant shifts in how artists approach their craft. This rapid evolution has led to widespread misconceptions, particularly among those who closely associate artistic value with the tools used, leading some to believe that AI might replace aspects traditionally considered exclusive to human creativity. However, generative AI fundamentally represents a radically different category of artistic tool—one capable of significantly expanding an artist's expressive potential rather than replacing creative talent itself. The confusion and anxiety around AI's role in art primarily arise from the current limitations in interacting with generative models, which predominantly rely on simplistic text prompts. The limited expressive capacity of text alone underscores the urgent necessity for more sophisticated interaction mechanisms, especially for detailed creative vision communication in image and video generation. Future developments must focus on creating interactive, multimodal tools that facilitate deeper and richer communication between artists and AI models. These tools will need to incorporate a thorough understanding of physical principles, human contexts, and three-dimensional spatial awareness. Additionally, artists will require innovative methods for capturing and integrating real-world contexts with AI-generated content, ensuring seamless blending between generative outputs and captured reality. This talk will explore recent advancements in interactive generative AI tools, highlighting promising directions toward enhanced, intuitive, and artistically empowering human-AI collaboration.
View Abstract
Generative AI is entering contemporary artistic practices. This talk explores how artists are harnessing these neural networks for creative expression across diverse media practices, including image, video, text, and form generation. Drawing on practice-based research, we examine both the potential and the limitations of current generative AI technologies. We argue that the importance of small, open AI models hinges on the artist's creation of assemblages: unique configurations of AI components that realize a distinct voice and artistic vision. Such open models are therefore key to artistic exploration in media art.
View Bio
Caterina Giannetti is an economist working in human-robot interaction, with a focus on trust, collaboration, and emotional responses in human-AI teams. Her interdisciplinary research investigates how artificial agents that display emotions and high levels of embodiment influence social dynamics and joint decision-making, especially in contexts that require creativity and adaptability.
View Abstract
In this talk, we will present findings from our interdisciplinary study on how people collaborate with artificial agents that display social behavior during non-routine analytical tasks. Unlike routine tasks, these require key ingredients of creative teamwork such as flexibility, adaptation, and cooperative problem-solving. To investigate this, we developed an escape room game where participants solved a variety of challenges requiring different skills, either with other human partners or in a mixed team that included a socially responsive AI agent. We analyzed how team composition affected performance, measured by escape time and number of errors. We also examined the dynamics of their conversations to identify patterns of effective collaboration. In a follow-up phase, we extended the study by collecting physiological data, including heart rate and skin conductance, to better understand participants' unconscious emotional responses during collaboration. This multimodal approach allowed us to explore not only what people do in human-AI teams, but also how they feel. We will discuss how these insights contribute to the design of emotionally responsive AI partners that can adapt in real time to human cognitive and affective states. Our findings highlight the central role of emotional and social interaction in creative processes and point toward the development of AI systems that support and foster human creativity.
View Bio
Liat Grayver is a Berlin-based cross-disciplinary painter and media artist investigating methods to redefine one of the primitive forms of art, painting, within the current technology-based era. Grayver is the Artistic Director of, and works as an Artistic Researcher within, the project EACVA (Embodied Agents in Contemporary Visual Art), a multidisciplinary collaboration between artists, philosophers, psychologists and computer/robotics engineers dedicated to exploring the use of robots as an interactive painterly tool and their influence on creativity, authorship and agency in artificial systems. Since January 2016 she has been exploring various approaches to integrating robotic and computer languages into the processes of painting and creative image-making. She is an active member of SALOON — Network for Women of Berlin’s Art Scene and an associate artist researcher at the Epistemologien ästhetischer Praktiken programme at the ETH Zürich.
View Abstract
In an era dominated by simulated AI, the tangible and embodied dimensions of intelligence—rooted in the physical body, time, and materiality—are increasingly overlooked or forgotten. This talk focuses on the intersection of painting, computer graphics, and robotics as tools for exploring embodied knowledge and creativity in the age of AI. Through the presentation of selected art projects that merge biologically inspired robotic systems, deterministic robotic processes, and human-machine interactive painting methods, the talk will investigate new forms of hybrid human-machine creativity and the practical implications of this convergence. Emphasizing the growing divide between digital (simulated) and analog (physical) realms, it will question how this shift impacts a society becoming increasingly disconnected from embodied experience. The talk will feature artworks from the Embodied Agent in Contemporary Art project as a case study, showcasing how these emerging technologies challenge and redefine the very nature of creativity.
View Abstract
AI has become an umbrella term for a myriad of data-driven technologies and forms of generative media production. These technologies have revealed forms of human exceptionalism to be, more than anything, narratives, framings, and constructions, which have real, material implications. Notions of creativity, intelligence, art, science and so on are therefore prone to being reconfigured and performatively brought forth by human-machine collaboration. Drawing on the experiences and collaborations of AI: Ancestral Immediacies, my contribution will reflect on artistic practice as media theory in times of AI.
View Abstract
Can generative AI be genuinely creative, or does it merely remix existing human ideas? While AlphaGo revolutionized the game of Go with strategies previously unimagined—and now adopted by expert players—the question remains open for more open-ended domains like art. In this talk, we present a framework for studying creativity and cultural evolution in the visual arts through generative AI. We introduce a large-scale dataset of 1,114 artists spanning the 1400s to 2000s, from which we derive “style embeddings” using textual inversion on Stable Diffusion’s CLIP component. Our analysis reveals that artists tend to cluster by historical era, while the convex hull of styles expands over time, suggesting the continual emergence of new stylistic directions. We further model each artist’s style as a linear combination of their predecessors, showing that many styles can be explained by a small subset of influential forerunners. However, we argue that numerous potentially compatible artistic concepts remain unexplored, not because they are fundamentally incompatible, but because of historical, social, and cognitive biases. Generative AI, despite its sweeping training data, inherits these biases from human culture. Inspired by research in algorithmic scientific discovery, we propose a system designed to counteract these biases in order to unveil “culturally inaccessible” concept combinations—such as Renaissance-style airplanes—that lie outside the historical record. Our results highlight the potential for AI to transcend cultural constraints, offering new avenues for examining and shaping the future of artistic innovation.
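The decomposition step described in this abstract, modeling an artist's style as a linear combination of predecessors, can be illustrated with a minimal sketch. This is not the authors' actual pipeline: the 16-dimensional vectors are hypothetical stand-ins for CLIP-derived style embeddings, and all names, dimensions, and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for style embeddings (real ones would come from
# textual inversion on a CLIP text encoder and be much higher-dimensional).
dim = 16
predecessors = rng.normal(size=(5, dim))  # five predecessor style embeddings

# Construct a target style that truly is a sparse mix of two forerunners plus
# a little noise, mirroring the claim that many styles are explained by a
# small subset of influential predecessors.
true_weights = np.array([0.7, 0.0, 0.3, 0.0, 0.0])
target = true_weights @ predecessors + 0.01 * rng.normal(size=dim)

# Fit mixing weights by ordinary least squares:
# minimize || predecessors.T @ w - target ||.
weights, *_ = np.linalg.lstsq(predecessors.T, target, rcond=None)

reconstruction = weights @ predecessors
error = float(np.linalg.norm(reconstruction - target))
print(weights.round(2), round(error, 3))
```

With well-separated predecessor embeddings the fit recovers the sparse mixture almost exactly; in practice, sparsity-encouraging variants (e.g. non-negative or lasso regression) would make the "small subset of forerunners" reading more direct.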
View Bio
Dustin Eirdosh is the co-founder of the OpenEvo educational innovation lab within the Department of Comparative Cultural Psychology at the Max Planck Institute for Evolutionary Anthropology. His work bridges educational research, evolutionary science, and sustainability, with a focus on how human behavior is taught and learned across cultures. He leads the development of Computational Curriculum Studies (CCS), an emerging field exploring how AI and cultural evolution shape curriculum design and the future of creativity. His work spans curriculum development, conceptual learning research, and participatory approaches to reimagining education as a dynamic knowledge system.
View Abstract
Is human creativity the product of exceptional individuals, or does it emerge from the decentralized, cumulative processes of cultural evolution? The way we answer this question has deep implications for how we design school curricula—especially in an era where generative AI challenges traditional notions of originality and expertise. In this talk, I draw from the emerging field of Computational Curriculum Studies (CCS) to examine competing causal models of creativity and their influence on educational design. I begin by contrasting agential and selectionist accounts of creative innovation, positioning them within a broader evolutionary framework. To reconcile these perspectives, I introduce the concept of Integrated Causal Reasoning (ICR)—a systems-level approach that models creativity as arising from the dynamic interaction of cognitive, social, and technological processes.
From this perspective, CCS serves as a framework for both designing curriculum to foster computational thinking—including pattern recognition, abstraction, generalization, and modeling—and for critically applying computational tools to analyze and iteratively improve curriculum structures themselves. These metaconceptual competencies are not only essential to programming or AI literacy, but lie at the heart of human-machine co-creative capacities. I explore how students can engage with creativity as a cultural and epistemic phenomenon by building causal models and curricular resources that trace the emergence of scientific and technological innovation across species and societies. This dual focus of CCS—on fostering creativity through computational thinking, metaconceptual competencies, and reimagining curriculum as a participatory design space—offers a powerful yet complex landscape for cultivating creativity at individual, collective, and global scales.
View Abstract
As artificial intelligence continues to reshape creative industries, the relationship between human and machine creativity demands deeper exploration. This presentation will examine key questions surrounding AI’s role in augmenting creative processes, drawing on insights from my research and artistic practice. My work spans AI-human collaboration in solo and team creativity, the neuroscience of creativity, and neurodesign—an approach that integrates cognitive science into AI system design. A central focus will be on how AI can enhance human creativity, not simply by automating tasks but by facilitating cognitive states such as Flow, a critical condition for peak creative performance. Through performance-based neuro-art, I use wearables to stream neurophysiological data, which is classified by AI and transformed into audio-visual outputs designed to encourage and sustain Flow states. This approach highlights how computational systems can dynamically interact with human cognition to expand creative potential. Beyond individual creativity, the talk will briefly explore how AI is reshaping artistic practice and cultural transmission, along with key challenges like deskilling, model collapse, and shifts in creative expertise. Finally, I will discuss how human behavior and cognitive insights can inform the development of future AI tools, ensuring that they amplify rather than diminish the richness of human creative expression. Through an interdisciplinary lens, this presentation will offer new perspectives on the evolving synergy between human ingenuity and machine intelligence.
View Bio
Filippo is a professor of ethics of technology at Eindhoven University of Technology, the Netherlands. He obtained his PhD in philosophy at the University of Torino, focusing on moral and legal responsibility. Since moving to the Netherlands in 2012, he has specialized in the ethics of technology, particularly AI ethics. From 2012 to 2024, Filippo worked as a researcher and lecturer at TU Delft, before joining TU Eindhoven as a professor in 2024. His recent work addresses AI in the workplace, meaningful human control, responsibility gaps, and the design of technology for democracy. He is the author of Human Freedom in the Age of AI (Routledge, 2024) and co-editor of the Research Handbook of Meaningful Human Control over AI Systems (Elgar, 2024), a multidisciplinary volume in philosophy, law, and engineering.
View Abstract
Creativity is a political act. By taking creative action we assert the possibility of change: we verify that we can participate in the construction of the (social) world, challenge received views and practices, and create new forms of common life. To realise its political potential, creativity requires that persons have a direct, embodied experience of change, of their potential role in it, and of the challenges that come with it. All democratic education, claims the French philosopher Jacques Rancière, is an aesthetic experience. It is an open question to what extent AI-mediated art and aesthetic experience can enhance, support, or rather endanger the political potential of art and creativity. That crucially depends on how much AI will support the embodied experience of injustice and the possibility of achieving direct participation in building new social practices and social change. In this philosophical talk, I will rely on literature at the intersection of radical pedagogy, democratic theory, and aesthetics (Dewey, Freire, Rancière) to start sketching a normative framework to assess the impact of AI on aesthetic experience and creativity. At the centre of the framework is the question of to what extent AI can improve our democratic skills or rather create “political deskilling”, since political skills crucially depend on our ability to have meaningful embodied experiences.
View Bio
Suneel Jethani is a Senior Lecturer in Digital Media at the University of Technology Sydney. His research focuses on the politics of data-driven technologies, critical data studies, design ethics and the relationship between media theory and the body. His work has been published in journals including Continuum, Persona Studies, Communication, Politics & Culture, Cultural Studies, Body, Space & Technology, Griffith Review, Conjunctions: Transdisciplinary Journal of Cultural Participation and un Magazine.
View Abstract
In Autonomous Technology: Technics-out-of-Control … (1977), Langdon Winner notes that ‘technology is a source of concern because it changes in itself and because its development brings other kinds of changes in its wake’. Around the time Winner was writing on autonomous technologies as ‘engines of change’ (1977: 44-100), Herbert A. Simon won the 1978 Nobel Prize in Economics for his theory of bounded rationality, whose keystone concept, ‘satisficing’, has become central to AI. The term is a portmanteau of what is satisfactory and what will suffice, and it describes a decision-making strategy of searching through the available alternatives until an acceptability threshold is met. The notion of a quantifiable acceptability threshold, albeit within a fragile, risk-laden environment, underwrites the logic of automated, data-driven creative technologies. This paper examines the socio-technical trade-offs embedded in the creation, consumption and interpretation of machine-made creative works, and develops an analytical framework for thinking through the dialectically instrumental and political dimensions of human-machine creativity, taking into account elements including nomenclature, categorisation, (de)formation, purpose (versus play), design (versus chance) and pataphysics (versus realism and objectivity). The overall argument is that once the logic of a visual system is established, it is hard to work (and think) outside of it, and that this has deep epistemo-political consequences. The paper offers some pathways out, attentive to aesthetic encounters caught between the embodied and instinctual on one side and the flow-based and computational on the other.
View Bio
Helena Nikonole is a new media artist, independent curator, researcher and educator currently based between Berlin and Istanbul. Her interests embrace AI, hacktivism, hybrid art and bio-semiotics. One part of her practice is dedicated to utopian scenarios of a post-human future, while another focuses on the dystopian present and a critical approach to technology. She gives talks, lectures and workshops in the fields of Art & Science and AI & Art at institutions including the transmediale festival (Berlin), Paris College of Art, Art Laboratory Berlin, Mutek Festival (Montreal and Tokyo), Leiden University, iMAL (Brussels) and many others. In 2025 she received a one-year fellowship from the Berlin Senate for her artistic research on AI and political ideologies.
View Abstract
As artificial intelligence technologies reshape how language is structured, circulated, and commodified, they underscore that language is not merely a medium of communication but a primary instrument through which power is manifested, exercised and reinforced. This paper, SolidGoldMagikarp: Artistic Interventions in the Political Latent Space of Large Language Models, presents artistic research into the political, epistemological, and ideological dimensions of Large Language Models (LLMs) through experimental misuse, critical coding, and speculative hacking. I approach LLMs as more than tools: they are complex, opaque, techno-social systems that encode worldviews, political ideologies, and sociotechnical imaginaries, often invisibly, subtly and seamlessly. The project, rooted in a critical art practice, engages with models such as ChatGPT, Claude, Gemini, DeepSeek and YandexGPT as both media and material. I also work with LLM-based text-to-image and text-to-3D diffusion models. By pushing these systems beyond their intended use, generating images from abstract or politically charged prompts, forcing ideological consistency through recursive questioning, and testing censorship boundaries, I aim to map the latent space not only as a technical construct but as a field of power. In this way, I examine how language is fragmented, tokenized, and reformatted for profit, control, and ideology.
View Bio
Dorothy Yuan is an undergraduate researcher in Information Systems (Computing) at the National University of Singapore, currently appointed to the Education wing of SMU’s Institute of Innovation & Entrepreneurship. Her work explores the intersections of science and technology studies (STS), digital culture, and computing ethics. Recent projects include an autoethnographic study of authorship in GAN art and ongoing research on how embedded ethics pedagogy shapes computing students’ identity and ethical reasoning. Dorothy has been involved in research projects at Stanford University, NUS School of Computing, and Tembusu College, and has presented at the University of York’s Science, Technology and Society (STS) and Responsible AI seminar. Her publication in the Yale-NUS Journal examined sociopolitical critique through Dadaism and internet memes. Dorothy is a published poet and former visual artist.
View Abstract
As generative computational systems such as Generative Adversarial Networks (GANs) become increasingly integrated into creative artistic workflows, longstanding assumptions about authorship, originality, and creative control are being challenged. While current debates around intellectual property and AI-generated art often centre on input–output analysis, artistic traditions, theories of human action or legal attribution, this paper proposes a shift in methodological focus toward the practice of making generative art, centring on algorithm-artist interaction. Using “Follow the Artist”, an autoethnographic method combining aspects of traditional ethnography, qualitative inquiry and digital ethnography, we document the experience of re-creating a GAN-based artwork by following the practice of an existing GAN artist, capturing embodied interactions involving the generation of datasets, model training, and the use of interfaces. Through the analysis of autoethnographic memos, we identify three areas of algorithmic opaqueness that shape the generation of GAN art: conception (human mental models of the algorithm), interaction (human gaps in understanding algorithmic logic), and workflow (step-by-step technical dependencies that structure decision-making). Our findings suggest that authorship in GAN art is governed not by a singular, static intent or purpose but by situated practices, interface mediation and the negotiation of opaqueness. By recentring the art-making process, this paper contributes a method for understanding authorship, originality and creative control in AI art, and offers an alternative foundation for future debates on the nature of agency.
View Bio
Dr Connor Graham is a Senior Lecturer and the Director of Studies at Tembusu College and a Research Fellow at the Science, Technology and Society Research Cluster at the National University of Singapore. He has published two co-authored books, ten peer-reviewed special issues, over 30 peer-reviewed articles, and seven book chapters with Routledge and Springer. His research interests are currently in AI and society and smart cities. His current work centres on a people-based approach to understanding AI and smart cities, their narratives, and their futures.
View Abstract
Philosophical traditions linking vision and knowledge have long shaped our conceptions of humanity. Yet these are now challenged by neuroscience’s revelations about the bodily underpinnings of imagination, as well as by AI’s capacity to generate synthetic images from textual prompts. If vision once epitomized a direct route to knowledge, present-day technologies entangle sight with computation, prompting a thorough re-examination of originality, authorship, and the nature of understanding. This research investigates how the interplay between human cognition and AI-driven imagery reshapes both personal and collective imagination, thereby reframing centuries-old assumptions about visual perception as a privileged conduit to meaning. By bridging “individual” tastes with vast collective archives, generative AI unsettles how cultural artifacts are created, shared, and interpreted, fuelling philosophical inquiry into how our imaginative faculties may be subtly reconfigured through machine-driven “cognitive hacking.” Indeed, text-based interactions with algorithmic latent spaces challenge conventional boundaries of novelty, often blurring the line between genuinely new vision and recycled pattern-based recombinations. In this evolving synergy between human and machine, biases embedded in training data become potent creative constraints, raising ethical and epistemic questions around shifting norms of authorship and artistic intent. Ultimately, as neural models encapsulate and re-model culture, AI-driven imagery emerges not merely as a practical tool but as a transformative force that alters how we see, think, and know. By reordering the transmission of cultural meaning and recasting vision’s once-straightforward link to truth, generative AI compels us to reconsider the conditions under which knowledge is produced, unveiling fresh possibilities and unforeseen perils for creativity and perception alike.
View Bio
Jasmin Pfefferkorn is a Melbourne Postdoctoral Research Fellow in the School of Culture and Communication at the University of Melbourne. Her current research project is 'The Impact of Generative Technologies on Museums' Practice'. Jasmin is an Executive Member of the Research Unit in Public Cultures, on the steering committee for the Art, AI and Digital Ethics research collective at the Centre for Artificial Intelligence and Digital Ethics, and the co-founder and director of the research group CODED AESTHETICS. Previously, Jasmin has worked on two ARC projects, 'Digital Photography: mediation, memory and visual communication', and 'Creating The Bilbao Effect: Museum of Old and New Art (MONA) and the Social and Cultural of Urban Regeneration Through Arts Tourism'. From 2014-2023 she was a tutor, lecturer and subject coordinator for subjects within the Media and Communication program. She holds a PhD from the University of Melbourne on emergent museum practice and is the author of 'Museums as Assemblage'. Her interdisciplinary research spans museum studies, critical AI, visual culture, and human-machine aesthetics.
View Bio
Matthew J. Dennis is an Assistant Professor in Ethics of Technology at TU Eindhoven. His research focuses on how emerging technologies, such as artificial intelligence, challenge our notions of creativity, autonomy, and well-being. He also works on how intercultural perspectives on human flourishing can guide the design of emerging technologies. He was a Marie Sklodowska-Curie Research Fellow at TU Delft (2019–21) and an Early Career Innovation Fellow at University of Warwick (2019). He currently co-directs the Eindhoven Center for Philosophy of Artificial Intelligence, and is a Senior Fellow of the Ethics of Socially Disruptive Technologies research consortium. He received his Joint Monash-Warwick PhD in 2019.
Program Committee
Max Planck Institute for Human Development, Berlin
The University of Melbourne
Max Planck Institute for Human Development, Berlin
Review Committee
The University of Melbourne
Max Planck Institute for Human Development, Berlin
Max Planck Institute for Human Development, Berlin