OpinionGPT: Exploring the Impact of Biases on Large Language Models

  • Date: Sep 28, 2023
  • Time: 11:00 AM (Local Time Germany)
  • Speaker: Alan Akbik, Humboldt-Universität zu Berlin
  • Location: Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin
  • Room: ARC meeting room (199)
  • Host: Center for Adaptive Rationality (ARC)

Instruction-tuned large language models (LLMs) such as ChatGPT have recently showcased a remarkable ability to generate fitting answers to human questions. However, an open question is how biases in the training data affect an LLM's outputs. To investigate, we asked a simple question: What if we trained an LLM only with texts written by women? And what if we trained another LLM only with texts written by men? Would the two models give different answers? To explore this question, we identified 11 different biases, derived bias-specific training data, and fine-tuned 11 LLMs with the goal of making them "as biased as possible".

In this talk, I introduce OpinionGPT and connect it to current machine learning research in my group. In the first part, I discuss our work on fine-tuning LLMs and present the OpinionGPT web demo, where users can ask questions and compare the answers of differently biased models. In the second part, I present a concurrent line of research centered on entity linking, a core information extraction task relevant to many industrial use cases, and present our recent results.

Attend at the Max Planck Institute for Human Development or join online.

Zoom link

Meeting number: 686 5912 2236

Password: 045540
