Humanizing Language Models: Exploring behavioral and brain data as language model inputs

  • Date: Sep 7, 2023
  • Time: 11:00 AM (Local Time Germany)
  • Speaker: Zakir Hussain, University of Basel
  • Location: Max-Planck-Institut für Bildungsforschung, Lentzeallee 94, 14195 Berlin
  • Room: ARC meeting room (199)
  • Host: Forschungsbereich Adaptive Rationalität

Language models are traditionally trained on massive digitized text corpora. However, leading AI labs have recently started incorporating more explicit forms of human data into LM training pipelines, with the goal of building models that have a better "understanding" of their users. Predominantly, this is done via human ratings of model outputs, but other sources of human data could be used in different ways. I will present two projects investigating alternative sources of data as inputs to language models.

The first project aims to understand differences in the content of language representations ("embeddings") trained from text, behavioral data (e.g., free associations), and brain data (e.g., fMRI). Using a method from neuroscience known as "representational similarity analysis", we show that embeddings derived from behavioral and neuroimaging data encode different information than their text-derived cousins. Furthermore, using an interpretability method that we term "representational content analysis", we find that behavioral embeddings in particular better encode dimensions relating to affect, perception, and socialness, which we view as critical for language models to form good models of human beings.
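The comparison described above can be sketched in a few lines. This is a minimal, hypothetical illustration of representational similarity analysis, not the authors' actual pipeline: the random arrays stand in for text-derived and behavior-derived word embeddings, and cosine distance and Spearman correlation are assumed choices of dissimilarity and comparison measure.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy stand-ins for embeddings of the same 50 words from two sources
# (real inputs might be, e.g., word2vec vectors and vectors derived
# from free-association norms).
text_emb = rng.normal(size=(50, 300))
behav_emb = rng.normal(size=(50, 100))

def rdm(embeddings):
    """Representational dissimilarity matrix, as a condensed vector
    of pairwise cosine distances between word vectors."""
    return pdist(embeddings, metric="cosine")

# RSA: rank-correlate the two RDMs. A low correlation suggests the
# two embedding spaces encode different relational information about
# the same words, even though the spaces themselves are not aligned.
rho, p = spearmanr(rdm(text_emb), rdm(behav_emb))
print(f"RSA (Spearman rho) = {rho:.3f}")
```

Because RSA compares distance structures rather than raw vectors, it works even when the two embedding spaces have different dimensionalities, as here (300 vs. 100).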

The second project aims to leverage the superior psychological content of behavior-based embeddings to improve the prediction of risk perception. We again find the distinct information encoded in behavioral embeddings to be useful, demonstrating one of many potential applications of such models in behavioral science and natural language processing more generally.
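A prediction setup of this kind might look as follows. This is a hedged sketch under assumptions, not the project's actual method: the embeddings and risk ratings are synthetic stand-ins, and closed-form ridge regression is an assumed choice of predictive model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 40 hazard words, each with a behavior-based
# embedding (random stand-ins here) and a mean human risk rating
# simulated as a noisy linear function of the embedding.
emb = rng.normal(size=(40, 20))
ratings = emb @ rng.normal(size=20) + rng.normal(scale=0.1, size=40)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Fit on the first 30 words, predict risk ratings for the held-out 10,
# and score by correlation between predicted and observed ratings.
w = ridge_fit(emb[:30], ratings[:30])
pred = emb[30:] @ w
r = np.corrcoef(pred, ratings[30:])[0, 1]
print(f"held-out correlation r = {r:.3f}")
```

The same scheme lets one compare embedding sources directly: swap in text-derived versus behavior-derived vectors for the same words and compare held-out correlations.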
