Open Source Software


Open source software is developed at the Max Planck Institute for Human Development. The projects listed below are available for reuse free of charge:


Castellum

Castellum offers a centralised and secure solution for the data protection-compliant management of participant data throughout the entire study life cycle.

The core functions include the recruitment of participants using study-specific filters, the management of session dates, the generation of study-specific pseudonyms, and the targeted verification of the legal bases for data processing.

As an important building block in research data management, Castellum forms part of the portfolio of the institute's library and scientific information service.

Information about Castellum can be found here:


Ωnyx

Ωnyx is an open-source software tool for the visual creation and estimation of structural equation models (SEM). It features a user-friendly graphical interface that simplifies model creation, alongside a robust computational engine for maximum likelihood estimation of model parameters. Furthermore, Ωnyx integrates seamlessly with leading SEM software, including OpenMx, lavaan, and Mplus, through automated syntax generation.
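
For illustration, the kind of syntax Ωnyx can generate for lavaan might look like the following sketch of a simple one-factor model (generic lavaan code written for this page, not output copied from Ωnyx; the HolzingerSwineford1939 example dataset ships with lavaan):

    library(lavaan)

    # One-factor measurement model: a latent "visual" factor measured by x1-x3
    model <- '
      visual =~ x1 + x2 + x3
    '

    # Maximum likelihood estimation of the model parameters
    fit <- sem(model, data = HolzingerSwineford1939)
    summary(fit, fit.measures = TRUE)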

Website
GitHub


reproducibleRchunks

Statistical and computational results are at the heart of quantitative empirical research. Their credibility hinges on reproducibility, that is, the ability to obtain identical outcomes on different computers or on the same computer at a later point in time. In an influential study, Artner and colleagues were able to reproduce only 70% of published results from a sample of empirical studies in psychology (https://lnkd.in/emA_yBA5).

Various strategies have been proposed to strengthen reproducibility, with R Markdown standing out as a versatile language for generating reproducible research assets (such as reports, posters, or presentations), often used in combination with approaches to recreate a virtual computational environment (e.g., Docker or the renv package). But how do we test whether analyses reproduce without changing R users' workflows?
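
For context, the renv workflow mentioned above typically involves only a few calls (a minimal sketch of standard renv usage, independent of any particular project):

    # Create a project-local package library and lockfile
    renv::init()

    # Record the exact package versions in use in renv.lock
    renv::snapshot()

    # Later, or on another machine: reinstall exactly those versions
    renv::restore()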

The R package reproducibleRchunks was designed to address this challenge. It introduces a novel code chunk type within R Markdown documents. These "reproducibleR" chunks automatically store metadata about original computational results and verify later reproduction attempts. With minimal workflow adjustments, researchers can increase transparency and trustworthiness in their digital research assets.
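
A minimal R Markdown sketch of how such a chunk might be used is shown below (this assumes the chunk engine is registered under the name reproducibleR once the package is loaded, as described above; option names and the format of the stored metadata follow the package's defaults and may differ in detail):

    ```{r setup}
    # Loading the package makes the reproducibleR chunk engine available
    library(reproducibleRchunks)
    ```

    ```{reproducibleR analysis}
    # Ordinary R code; the results of this chunk are fingerprinted on the first run
    set.seed(42)
    x <- rnorm(100)
    result <- mean(x)
    ```

On a first knit, metadata about the chunk's results is stored alongside the document; on later knits, the chunk is re-run and the new results are checked against that metadata, so failed reproduction attempts become visible.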

Our latest PsyArXiv preprint, including examples and documentation, is available here: https://t.co/J4pSSaD0N2. The R package can be downloaded here: https://lnkd.in/eriRs98z


Transformer Heads

This library aims to be an all-round toolkit for attaching, training, saving, and loading new heads for transformer models.

A new head could be:

  • A linear probe used to understand information processing within a transformer architecture
  • A head to be fine-tuned jointly with the weights of a pretrained transformer model to perform a completely different kind of task.
    • E.g., a transformer pretrained for causal language modelling could have a sequence classification head attached and be fine-tuned for sentiment classification.
    • Or one could attach a regression head to turn a large language model into a value function for a reinforcement learning problem.

On top of that, attaching multiple heads at once makes multi-task learning straightforward and enables the training of very general models.

GitHub
