Tilman Räuker
AI Alignment Research
Some of my projects:
Understanding search in Transformers - unsearch.org
Paper: A Configurable Library for Generating and Manipulating Maze Datasets
Paper: Linearly Structured World Representations in Maze-Solving Transformers, accepted at UniReps! (to be published soon)
Notes on AI safety papers I read: @safe_paper
Technical AI Safety Research Manager at ERA 2023
Paper: Toward Transparent AI, accepted at SaTML 2023
Participated in ARENA, the Alignment Research Engineer Accelerator
Co-organized EAGxBerlin 2022
Master's Thesis
On Temporally-Extended Reinforcement Learning in Dynamic Algorithm Configuration
Supervised by Theresa Eimer and Prof. Marius Lindauer
As a hobby, I like designing logos.