Tilman Räuker
AI Alignment Research
Some of my projects:
I am the Cause Area Lead for technical AI safety at ERA this summer. Apply here; the deadline is 5 April.
Our paper Toward Transparent AI was accepted at SaTML 2023!
Participated in ARENA, a new Alignment Research Engineer Accelerator
Co-organized EAGxBerlin 2022
Thesis: On Temporally-Extended Reinforcement Learning in Dynamic Algorithm Configuration