We are Apart. We facilitate and grow ML safety research.

Through the Alignment Jam hackathons, we provide a fun and engaging environment for researchers, students, and engineers to experiment with new ideas in machine learning and language modeling.

We also develop the AI Safety Ideas platform and publish weekly updates on ML safety research.
Milestones

January 2023: Largest interpretability hackathon yet
December 2022: Presenting at NeurIPS and running the AI testing jam
November 2022: The first multi-location hackathon launches
September 2022: The first Alignment Jam launches in Aarhus

Our Team

Esben Kran
Head Organizer
Thomas Steinthal
Head of Operations
Fazl Barez
Researcher & Judge
Sabrina Zaki
Research & Communications

Contact Us

Send us a message through the form on the right, or reach us directly via the channels below.
Visit Us
Dokk21, Filmbyen 23, 2. tv, 8000 Aarhus, Denmark
Regus, 48 Charlotte St., London W1T 2NS
We never share your details with third parties.