Alignment Jams
Check out previous hackathons, their locations, and the ideas behind them.
Previous Research
See earlier participants' projects by hackathon and location.
About & Contact
Check out who is behind the Alignment Jams and contact us.
Blog
Read our blog about the hackathons and some of their results.
Quick Links
Getting Started
For local organizers
Running a hackathon
Frequently asked questions
Media kit & marketing
Why run a hackathon?
Links
For participants & teams
Next steps
Become a mentor
AI Safety Ideas
We are Apart. Facilitating and growing ML safety research.
Through the Alignment Jam hackathons, we provide a fun and engaging environment for researchers, students, and engineers to experiment with new ideas in machine learning and language modeling.
We also develop the AI Safety Ideas platform and release weekly updates for ML safety research.
January 2023
Largest interpretability hackathon yet
December 2022
Presenting at NeurIPS and running the AI testing jam
November 2022
The first multi-location hackathon launches
September 2022
The first Alignment Jam launches in Aarhus
Our Team
Esben Kran
Head Organizer
Thomas Steinthal
Head of Operations
Fazl Barez
Researcher & Judge
Sabrina Zaki
Research & Communications
Contact Us
Send us an email using the contact form or reach us directly through the channels below.
Visit Us
Dokk21, Filmbyen 23, 2. tv, 8000 Aarhus, Denmark
Regus, 48 Charlotte St., London W1T 2NS
Get in touch
operations@apartresearch.com
+45 60 73 61 97
Github
Twitter
LinkedIn
Youtube
We never share your details with third parties.