Check out the results from the latest hackathon!
Alignment Jam
A weekend of intense, fun, and collaborative research on the most interesting questions of our day in machine learning safety. With speakers from top institutions and companies in artificial intelligence development, you will get a chance to work on impactful research.
AI safety research sprints!
A weekend of intense, fun, and collaborative research on the most interesting questions of our day in machine learning safety!
With collaborators and previous keynote speakers from...
Rewatch the keynote
Interpretability 2.0
AJ7 repeated the success of previous interpretability hackathons in collaboration with Neel Nanda! Participants got a chance to work on some of the hottest technical work in AI safety: mechanistic interpretability, the research effort to reverse-engineer what neural networks learn.
The Alignment Jam #6 was about AI governance. With 6-8 specific cases to work from, participants had the opportunity of a lifetime to engage with some of the major strategies for ensuring that machine learning systems remain a positive technology for humanity!
Mechanistic Interpretability and Scalable Oversight
The latest two Alignment Jams, from January and February 2023, were about mechanistic interpretability and scalable oversight! Participants reverse-engineered how neural networks understand information and developed novel ways to monitor AI, with guidance from our team and the wonderful Neel Nanda, Gabriel Recchia, and Ruiqi Zhong. Check out the keynotes and winning presentations below!
🎥 1st 🏆 We Found "an" Neuron
🎥 1st 🏆 Automated testing of AI system deception
🎥 Keynote talk for Mechanistic Interpretability by Neel Nanda
🎥 Keynote talk for ScaleOversight by Gabriel Recchia
Hack away in teams to learn & have fun!
In-Person & Online
Join events on GatherTown and Discord, or at our in-person locations around the world!
Live Mentorship Q&A
Our central team will be available on the hackathon Discord to help with any questions, practical or theoretical.
For Everyone
You can join midway through if you are short on time, and we provide code starters, ideas, and inspiration; see an example.
Awards & Next Steps
We will help you take the next steps in your research journey: publishing, programmes, mentorship, and more.
Organize a local hackathon with help from us
The in-person hubs for the Alignment Jams are run by passionate individuals just like you! We organize the schedule, speakers, starter templates, and funding, so you can focus on engaging your local research and engineering community. If you check out the hackathon pages at the top, you can sign up to host a new jam site.
See what our great hackathon participants have said
Jason Hoelscher-Obermaier
Interpretability hackathon
The hackathon was a really great way to try out research on AI interpretability and get in touch with other people working on this. The input, resources, and feedback provided by the organizing team, and in particular by Neel Nanda, were super helpful and very motivating!
Luca De Leo
AI Trends hackathon
I found the hackathon very cool; I think it significantly lowered my hesitance about participating in things like this in the future. A whole bunch of lessons learned, and Jaime and Pablo were very kind and helpful through the whole process.
Alejandro González
Interpretability hackathon
I was not that interested in AI safety and didn't know that much about machine learning before, but I heard about this hackathon thanks to a friend, and I don't regret participating! I've learned a ton, and it was a refreshing weekend for me.
Alex Foote
Interpretability hackathon
A great experience! A fun and welcoming event with some really useful resources for starting to do interpretability research. And a lot of interesting projects to explore at the end!
Sam Glendenning
Interpretability hackathon
It was great to hear directly from accomplished AI safety researchers and try investigating some of the questions they thought were high impact.