Alignment Jams
Check out previous hackathons, their locations, and the ideas behind them.
Previous Research
See earlier participants' projects by hackathon and location.
About & Contact
Check out who is behind the Alignment Jams and contact us.
Blog
Read our blog on what the hackathons are about and what some of the results are.
Quick Links
Getting Started
For local organizers
Running a hackathon
Frequently asked questions
Media kit & marketing
Why run a hackathon?
Links
For participants & teams
Next steps
Become a mentor
AI Safety Ideas
All sprints
Check out all the results from previous hackathons and participate in future ones!
Official Alignment Jam
February 9, 2024
Finished! Check out the results
Multi-Agent Security AI Research Sprint
Join us to investigate and understand multi-agent security (MASec) in frontier AI!
Official Alignment Jam
January 5, 2024
Finished! Check out the results
AI Governance Sprint
Join us to work with engaged individuals across the globe on the most important questions of AI governance.
Official Alignment Jam
November 24, 2023
Finished! Check out the results
Model Evaluations Hackathon
Join us with Apollo Research to expose the ways artificial intelligence can pose high risks in the real world!
Official Alignment Jam
November 11, 2023
Finished! Check out the results
AI Safety Entrepreneurship Hackathon
Will you be one of the 40 exceptional AI/ML engineers, researchers, and students passionate about this field taking part in our next AI Safety Hackathon in the Netherlands?
Official Alignment Jam
September 29, 2023
Finished! Check out the results
Multi-Agent Safety Hackathon
As AI systems proliferate and become increasingly agent-like, they will interact with each other and with humans in new ways. These new multi-agent systems will create entirely new risk surfaces.
Official Alignment Jam
September 8, 2023
Finished! Check out the results
Agency Foundations Challenge
The Agency Foundations Challenge is hosted by agencyfoundations.ai and explores how IRL/RL, game theory, and mechanistic interpretability can help preserve human agency both in the pursuit of, and in the presence of, superhumanly intelligent AI systems. Join the kickoff hackathon happening from the 8th to the 10th of September.
Official Alignment Jam
August 25, 2023
Finished! Check out the results
Distillation Write-a-thon 2.0
Join us for the Distillation Write-a-thon, where you are tasked with writing more digestible versions of existing literature in AI safety research and theory!
Official Alignment Jam
August 18, 2023
Finished! Check out the results
Evals hackathon
Evaluating large models is becoming increasingly important. In this research sprint, we look for more and better ways to evaluate large models before they are deployed. Join us!
Official Alignment Jam
July 14, 2023
Finished! Check out the results
Interpretability Hackathon 3.0
Interpretability is an increasingly promising field of research for understanding and reverse-engineering what the black boxes of neural networks learn. Join us with some of the top researchers in mechanistic interpretability!
Official Alignment Jam
June 30, 2023
Finished! Check out the results
Safety Benchmarks Hackathon
Understanding the safety of state-of-the-art machine learning systems, especially language models, is becoming increasingly important. Join us for a weekend of developing ideas and demos for new safety benchmarks.
Official Alignment Jam
June 16, 2023
Finished! Check out the results
Distillation Write-a-thon
Join us for the Distillation Write-a-thon, where you are tasked with writing more digestible versions of existing literature in AI safety research and theory!
Official Alignment Jam
June 9, 2023
Finished! Check out the results
ARENA Interpretability Hackathon
The ARENA research training program is hosting an interpretability hackathon in London.
Official Alignment Jam
May 26, 2023
Finished! Check out the results
Verifiable Safety Hackathon
Official Alignment Jam
April 14, 2023
Finished! Check out the results
Interpretability Hackathon 2.0
Official Alignment Jam
March 24, 2023
Finished! Check out the results
AI Governance Hackathon
Official Alignment Jam
March 1, 2023
Finished! Check out the results
EAG Bay Area Thinkathon
Official Alignment Jam
February 10, 2023
Finished! Check out the results
ScaleOversight
Official Alignment Jam
January 20, 2023
Finished! Check out the results
Mechanistic Interpretability Hackathon
Official Alignment Jam
January 10, 2023
Finished! Check out the results
EAGx LatAm Epoch AI Hackathon
Join the AI safety research hackathon happening right after EAGx LatAm! The topic is researching and understanding trends in AI development. Jaime Sevilla and the Epoch team will join as judges and mentors for the duration of the event.
Official Alignment Jam
December 16, 2022
Finished! Check out the results
AI Testing
Official Alignment Jam
November 11, 2022
Finished! Check out the results
Interpretability Hackathon
Official Alignment Jam
September 30, 2022
Finished! Check out the results
Language Model Hackathon