
The AI Governance Research Sprint

Youssef Benhachem
Mubarak Adebayo
Aylin Haas
Épiphanie Gédéon
Salsabila Mahdi
Patrick
Tushar Chandra
Lexley Villasis
Angelica Casuela
Geanina Papa
Srishti Dutta
Christopher Chitimbwa
David Stinson
Vedant Arora
Faizan
Jas K
Peter Francis
Peter
Hrishikesh Yadav
Javel Rowe
Bart Jaworski
Pedro Moreira
Jord
Giorgio Michele Scolozzi
Heramb Podar
Ardy Haroen
Ilham Nugraha
Shauna Dowling
Adebayo Mubarak
Miko Planas
Vansh
Aksinya
Ayushi Raj Bhatt
Blaise Konya
Dmitri
Chetan Talele
Nyasha Duri
Creagh Factor
Akash Dutta
Sarveshwari Singh
Vanshita Garg
Jan Llenzl Dagohoy
Armin Hamrah
Olin Thakur
Tim Sankara
Aditya Aswani
Himadri Mandal
Narcisse Mbunzama
Michelle Nie
Mariam Osmani
Ian Reyes
Arpit Gupta
Lucie Philippon
Komal Saini
Gabriel Mukobi
Akash Kundu
Gediminas Dauderis
Luc Brun
Utkarsh Upadhyay
rick goldstein
Jaime
Sam Watts
jonathan claybrough
Esben Kran
Signups
Example Documentation of Implementation Guidance for the EU AI Act: a draft proposal to address challenges raised by business and civil society actors
AI Safeguard: Navigating Compliance and Risk in the Era of the EU AI Act
Boxing AIs - The power of checklists
2030 - The CEO Dilemma
Trust and Power in the Age of AI
Obsolescent Souls
The EU AI Act: Caution against a potential "Ultron"
Model Cards for AI Algorithm Governance
AI Safety Risks: An Infographic Analysis
Entries
Friday January 5th 19:00 UTC to Sunday January 7th 2024

This event ran from Friday January 5th 19:00 UTC to Sunday January 7th 2024

See all entries under the "Entries" tab.

Dive into technical governance and storytelling for AI security!

The development of artificial intelligence presents vast opportunities alongside significant risks. During this intensive weekend-long sprint, we will concentrate on governance strategies to mitigate and secure against the gravest risks, along with narrative explorations of what the future might look like.

Sign up to collaborate with others who are delving into some of today's most critical questions: How can we effectively safeguard the security of both current and future AI systems? And what are the broader societal implications of this transformative technology?

Follow along live for the keynote on our YouTube channel.

Watch the recording of the keynote here. Access Esben's slides here.

Below, you can see each case along with reading material for each. Notice also the live collective review pages under each case.

Case 1

Implementation Guidance for the EU AI Act

Following the adoption of the European AI Act, implementation guidance will be issued to put the legislation into practice. Your task is to draft an example of such documentation.

The EU AI Act is one of the most ambitious pieces of AI legislation to date. During the implementation stages over the next two years, it is important that we understand how the legislation affects the various actors in the AI space.

Go to the shared notes and ideas document for this case.

Case 2

Technical Governance Tooling

Create demonstrations of technical frameworks, benchmarks, or tools that make specific governance initiatives possible.

One of the biggest challenges for successful societal-scale AI legislation, compared to legislation for other high-risk technologies, is that compute and AI development are difficult to control properly. A single bad actor holds a significant amount of power.

Go to the shared notes and ideas document for this case.

Case 3

Explainers for AI Concepts

Create an explainer about concepts in AI risk for policymakers in whichever format you prefer (video, infographic, article, etc.).

Produce resources that can help decision-makers, such as policymakers, learn about a particular aspect of AI safety. This could include one-pagers, short videos, info-graphics, and more.

Go to the shared notes and ideas document for this case.

Case 4

Vignette Story-Telling

Tell a story about what a future world with AI might look like and use it to inform new questions to ask in AI governance.

Write a story about what a future world with AI looks like and how we got there. You may base it on 1) things going well, 2) things going badly, or 3) somewhere in between. We encourage you to be creative! As a starting prompt, you can tell the story of a small part of the world in 2040 from a future perspective.

Go to the shared notes and ideas document for this case.

Prizes

Besides the amazing opportunity to dive into an exciting topic, we are delighted to present our prizes for the top projects. We hope that this can support your continued work on AI safety, possibly through our fellowship program, the Apart Lab!

Our jury will review the projects submitted to the research sprint, and the top 3 projects will receive prizes according to the jury's reviews of each criterion!

  • 🥇 First Prize: $600
  • 🥈 Second Prize: $300
  • 🥉 Third Prize: $100

Besides the top project prizes, we work to identify projects and teams that seem ready to continue towards real-world impact in the Apart Lab fellowship, where you receive mentorship and peer review on the way to an eventual publication or other output.

A big thank you to Straumli for sponsoring the $1,000 prize!

Schedule

The schedule is still being finalized, but you can expect the final times to be within 3 hours of those listed below. All times are updated to your local time zone.

  • Friday 13:00 to 15:30 UTC: Study and team session - spend time with other participants to read the study materials, discuss ideas, and set up teams.
  • Friday 19:00 UTC: Keynote talk - introduction to the topic, cases and logistics. Afterwards, you are provided 30 minutes to check through the resources and ideas. Then join us for a team matching event for anyone who is missing a team.
  • Saturday afternoon: Office hour - come discuss your projects with active researchers in AI governance.
  • Saturday afternoon: Office hour - discuss your Case 2 projects with practiced researchers on technical tooling for governance.
  • Saturday evening: Project talks - two talks from researchers in AI governance with a 15-minute break in between. Get a chance to chat with the speakers during the Q&A.
  • Sunday afternoon: Office hour - come discuss your projects with active researchers in AI governance.
  • Sunday afternoon: Office hour - discuss your Case 2 projects with practiced researchers on technical tooling for governance.
  • Sunday evening: Virtual social - we finish off the weekend with a virtual social for any final questions and answers.
  • Monday morning: Submission deadline - see instructions in the "Submit" tab.
  • Following Thursday: Project presentations - winner announcements and thank you to everyone involved.
  • Two weeks later: Apart Lab selection - After the sprint, projects will be selected for the Apart Lab and teams are invited to join.

Apart Sprints

The Apart Sprints are weekend-long challenges hosted by Apart to help you get exposure to real-world problems and develop object-level work that takes the field one step closer to more secure artificial intelligence! Read more about the project here.

Submissions

Your team's submission must follow a template provided at the start of the Sprint. The template for cases 1 and 2 follows a traditional research paper structure. The template for cases 3 and 4 is more free-form. The instructions will be visible in each template.

Your team will be between 1 and 5 people, and you will submit your project based on the template, along with a title, a description, and private contact information. There will be a team-making event right after the keynote for anyone who is missing a team.

You are allowed to think about your project before the hackathon starts (and we recommend reading all the resources for the cases!), but your core research work should happen during the hackathon itself.

Evaluation criteria

The jury will review your project according to a series of criteria that are designed to help you develop your project.

  1. Well-defined Scope [all cases]: Make sure your project is narrowed down to something very concrete. For example, instead of “AI & Compute Control”, we expect projects like "Demo implementation of HIPCC firmware logging of GPU memory” or "GPT-2030".
  2. Creativity [all cases]: Is the idea original compared to public research on governance? We encourage you to think outside our current boxes of AI safety and governance, possibly drawing in lessons from other fields (sci-fi, cybersecurity, the Digital Services Act, etc.).
  3. Reasoning Transparency [cases 1 and 2]: Do not defend your project. Make sure to share the advantages that are not obvious along with any limitations to your method (read Open Philanthropy's guide). If you have code, this also includes making that code reproducible.
  4. Believability [cases 3 and 4]: Is your explanation accurate [case 3] or your narrative believable [case 4]? We encourage creative and innovative submissions that are connected to reality. Examples include projecting current numbers into the future to ground your narrative, or talking with one of the mentors during the weekend to make sure your explainer about an AI technology is accurate.

Host a local group

If you are part of a local machine learning or AI safety group, you are very welcome to set up a local in-person site to work together with people on this hackathon! We will have several across the world and you can easily sign up under the “Locations” tab above.

Keynote speakers

Charlotte Siegmann

PhD student in economics at the Massachusetts Institute of Technology and founding member of KIRA, a Berlin-based think tank
Keynote speaker & co-organizer

Esben Kran

Founder and director of Apart and previously lead data scientist and brain-computer interface researcher
Keynote speaker & co-organizer

Paul Bricman

Co-founder of Straumli, researching defensive AI auditing tooling, with previous projects including computational philosophy for AI alignment
Speaker

Jury

Charlotte Siegmann

PhD student in economics at the Massachusetts Institute of Technology and founding member of KIRA, a Berlin-based think tank
Judge

Paul Bricman

Co-founder of Straumli, researching defensive AI auditing tooling, with previous projects including computational philosophy for AI alignment
Judge

Peter Francis

Founded FluidStack and now works on alignment strategies for real-world systems.
Judge

Jason Hoelscher-Obermaier

Research Lead at Apart Research with a PhD in experimental quantum physics, focused on model evaluations and alignment
Judge

Fazl Barez

Research director at Apart Research with a PhD in AI and robotics from Edinburgh and Oxford, focused on interpretability and model evaluations
Judge

Esben Kran

Founder and director of Apart and previously lead data scientist and brain-computer interface researcher
Judge

Registered jam sites

Viennese ✱ Sprint
Join part of the Apart team for this weekend's governance sprint! Write to jason@apartresearch.com if you're interested in joining.
AI Governance Jam ⋅ Bucharest Meet Up
We'll meet up for an informal get-together at the Agora Robotics HQ, near the UPB campus. We'll hack together at our projects, discuss the suggested readings, and generally chat about how to ensure the safe deployment of AI.
Cambodia AI Governance Jam
Design, develop, guide, and predict AI governance.
Visit event page
Phnom Penh, Cambodia
AI governance Jam with EffiSciences
Details of the location to be communicated by email after registering
Visit event page
ENS Ulm, Paris
Prague AI Governance Jam
Join us for a Governance Jam in Prague (Vinohrady, Koperníkova 6) on January 6-7, 2024 to start your year with impactful research in AI governance for safety.
Visit event page
Fixed Point Prague, Koperníkova 6

Register your own site

The in-person hubs for the Alignment Jams are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research and engineering community. Read more about organizing.

Submit your project

For Cases 1 and 2, please use this template – four pages, plus an appendix that we do not guarantee judges will read. Cases 3 and 4 are more free-form submissions, but you will need to submit a PDF from this template. Instructions are in the templates as well.

If you wish to record a presentation of your work, you can use the recording capability of e.g. Keynote, PowerPoint, or Google Slides (using Vimeo).


Thank you to everyone who joined!

Watch the live project presentations here (ignore the frozen screen at the beginning, the audio works fine). Big congratulations and great work to our winners!

  • 🏅 Model Cards for AI Algorithm Governance
  • 🏅 Obsolescent Souls
  • 🏅 2030 - The CEO Dilemma
  • 🏅 Boxing AIs - The Power of Checklists

With our honorable mentions:

  • The EU AI Act: Caution against a potential "Ultron"
  • AI Safeguard
  • EU AI Act example implementation guidance

All submitted projects:

  • Model Cards for AI Algorithm Governance by Jaime Raldua Veuthey, Gediminas Dauderis, and Chetan Talele (team Card Modellers)
  • Obsolescent Souls by Markov
  • 2030 - The CEO Dilemma by Pierina Camarena, Leon Nyametso, and Capucine Marteau (team CAPILE)
  • AI Safety Risks: An Infographic Analysis by Papa Geanina-Mihaela (team Ethics Engraver)
  • The EU AI Act: Caution against a potential "Ultron" by Srishti Dutta
  • Trust and Power in the Age of AI by David Stinson
  • Boxing AIs - The Power of Checklists by Charbel-Raphael Segerie and Quentin Feuillade-Montixi (team Banger Team)
  • AI Safeguard: Navigating Compliance and Risk in the Era of the EU AI Act by Heramb Podar (team YudkowskyGotNoClout)
  • Example Documentation of Implementation Guidance for the EU AI Act: a draft proposal to address challenges raised by business and civil society actors by Nyasha Duri

Send in pictures of you having fun hacking away!

We love to see the community flourish, and it's always great when you upload any pictures you're willing to share here.

No images submitted! We've either not started or people don't want to share their fun :,-)