See all entries under the "Entries" tab.
Dive into technical governance and storytelling for AI security!
The development of artificial intelligence presents vast opportunities alongside significant risks. During this intensive weekend-long sprint, we will concentrate on governance strategies to mitigate and secure against the gravest risks, along with narrative explorations of what the future might look like.
Sign up to collaborate with others who are delving into some of today's most critical questions: How can we effectively safeguard the security of both current and future AI systems? And what are the broader societal implications of this transformative technology?
Follow along live for the keynote on our YouTube.
Watch the recording of the keynote here. Access Esben's slides here.
Below, you can see each case along with its reading material. Notice also the live collective review pages under each case.
Implementation Guidance for the EU AI Act
The EU AI Act is one of the most ambitious pieces of AI legislation to date. During its implementation over the next two years, it is important that we understand how the legislation affects the various actors in the AI space.
Go to the shared notes and ideas document for this case.
- The EU AI Act: A primer – a short synopsis of the current EU AI Act by Mia Hoffmann
- Implementing EU Law – an overview from the EU on how legislation is implemented
- Bertuzzi's trilogue overview – how foundation models are regulated under the decisions made at the trilogue discussions
- [Additional reading] Anthropic Responsible Scaling Policy – Anthropic's self-imposed safety precautions based on model scaling
- [Additional reading] OpenAI Preparedness Framework – OpenAI's strategy for hazard preparedness
- [Additional reading] AI Standards Lab – a self-organized mission to accelerate standards-writing
- [Additional reading] AI Act Newsletter – FLI's newsletter on the latest news from the EU AI Act
Technical Governance Tooling
One of the biggest challenges for societal-scale AI legislation, compared with legislation for other high-risk technologies, is that compute and AI development are difficult to control properly: a single bad actor can wield a significant amount of power.
Go to the shared notes and ideas document for this case.
- How to Catch a Chinchilla – a systematic overview of how we might audit compute usage for foundation models (Shavit, 2023)
- Compute Governance Introduction – an overview from Lennart Heim of how we can govern AI by controlling GPUs (listen also to the podcast with Lennart Heim)
- [Additional reading] Representation Engineering – a technical framework for understanding how capabilities are represented in various models
- [Additional reading] Export Controls of IaaS Controlled AI Chips – how the use of compute rental services affects export controls
Explainers for AI Concepts
Produce resources that can help decision-makers, such as policymakers, learn about a particular aspect of AI safety. This could include one-pagers, short videos, infographics, and more.
Go to the shared notes and ideas document for this case.
- CSET's AI Concepts Overview – a short overview of the key concepts in AI safety
- Visualizing the Deep Learning Revolution – an explainer about the pace of deep learning progress and the risks that come with it
- [Additional reading] Here's What the Godfathers of AI Have to Say – a video overview of the large-scale risks as presented by Bengio, LeCun and Hinton
- [Additional reading] Advanced AI Governance – a literature review by Maas of the problems, options, and proposals in AI governance
- [Additional reading] Statement on AI Risk explainer video – AI Explained's overview of the Statement on AI Risk
- [Additional reading] World Models – an interactive explainer of how models learn world models
Vignette Story-Telling
Write a story about how a future world with AI looks and how we got there. You may base it on 1) things going well, 2) things going badly, or 3) somewhere in-between. We encourage you to be creative! As a starting prompt, you can tell the story of a small part of the world in 2040, written from that future perspective.
Go to the shared notes and ideas document for this case.
- GPT-2030 – a Berkeley assistant professor's take on what future AI might be capable of, based on current numbers
- Shulman on the Dwarkesh Podcast [1:02:00 to 1:33:00] – when asked "What does an intelligence explosion look like?", he responds by describing what it would feel like
- What 2026 Might Look Like – a 2021 piece that impressively predicted the 2023 LLM hype
- The Next Decades Might be Wild – an overview of how the next decades might look with AI
- [Additional reading] Welcome to 2030. I own nothing, have no privacy, and life has never been better. – a short exposition of life in 2030
- [Additional reading] EpochAI's research – a great resource for informing your perspective on what future AI development might look like
- [Additional reading] How we could stumble into AI catastrophe – a description of how we might accidentally cause a catastrophe
Prizes
Besides the amazing opportunity to dive into an exciting topic, we are delighted to present our prizes for the top projects. We hope that this can support you in your work on AI safety, possibly with our fellowship program, the Apart Lab!
Our jury will review the projects submitted to the research sprint, and the top three projects will receive prizes based on the jury's scores for each criterion!
- 🥇 First Prize: $600
- 🥈 Second Prize: $300
- 🥉 Third Prize: $100
Besides the top project prizes, we also work to identify projects and teams that seem ready to continue their work towards real-world impact in the Apart Lab fellowship, where you receive mentorship and peer review on the way to an eventual publication or other output.
A big thank you to Straumli for sponsoring the $1,000 prize!
Schedule
The schedule is still to be finalized, but you can expect the final times to be within 3 hours of those listed below. All times are shown in your local time zone.
- Friday 13:00 to 15:30 UTC: Study and team session - spend time with other participants to read the study materials, discuss ideas, and set up teams.
- Friday 19:00 UTC: Keynote talk - an introduction to the topic, cases, and logistics. Afterwards, you will have 30 minutes to look through the resources and ideas, followed by a team matching event for anyone who is missing a team.
- Saturday afternoon: Office hour - come discuss your projects with active researchers in AI governance.
- Saturday afternoon: Office hour - discuss your Case 2 projects with experienced researchers on technical tooling for governance.
- Saturday evening: Project talks - two talks from researchers in AI governance with a 15-minute break in between. Get a chance to chat with the speakers during Q&A.
- Sunday afternoon: Office hour - come discuss your projects with active researchers in AI governance.
- Sunday afternoon: Office hour - discuss your Case 2 projects with experienced researchers on technical tooling for governance.
- Sunday evening: Virtual social - we finish off the weekend with a virtual social for any final questions and answers.
- Monday morning: Submission deadline - see instructions in the "Submit" tab.
- The event lasts from Friday until Sunday.
- The following Thursday: Project presentations - winner announcements and a thank you to everyone involved.
- Two weeks later: Apart Lab selection - after the sprint, projects will be selected for the Apart Lab and their teams will be invited to join.
Apart Sprints
The Apart Sprints are weekend-long challenges hosted by Apart to help you get exposure to real-world problems and develop object-level work that takes the field one step closer to more secure artificial intelligence! Read more about the project here.
Submissions
Your team's submission must follow a template provided at the start of the Sprint. The template for cases 1 and 2 follows a traditional research paper structure, while the template for cases 3 and 4 is more free-form. The instructions will be visible in each template.
Your team can be between 1 and 5 people, and you will submit your project based on the template, along with a title, a description, and private contact information. There will be a team-making event right after the keynote for anyone who is missing a team.
You are allowed to think about your project before the hackathon starts (and we recommend reading all the resources for the cases!), but your core research work should happen during the hackathon itself.
Evaluation criteria
The jury will review your project according to a series of criteria that are designed to help you develop your project.
- Well-defined Scope [all cases]: Make sure your project is narrowed down to something very concrete. For example, instead of "AI & Compute Control", we expect projects like "Demo implementation of HIPCC firmware logging of GPU memory" or "GPT-2030".
- Creativity [all cases]: Is the idea original compared to public research on governance? We encourage you to think outside the current boxes of AI safety and governance, possibly drawing on lessons from other fields (sci-fi, cybersecurity, the Digital Services Act, etc.).
- Reasoning Transparency [cases 1 and 2]: Do not defend your project: share the advantages that are not obvious along with any limitations of your method (read Open Philanthropy's guide). If you have code, this also includes making that code reproducible.
- Believability [cases 3 and 4]: Is the explanation accurate [case 3] or the narrative believable [case 4]? We encourage creative and innovative submissions that stay connected to reality. Examples include projecting current numbers into the future to ground your narrative, or talking with one of the mentors during the weekend to make sure your explainer about an AI technology is accurate.
Host a local group
If you are part of a local machine learning or AI safety group, you are very welcome to set up a local in-person site to work together with people on this hackathon! We will have several across the world and you can easily sign up under the “Locations” tab above.