AI Governance Resources

Join the AI governance hackathon, focused on making sure that artificial general intelligence and machine learning systems become a positive influence on humanity's future.

If you wish to use GPT-4 for your project, use the following API key (don't leave it in a public GitHub repository).
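For example, a minimal sketch of keeping the key out of your code by loading it from an environment variable (assuming the openai Python package, v1+ interface; the environment variable name and prompt below are placeholders):

    import os
    from openai import OpenAI

    # Load the hackathon key from an environment variable rather than hardcoding it,
    # so it never ends up committed to a public repository.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "List three levers for AI hardware governance."}],
    )
    print(response.choices[0].message.content)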

Use this template to write up your work and use this guide to upload your project.

Project template document

Copy this document for your report and upload it on the ideathon page. This helps the judges speed up their reviews considerably.

Existential Risk Observatory

Policies for slowing down progress towards artificial general intelligence

Assuming full political and international support, which policy could successfully pause global progress towards artificial general intelligence? Proposals may include research regulation, software regulation, data regulation, and/or hardware (production) regulation. Consider the full supply chain to be in scope.

A successful proposal might be promoted among experts, policymakers, and the general public. This is a chance to have an impact on real policy decisions.

🏛️ Policy
🤓 Sociology
🔧 Technical
More information
Centre for the Governance of AI

Investigate bottlenecks to the explosive growth of artificial intelligence

Will bottlenecks prevent AI from driving explosive growth? If not, will they at least prevent AGI from developing very quickly?

Click More information below to see an explanation of why this is important to analyze, along with a few project examples:

  • Tell an inside-view story for how AGI causes explosive growth despite bottlenecks
  • Will AGIs give their controller a strong military advantage?
  • What % of software development tasks will super-codex automate?
🏛️ Policy
🌎 Risks
🔧 Technical
More information
Convergence Analysis

Redirect the energy of AI development

This case was added after the keynote

There is a tremendous amount of energy flowing into AI development and pushing the technology forward as AI companies, nations, and individuals pursue profit and competitive edge. It may be hard to directly slow or pause this progress.

Perhaps, however, it will be easier to redirect this energy: funnel it into AI technology and applications that are more aligned. For example, perhaps the deployment of autonomous AI agents and the removal of humans-in-the-loop could be prevented. Propose possible approaches to redirecting the energy of AI development, with governments as the main audience in mind. The proposal needs to explain how the interests of companies and nations would be served by shaping development this way.

🔧 Technical
🏦 Corporate
🏛️ Policy
Richard Ngo

Considerations for the release of GPT-6

You are OpenAI and want to release a significantly more powerful GPT-6. What will your rollout process look like, and which major safety considerations do you have to include? Which specific consequences and actions should you consider? Relate your answer to OpenAI's strategic documents, "Planning for AGI and beyond" and "Our approach to alignment research".

🔧 Technical
🏦 Corporate
🏛️ Compliance
More information
Richard Ngo

Come up with scenarios where AI self-replicates and brainstorm solutions to protect against this risk

In which scenarios might we expect AI to self-replicate, and what are plausible solutions to prevent them?

In this case, we focus on analyzing the risk factors that might lead to scenarios where AI will self-replicate. We ask participants to imagine and model these scenarios using both trend extrapolation and critical engagement with the existing literature, while brainstorming the best solutions to specific risky scenarios.

👾 Cybersecurity
🧠 Psychology
🔧 Technical
More information

Categorize risks of future AI systems in an accessible way

The idea here is to categorize different long-term risks in a way that is more accessible to policymakers. How can policymakers identify long-term risks as they arise, and what measures track the important properties of different types of risk? For example, mass adoption of AI technology as a replacement for human labor might be one such measure. To what extent is there overlap between near-term and long-term risks?

🏛️ Policy
🌎 Risks

Which concrete technical directions should artificial intelligence companies work towards for the benefit of people across the world?

There are many directions within AI research, and the largest artificial intelligence development companies in the world (DeepMind, Anthropic, OpenAI, etc.) care a lot about safety. Create an overview of the most promising directions they should work towards to make AI systems safer and as beneficial to as many people as possible. Examples might include RLHF and interpretability.

🔧 Technical
🏦 Corporate

Whose morals should AI follow?

How can we ensure that artificial intelligence follows the values of people from across the world and that the benefits of the technology are shared internationally? How should AI aggregate these preferences? How can it work around known impossibility results in social choice theory (Arrow 1950)? Would developing AI for public administration be an area where it is possible to explore which values to align AI to? Potential areas of application could be regulation and determining taxation.

An example of a solution project could be simulated deliberation proposed by Jan Leike.
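For intuition on why aggregation is hard, here is a toy sketch (in Python, with placeholder options A, B, C) of the Condorcet paradox, a close relative of Arrow's impossibility result: with three voters holding cyclic preferences, pairwise majority voting produces no consistent collective ranking.

    from itertools import combinations

    # Three voters with classic Condorcet-cycle preferences (most preferred first).
    ballots = [
        ["A", "B", "C"],
        ["B", "C", "A"],
        ["C", "A", "B"],
    ]

    def majority_prefers(x, y):
        """Return True if a strict majority of ballots rank x above y."""
        wins = sum(1 for b in ballots if b.index(x) < b.index(y))
        return wins > len(ballots) / 2

    # A beats B, B beats C, yet C beats A: majority rule produces a cycle,
    # so there is no coherent "aggregate preference" to align an AI to.
    for x, y in combinations(["A", "B", "C"], 2):
        winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
        print(f"majority prefers {winner} over {loser}")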

💵 Economics
💭 Ethics
🔧 Technical
Richard Ngo

US-China export ban goes wrong! What happened?

Right now, the US is banning exports of specific AI hardware to China to avoid autonomous weapons development and human rights violations enabled by AI (e.g. face recognition used to track Muslim populations). If we imagine 7 years from now, in 2030, and we see that this has led to antagonism and a race towards dangerous AI between the two countries, what can you imagine happened in those 7 years?

🌎 Global policy
🔧 Technical

Where will AI fit into the democratic system?

Identify points where AI might be incorporated into democratic systems and how we want to design them. You can use the diagram below as a guide to the different parts of national democratic processes.

🏛️ Political
🤓 Sociology
🔧 Technical

Custom case

We recommend working with the other cases because they add great context and make it easier to evaluate the different projects relative to each other.

However, you are welcome to work on something in AI governance that interests you more! These ideathons are also an opportunity for us to explore completely new ideas, and in this custom case, we recommend that you think in ambitious and original terms to explore more options for sustainable governance.

Again, use the template for your report.