AI Testing Resources

AI safety testing is becoming increasingly important as governments require rigorous safety certifications. The implementation of the EU AI Act and the development of AI standards by NIST in the US will both necessitate such testing.

The use of large language models, such as ChatGPT, has emphasized the need for safety testing in modern AI systems. Adversarial attacks and neural Trojans have become more common, highlighting the importance of testing neural networks for robustness and hidden backdoors to ensure the safe development and deployment of AI.

In addition, the rapid development of related fields, such as automatic verification of neural programs and differential privacy, offers promising research directions towards provably safe AI systems.

Dive deeper:

This page provides an overview of the interactive demos and research approaches you can take towards testing AI.

Use this API key for OpenAI API access: 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

You probably want to view this website on a computer or laptop.

See how to upload your project to the hackathon page here, and copy the PDF report template here.

  1. Benchmarks track
  2. Reinforcement learning track
  3. Adversarial attacks
  4. Neural network verification

Starter code & Colab notebooks


Running Adversarial Robustness Toolbox (ART) adversarial attacks and defenses

Check out the repository here along with a long list of Jupyter Notebooks here. We have converted one of the image attack algorithm examples to Google Colab here.

Using ART, you can create comprehensive adversarial-attack tests for your own models and/or test existing ones. Check out the documentation here. It does not seem possible to do textual adversarial attacks with ART, though that would be quite interesting.
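As a rough sketch of what an ART-based robustness test can look like (the model `net` and the arrays `x_test`, `y_test` below are placeholders for your own trained classifier and data, not part of the ART examples):

```python
# Minimal sketch: craft FGSM adversarial examples with ART and measure the
# accuracy drop. Assumes a trained PyTorch classifier `net` and numpy test
# arrays `x_test`, `y_test` with integer labels -- all placeholders.
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

classifier = PyTorchClassifier(
    model=net,                               # your trained torch.nn.Module
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),                 # e.g. MNIST-shaped inputs
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)            # perturbed copies of x_test

clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.3f}, adversarial accuracy: {adv_acc:.3f}")
```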

For textual attacks, you might use the TextAttack library. It also contains a list of textual adversarial attacks. There are a number of tutorials, the first showing an end-to-end training, evaluation and attack loop (see it here).
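For reference, a minimal TextAttack run looks roughly like this (the SST-2 checkpoint name is one of the fine-tuned models TextAttack hosts on the Hugging Face hub; swap in whichever model and attack recipe you want to test):

```python
# Minimal sketch: attack a sentiment classifier with the TextFooler recipe.
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

name = "textattack/bert-base-uncased-SST-2"       # a fine-tuned SST-2 classifier
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(wrapper)
dataset = HuggingFaceDataset("glue", "sst2", split="validation")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10))
attacker.attack_dataset()                          # prints per-example results
```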


OpenAI Gym starter

You can use the OpenAI Gym to run interesting reinforcement learning agents with your own spin on testing layered on top!

See how to use the Gym environments in this Colab. It does not train an RL agent but we can see how to initialize the game loop and visualize the results. See how to train an offline RL agent using this Colab. Combining the two should be relatively straightforward.
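A minimal version of that game loop, with a random policy standing in for a trained agent, looks like this (written against the classic Gym API; newer Gym/Gymnasium versions return slightly different tuples from reset() and step()):

```python
# Minimal sketch of a Gym game loop with a random policy; swap the sampled
# action for your trained agent's policy when evaluating it.
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()       # replace with agent.act(obs)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```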

The OpenAI Safety Gym is probably too advanced for this weekend's work, simply because it is tough to set up, while the standard OpenAI Gym generally works great. Read more about getting started in this article.


Language Model Evaluation Harness

The LMEH is a set of over 200 tasks that you can automatically run your models through. You can easily use it by running pip install lm-eval at the top of your notebook or script.

See a Colab notebook that briefly introduces how to use it here.

Check out the Github repository and the guide to adding a new benchmark so you can test your own tasks using their easy interface.
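A minimal sketch of running a couple of tasks from Python might look like the following; the exact interface has changed between lm-eval versions, so treat the argument names below as assumptions and check the README of the version you install:

```python
# Minimal sketch: evaluate a small Hugging Face model on two harness tasks.
# (pip install lm-eval first; argument names vary between versions.)
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",                          # a Hugging Face causal LM backend
    model_args="pretrained=gpt2",        # any supported checkpoint name
    tasks=["lambada_openai", "hellaswag"],
    num_fewshot=0,
)
print(results["results"])                # per-task metrics
```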


Perform neural network formal verification

This tutorial from AAAI 2022 has two Colab notebooks:

  1. Colab Demo for the auto_LiRPA library: automatic computation of certified output bounds for neural networks under input perturbations (linear relaxation based perturbation analysis). See this video for an introduction.
  2. Colab Demo for the α,β-CROWN (alpha-beta-CROWN) library. See this video for an introduction.

These are very useful introductions for thinking about how we can design formal tests for various properties of our models, along with useful tools for ensuring the safety of our models against adversarial examples and out-of-distribution scenarios.
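As a rough sketch of what using auto_LiRPA looks like (toy model and dummy input as placeholders; see the tutorial notebooks for the full API):

```python
# Minimal sketch: bound a model's outputs under an L-inf input perturbation.
import torch
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

net = torch.nn.Sequential(                    # toy classifier as a placeholder
    torch.nn.Flatten(), torch.nn.Linear(784, 64),
    torch.nn.ReLU(), torch.nn.Linear(64, 10),
)
x = torch.rand(1, 1, 28, 28)                  # one dummy MNIST-shaped input

bounded_net = BoundedModule(net, torch.zeros_like(x))
perturbation = PerturbationLpNorm(norm=float("inf"), eps=0.03)
x_bounded = BoundedTensor(x, perturbation)

# Lower/upper bounds on each output logit for ANY input within eps of x.
lb, ub = bounded_net.compute_bounds(x=(x_bounded,), method="backward")
print(lb, ub)
```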

See also Certified Adversarial Robustness via Randomized Smoothing.


BIG-Bench benchmark for LLMs

Using SeqIO to inspect and evaluate BIG-bench JSON tasks

Creating new BIG-bench tasks

Read the paper here and see the Github repository here.
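As a rough illustration, a minimal JSON task is just a file of input/target examples plus some metadata; the field names below are our reading of the repository's json-task format, so double-check them against the official guide to adding a new task before submitting anything:

```python
# Rough sketch of a minimal BIG-bench-style JSON task written from Python.
# Field names are assumptions based on the repository's json-task format --
# verify against the official "creating a new task" guide.
import json

task = {
    "name": "my_toy_arithmetic",
    "description": "Toy two-digit addition questions.",
    "keywords": ["arithmetic", "zero-shot"],
    "metrics": ["exact_str_match"],
    "preferred_score": "exact_str_match",
    "examples": [
        {"input": "What is 12 plus 30?", "target": "42"},
        {"input": "What is 7 plus 5?", "target": "12"},
    ],
}

with open("task.json", "w") as f:
    json.dump(task, f, indent=2)
```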


Loading GPT models into Google Colab

We will use the wonderful package EasyTransformer from Neel Nanda that was used heavily at the last hackathon. It contains some helper functions to load pretrained models.

It has all the famous ones, from GPT-2 all the way to Neel Nanda's custom 12-layer SoLU-based Transformer models. See a complete list here along with an example toy model here.

See this Colab notebook to use the EasyTransformer model downloader utility. It also lists all the available models from EleutherAI, OpenAI, Facebook AI Research, Neel Nanda, and more.
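Loading a model with it is roughly a one-liner; note that EasyTransformer has since been renamed TransformerLens, so class and method names may differ depending on the version you install:

```python
# Rough sketch: load a pretrained model and get next-token logits.
from easy_transformer import EasyTransformer

model = EasyTransformer.from_pretrained("gpt2")     # any supported model name
tokens = model.to_tokens("AI safety testing is")
logits = model(tokens)                              # [batch, position, vocab]
print(model.tokenizer.decode(logits[0, -1].argmax().item()))  # most likely next token
```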

You can also run this in Paperspace Gradient. See the code on Github here and how to integrate Github and Paperspace here. See a fun example of using Paperspace Gradient like Google Colab here. Gradient offers a somewhat larger GPU on its free tier.

You can also use the huggingface Transformers library directly like this.
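For reference, loading a GPT-2-style model directly with transformers and generating a short continuation looks roughly like this:

```python
# Minimal sketch: load GPT-2 with Hugging Face transformers and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("AI safety testing is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```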


Inverse scaling notebook

"All alignment problems are inverse scaling problems" is one fascinating take on AI safety. If we generate benchmarks that showcase the alignment failures of larger models, this can become very interesting.

See the Colab notebook here. You can also read more about the "benchmarks" that won the first round of the tournament here along with the tournament Github repository here.


Making GriddlyJS run

This Colab notebook gives a short overview of how to use the Griddly library in conjunction with the OpenAI Gym.

Jump on the Griddly.ai website to create an environment and load it into the Colab notebook. There is much more detail about what it all means in their documentation.
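A rough sketch of loading an environment description into Gym is below; the factory class and method names are our reading of the Griddly docs, and the GDY file path is a placeholder for whatever you export from griddly.ai, so check their documentation for the exact current API:

```python
# Rough sketch: register a custom Griddly environment with Gym and run it.
import gym
from griddly import GymWrapperFactory

wrapper = GymWrapperFactory()
# "my_game.gdy" is a placeholder for the GDY file exported from griddly.ai.
wrapper.build_gym_from_yaml("MyGame", "my_game.gdy", level=0)

env = gym.make("GDY-MyGame-v0")
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
```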


Notebooks by Hugging Face

These notebooks all pertain to the usage of transformers and show how to use their library. See them all here. Some notable notebooks include:

Data sources

Download all GLUE datasets using this code.
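One way to do this (a sketch using the Hugging Face datasets library; the linked code may take a different approach) is:

```python
# Minimal sketch: download every GLUE task with the datasets library.
from datasets import load_dataset

glue_tasks = ["cola", "sst2", "mrpc", "qqp", "stsb", "mnli", "qnli", "rte", "wnli"]
glue = {task: load_dataset("glue", task) for task in glue_tasks}
print(glue["sst2"]["train"][0])     # peek at one training example
```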

See a lot of the SoTA benchmarks on HuggingFace.

Video introductions to AI testing

[1 hour] Recent progress in verifying neural networks.

[10 minutes] Detection of Trojan Neural Networks

[16 minutes] OpenAI Safety Gym: Exploring safe exploration

[7 minutes] Gridworlds in AI safety, performance and reward functions

[10 minutes] Center for AI Safety's intro to Trojan neural networks

[20 minutes] Center for AI Safety's detecting emergent behaviour

[16 minutes] Center for AI Safety's intro to honest models and TruthfulQA