AI safety testing is becoming increasingly important as governments require rigorous safety certifications. The rollout of the EU AI Act and the development of AI standards by NIST in the US will both necessitate such testing.
The rise of large language models such as ChatGPT has underscored the need for safety testing in modern AI systems. Adversarial attacks and neural Trojans have become more common, highlighting the importance of testing neural networks for robustness and hidden Trojans to ensure the safe development and deployment of AI.
In addition, rapid progress in related fields, such as automatic verification of neural networks and differential privacy, offers promising research directions for provably safe AI systems.
This page provides an overview of the interactive demos and research approaches you can take towards testing AI.
You probably want to view this website on a computer or laptop.
See here how to upload your project to the hackathon page, and find the PDF report template here.
Using ART (the Adversarial Robustness Toolbox), you can create comprehensive tests for adversarial attacks on models and/or test existing ones. Check out the documentation here. It does not seem possible to do textual adversarial attacks with ART, though that would be quite interesting.
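To get a feel for what an evasion attack like FGSM (one of the attacks ART implements) actually does, here is a minimal NumPy sketch of the core perturbation step. The toy linear model and data values are purely illustrative, not ART's API:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Fast Gradient Sign Method: step the input in the direction
    of the sign of the gradient, with per-coordinate budget eps."""
    return x + eps * np.sign(grad)

# Toy linear classifier: score = w . x, so the gradient of the score
# with respect to the input x is simply w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])

# Perturb x to increase the score; a real untargeted attack would use
# the gradient of the loss instead, but the mechanics are the same.
x_adv = fgsm_perturb(x, grad=w, eps=0.1)
print(x_adv)  # each coordinate moved by ±0.1
```

ART wraps this idea (and many stronger attacks such as PGD and Carlini-Wagner) behind estimator classes that compute the gradients for you.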
For textual attacks, you might use the TextAttack library. It also contains a list of textual adversarial attacks. There are a number of tutorials, the first showing an end-to-end training, evaluation and attack loop (see it here).
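To see the kind of transformation TextAttack automates, here is a hand-rolled character-swap perturbation, a classic typo-style attack on text classifiers. This is an illustrative sketch, not TextAttack's API:

```python
import random

def swap_adjacent_chars(word, rng):
    """Swap two adjacent characters in a word (a typo-style perturbation)."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb_sentence(sentence, rng=None):
    """Perturb one random word in the sentence, keeping the attack subtle."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    words = sentence.split()
    j = rng.randrange(len(words))
    words[j] = swap_adjacent_chars(words[j], rng)
    return " ".join(words)

print(perturb_sentence("the movie was great"))
```

TextAttack combines transformations like this with search strategies and constraints (e.g. keeping semantic similarity high) to find perturbations that actually flip a model's prediction.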
You can use the OpenAI Gym to run interesting reinforcement learning agents with your own testing ideas layered on top!
See how to use the Gym environments in this Colab. It does not train an RL agent, but it shows how to initialize the game loop and visualize the results. See how to train an offline RL agent using this Colab. Combining the two should be relatively straightforward.
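The game loop itself has a simple structure. Here is a minimal sketch using a toy environment that implements the classic Gym `reset`/`step` interface; the environment and reward scheme are made up for illustration:

```python
import random

class ToyEnv:
    """Made-up environment with the Gym-style reset/step interface:
    the agent earns reward for picking action 1; episodes last 5 steps."""
    def reset(self):
        self.t = 0
        return 0  # initial observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= 5
        return self.t, reward, done, {}  # observation, reward, done, info

def run_episode(env, policy, rng):
    """Standard game loop: reset, then step until the episode ends."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        action = policy(obs, rng)
        obs, reward, done, info = env.step(action)
        total += reward
    return total

random_policy = lambda obs, rng: rng.choice([0, 1])
print(run_episode(ToyEnv(), random_policy, random.Random(0)))
```

A real Gym environment drops in for `ToyEnv` unchanged, which is what makes it easy to layer your own safety tests (e.g. logging unsafe states) around the loop.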
The Language Model Evaluation Harness (LMEH) is a set of over 200 tasks that you can automatically run your models through. You can easily use it by writing pip install lm-eval at the top of your script.
See a Colab notebook with a short introduction to using it here.
This tutorial from AAAI 2022 has two Colab notebooks:
Using SeqIO to inspect and evaluate BIG-bench json tasks
Creating new BIG-bench tasks
These are very useful intros for thinking about how to design formal tests for various properties of our models, along with useful tools for ensuring the safety of our models against adversarial examples and out-of-distribution scenarios.
We will use the wonderful package EasyTransformer from Neel Nanda that was used heavily at the last hackathon. It contains some helper functions to load pretrained models.
See this Colab notebook to use the EasyTransformer model downloader utility. It also has all the available models there from EleutherAI, OpenAI, Facebook AI Research, Neel Nanda and more.
You can also run this in Paperspace Gradient. See the code on Github here and how to integrate Github and Paperspace here. See a fun example of using Paperspace Gradient like Google Colab here. Gradient's free tier offers a somewhat larger GPU than Colab's.
You can also use the huggingface Transformers library directly like this.
"All alignment problems are inverse scaling problems" is one fascinating take on AI safety. If we generate benchmarks that showcase the alignment failures of larger models, this can become very interesting.
This Colab notebook gives a short overview of how to use the Griddly library in conjunction with the OpenAI Gym.
CheckList, a dataset to test models: Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
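CheckList-style behavioral tests are easy to prototype by hand. Below is a minimal invariance test (changing a name should not change the prediction), using a dummy sentiment model as a stand-in for a real classifier; the template and fillers are illustrative:

```python
def invariance_test(model, template, fillers):
    """CheckList-style invariance test: the prediction should be
    identical for every filler substituted into the template."""
    predictions = [model(template.format(name=f)) for f in fillers]
    passed = len(set(predictions)) == 1
    return passed, predictions

# Dummy "sentiment model" standing in for a real classifier.
def dummy_model(text):
    return "positive" if "great" in text else "negative"

passed, preds = invariance_test(
    dummy_model,
    template="{name} thought the film was great.",
    fillers=["Alice", "Bob", "Marie"],
)
print(passed, preds)
```

The CheckList paper generalizes this with templates, minimum-functionality tests, and directional expectation tests, all following the same pattern: generate controlled variations, then assert a behavioral property of the model's outputs.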
France: Villani report
LMentry: A Language Model Benchmark of Elementary Language Tasks. It takes 25 tasks that are quite easy for humans and formalizes them as textual-understanding tasks for LLMs.
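An LMentry-style elementary task can be formalized in a few lines, e.g. "which word is longer". The prompt format and lenient substring scoring below are illustrative, not LMentry's exact implementation:

```python
def make_longer_word_task(w1, w2):
    """Build a 'which word is longer' item: prompt plus gold answer."""
    prompt = f'Q: Which word is longer, "{w1}" or "{w2}"? A:'
    answer = w1 if len(w1) > len(w2) else w2
    return prompt, answer

def score(model_output, answer):
    # Lenient scoring: the gold word must appear in the model's output.
    return 1 if answer.lower() in model_output.lower() else 0

prompt, gold = make_longer_word_task("cat", "elephant")
print(prompt)
print(score("The longer word is elephant.", gold))
```

The point of benchmarks like this is that the tasks are trivially programmatically verifiable, so failures on them are unambiguous signals about a model's basic language understanding.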
[1 hour] Recent progress in verifying neural networks.
[10 minutes] Detection of Trojan Neural Networks
[16 minutes] OpenAI Safety Gym: Exploring safe exploration
[7 minutes] Gridworlds in AI safety, performance and reward functions
[10 minutes] Center for AI Safety's intro to Trojan neural networks
[20 minutes] Center for AI Safety's detecting emergent behaviour
[16 minutes] Center for AI Safety's intro to honest models and TruthfulQA