Large language models are improving rapidly thanks to the efforts of OpenAI, Google, Anthropic, and other generative AI companies. To establish a shared methodology for impartially assessing those models’ safety once they’re released, the US and UK governments have inked a Memorandum of Understanding. Together, the AI Safety Institute in the UK and its US counterpart (which Vice President Kamala Harris announced but which hasn’t started operations yet) will create a battery of tests to evaluate the risks and ensure the safety of “the most advanced AI models.”
As part of the collaboration, they intend to share personnel, information, and technical know-how. One of their first objectives is reportedly to conduct joint testing on a publicly available model. UK science minister Michelle Donelan, who signed the deal, told The Financial Times that they “really got to act quickly” because they expect a new generation of AI models to be released over the coming year. She believes those models could be “complete game-changers,” but their full capabilities remain unknown.
According to The Times, this partnership is the first bilateral arrangement on AI safety in the world, though both the US and the UK intend to team up with other countries in the future. “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” US Secretary of Commerce Gina Raimondo said. “Our partnership makes clear that we aren’t running away from these concerns — we’re running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”
While this particular partnership is focused on testing and evaluation, governments around the world are also drafting regulations to keep AI tools in check. Back in March, the White House issued a policy aiming to ensure that federal agencies only use AI tools that “do not endanger the rights and safety of the American people.” A couple of weeks before that, the European Parliament approved sweeping legislation to regulate artificial intelligence. It will ban “AI that manipulates human behavior or exploits people’s vulnerabilities” and “biometric categorization systems based on sensitive characteristics,” as well as the “untargeted scraping” of faces from CCTV footage and the web to create facial recognition databases. In addition, deepfakes and other AI-generated images, videos, and audio will need to be clearly labeled as such under its rules.