A group of 28 nations, including China and the US, has agreed to work together to identify and manage potential risks from “frontier” artificial intelligence (AI), marking the first such multilateral agreement.
Published by the UK, the Bletchley Declaration on AI Safety outlines the countries’ recognition of the “urgent need” to ensure AI is developed and deployed in a “safe, responsible way” for the benefit of a global community. This effort requires wider international cooperation, according to the Declaration, which has been endorsed by countries across Asia, Europe, the Middle East, and beyond, including Singapore, Japan, India, France, Australia, Germany, South Korea, the United Arab Emirates, and Nigeria.
Also: Generative AI could help low code evolve into no code – but with a twist
The countries recognize that significant risks can emerge from intentional misuse of frontier AI or from unintended issues of control, particularly in areas such as cybersecurity, biotechnology, and disinformation. The Declaration points to the potential for serious and catastrophic harm from AI models, as well as risks associated with bias and privacy.
Along with their recognition that risks and capabilities are still not fully understood, the nations have agreed to collaborate and build a shared “scientific and evidence-based understanding” of frontier AI risks.
Also: As developers learn the ins and outs of generative AI, non-developers will follow
The Declaration describes frontier AI as encompassing “highly capable general-purpose AI models”, including foundation models, which can carry out a wide range of tasks, as well as relevant specific narrow AI.
“We resolve to work together in an inclusive manner to ensure human-centric, trustworthy, and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives,” states the Declaration.
“In doing so, we recognize that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximizes the benefits, and takes into account the risks associated with AI.”
This approach could include establishing classifications and categorizations of risks based on a country’s local circumstances and applicable legal frameworks. There may also be a requirement for cooperation on fresh approaches, such as common principles and codes of conduct.
Also: Can AI code? In baby steps only
The group’s efforts will focus on building risk-based policies across the countries, collaborating where appropriate, while recognizing that nation-level approaches may differ. These efforts call for increased transparency from private actors developing frontier AI capabilities, as well as relevant evaluation metrics, tools for safety testing, and the development of public-sector capabilities and scientific research.
UK Prime Minister Rishi Sunak said: “This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI.”
UK Technology Secretary Michelle Donelan added: “We have always said that no single country can face down the challenges and risks posed by AI alone, and today’s landmark Declaration marks the start of a new global effort to build public trust by ensuring the technology’s safe development.”
A Singapore-led project known as Sandbox was also announced this week, with the aim of providing a standard set of benchmarks to assess generative AI products. The initiative pools resources from major global players that include Anthropic and Google, and is guided by a draft catalog that categorizes current benchmarks and methods used to evaluate large language models.
Also: The ethics of generative AI: How we can harness this powerful technology
The catalog compiles commonly used technical testing tools, organizing these according to what they test and their methods, and recommends a baseline set of tests to evaluate generative AI products. The goal is to establish a common language and support “broader, safe and trustworthy adoption of generative AI”.
The United Nations (UN) last month set up an advisory team to look at how AI should be governed to mitigate potential risks, with a pledge to adopt a “globally inclusive” approach. The body currently comprises 39 members and includes representatives from government agencies, private organizations, and academia, such as the Singapore government’s chief AI officer, Spain’s secretary of state for digitalisation and AI, and OpenAI’s CTO.