At UK’s AI Summit, developers and governments agree on testing to help manage risks

By Martin Coulter and Paul Sandle

BLETCHLEY PARK, England (Reuters) - Leading AI developers agreed to work with governments to test new frontier models before they are released to help manage the risks of the rapidly developing technology, in a “landmark achievement” concluding the UK’s artificial intelligence summit.

Some tech and political leaders have warned that AI, if left uncontrolled, poses huge risks ranging from eroding consumer privacy to endangering humans and causing a global catastrophe. These concerns have sparked a race by governments and institutions to design safeguards and regulation.

At an inaugural AI Safety Summit at Bletchley Park, home of Britain’s World War Two code-breakers, political leaders from the United States, European Union and China agreed on Wednesday to share a common approach to identifying risks and ways to mitigate them.

British Prime Minister Rishi Sunak said that declaration, the action on testing and a pledge to set up an international panel on risk would “tip the balance in favour of humanity”.

He said the United States, EU and other “like-minded” countries had reached a “landmark agreement” with select companies working at AI’s cutting edge on the principle that models should be rigorously assessed before and after they are deployed.

Yoshua Bengio, recognised as a Godfather of AI, will help deliver a “State of the Science” report to build a shared understanding of the capabilities and risks ahead.

“Until now the only people testing the safety of new AI models have been the very companies developing it,” Sunak said. “We shouldn’t rely on them to mark their own homework, as many of them agree.”

The summit has brought together around 100 politicians, academics and tech executives to plot a way forward for a technology that could transform the way companies, societies and economies operate, with some hoping to establish an independent body to provide global oversight.

In a first for Western efforts to manage AI’s safe development, a Chinese vice minister joined other political leaders on Wednesday at the summit focused on highly capable general-purpose models called “frontier AI”.

Wu Zhaohui, China’s vice minister of science and technology, signed a “Bletchley Declaration” on Wednesday but China was not present on Thursday and did not put its name to the agreement on testing.

Sunak had been criticised by some lawmakers in his own party for inviting China, after many Western governments reduced their technological cooperation with Beijing, but Sunak said any effort on AI safety had to include its leading players.

He also said it showed the role Britain could play in bringing together the three big economic blocs of the United States, China and the European Union.

“It wasn’t an easy decision to invite China, and lots of people criticised me for it, but I think it was the right long-term decision,” Sunak said at a press conference.

Microsoft-backed OpenAI, Anthropic, Google DeepMind, Microsoft, Meta and xAI attended sessions at the summit on Thursday, alongside leaders including European Commission President Ursula von der Leyen, U.S. Vice President Kamala Harris and U.N. Secretary-General António Guterres.

Von der Leyen said complex algorithms could never be exhaustively tested, so “above all else, we must make sure that developers act swiftly when problems occur, both before and after their models are put on the market”.

Entrepreneur Elon Musk told fellow attendees on Wednesday that governments should not rush to roll out AI legislation, two sources said.

Instead, he suggested companies using the technology were better placed to uncover problems, and they could share their findings with lawmakers responsible for writing new laws.

The billionaire had the final word on AI after the summit ended in a conversation with Sunak, due to be broadcast later on Thursday on Musk’s X, the platform previously known as Twitter.

“We live in the most interesting times,” he said. “And I think this is 80% likely to be good, and 20% bad, and I think if we’re cognisant and careful about the bad part, on balance actually it will be the future that we want.”

(Reporting by Paul Sandle, Martin Coulter and Alistair Smout; Additional reporting by William James and Jan Strupczewski; Editing by Emelia Sithole-Matarise, Susan Fenton and Richard Chang)