Google DeepMind, OpenAI, and Anthropic will grant the UK government “early or priority access” to their models for research and safety purposes, Prime Minister Rishi Sunak revealed during his speech at London Tech Week.
While it’s still unclear what type of data the tech companies have pledged to share, the aim of the move is to help the government better understand and evaluate the opportunities and risks of these systems.
Sunak re-emphasised the “extraordinary potential of AI” in sectors such as healthcare, education, and public services, aligning the development of artificial intelligence with the UK’s goal to become “the best country in the world for tech.”
In the past few months, the government has been ramping up efforts to turn the UK into an AI-enabled country and economy. In March, it announced an initial £100m in funding to establish a designated AI taskforce for the development of foundation models — following a nearly £1bn investment in a new exascale computer and a dedicated AI Research Resource.
Amidst global concerns over the serious threats AI poses to humanity, the UK has suggested a “pro-innovation” approach to AI regulation. Unlike the EU’s AI Act, the white paper presented by the British government doesn’t support the introduction of fixed laws or a designated regulatory body. Instead, it proposes “flexible, context-specific principles” that fall under the oversight of existing regulators.
At London Tech Week, Sunak acknowledged people’s fears and stressed the UK’s ambition to become “not just the intellectual home, but [also] the geographical home of global AI safety regulation.”
The British Prime Minister didn’t disclose any details on regulatory developments, but revealed that the UK will host the first ever Global Summit on AI Safety later this year.
Close collaboration with tech giants active in the field, such as DeepMind and OpenAI, could indeed give the UK a significant strategic head start in evaluating and regulating these systems, ahead of legislation that will mandate such transparency elsewhere in Europe.
But involving such companies in the conversation also comes with a risk. In the absence of an established legal framework, they could shape the safety approaches applied to their own technologies. The UK would therefore need to ensure that the outcomes of this public-private partnership don’t favour the companies’ agendas, but instead create guardrails for AI’s safe development.