Hailed as a landmark piece of international agreement, the Bletchley Declaration saw 28 countries affirm that urgent action must be taken to ensure future AI development is “human-centric, trustworthy and responsible” in order to “transform and enhance human wellbeing, peace and prosperity”*1. Noble goals, but anyone reading the agreement further could be forgiven for noticing that it’s a little light on actual detail. So, just how well does the Bletchley Declaration reflect the discussions of the eponymous two-day Bletchley Park conference? Here are three of its key affirmations and what they might mean for the real-world development of frontier AI.
1. “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. […] we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.”
Probably the most quoted passage from the declaration, this headline-ready snippet names cybersecurity, biotechnology, and disinformation as specific areas of concern, but otherwise remains cautiously broad in its scope. The phrasing of this issue in the declaration was always going to raise a few eyebrows - John Tasioulas, Director of the Institute for Ethics in AI at the University of Oxford, described it as having stretched “the concept of safety” to include “pretty much all values under the sun”*2, while Eigen Technologies founder Lewis Liu denounced it as “doom mongering”*3. President of the European Commission Ursula von der Leyen, however, used her speech in the first session of the conference to draw attention to the AI Act, which gives concrete examples of AI risk ranging from “unacceptable”, such as voice-activated toys that encourage dangerous behaviour in children, through to “minimal”, such as AI-enabled spam filters*4. In fact, the first session of the summit was dedicated entirely to round-table discussions describing the risks posed by AI, including the risks from “Unpredictable Advances” and “Loss of Control”*5. If you’re picturing scenes from The Terminator right now then you’re likely not alone, but Poppy Gustafsson, CEO of cybersecurity firm Darktrace, reassured reporters that the closed-door sessions focused on the “daily reality” of AI, rather than whether the “robots are going to kill us all”*6.
2. “Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. […] This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks.”
Much was made at the conference of the breadth of countries represented, including China, whose attendance and agreement Prime Minister Rishi Sunak took great care to emphasise. Despite Sunak warning against “glass half empty”*7 thinking, China’s presence at the conference remained an uneasy one, with vice minister of science and technology Wu Zhaohui being granted only limited access to the conference’s events. Indeed, given that only last year the National Cyber Security Centre warned that Chinese technical development was “likely to be the single biggest factor affecting the UK’s cyber security in the years to come”*8, there were many who felt that any inclusion of Chinese officials at all was unjustifiable. Most vocal among these was former Prime Minister Liz Truss, who urged Sunak to withdraw his invitation, saying that “no reasonable person expects China to abide by anything agreed at this kind of summit, given their cavalier attitude to international law.”*9
Equally divisive among participants was the subject of legislative responses to AI development. While the European Commission moves ahead with passing the AI Act, Sunak refuses to give a timeframe for when any comparable legislation might be produced by the UK. In his statement to the press on the final day of the conference he said that although “ultimately binding requirements will likely be necessary”, governments needed time to better understand what they are legislating for, with his highly promoted new AI Safety Institute tasked with gathering empirical data before the government can “spell out the formal regulations”*7. Even the agreement for leading AI companies to submit their models for testing prior to release did not win unanimous support, but was instead adopted by a select group of “like-minded governments”*10. China, of course, was not among this number, although the technology ministry declined to say why. Despite Sunak’s protestations, then, the jury is likely to remain out on whether a truly international consensus is possible.
3. “We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks.”
The idea of greater transparency from key AI developers was a refrain repeated throughout the conference. Despite the European Commission’s vice-president for values and transparency, Věra Jourová, pointing out the UK’s propensity to value “social responsibility” over regulation*11, Sunak made it clear that he does not believe AI developers can “mark their own homework”*7 on this subject.
The “relevant actors” represented at Bletchley Park included representatives from Amazon, Google DeepMind, Meta, and Microsoft, as well as OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak, many of whom were also recent signatories to the open letter “Pause Giant AI Experiments”. Calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, this letter urges developers to stop asking whether they can, and instead consider whether they should*12. Such a pause would allow many of the systems Sunak describes to actually be put in place – safety institutes, expert panels, and a deeper understanding of what we are actually dealing with. Government systems, after all, are not known for their agility in keeping pace with rapid change.
As you may expect though, not everyone agrees with this approach. DeepMind co-founder Mustafa Suleyman said that although he wasn’t ruling out such a pause, he didn’t see “any evidence today that frontier models of the size of GPT-4 […] present any significant catastrophic harms”*11. Taking a stronger stance, the British Computer Society described such a pause as “unrealistic” with the potential to “result in a position which is ‘asymmetric’”, providing “bad actors an advantage in developing AI for nefarious purposes.” In their report “Helping AI Grow Up - Without Pressing Pause”, they echoed the position of the UK government by recommending instead a system of independent oversight and careful monitoring*13.
It seems, then, that most agree on the “what”, while the “how” remains open to interpretation. The question of light-touch vs heavy-handed policy, of formal vs informal oversight, and of public enforcement vs personal responsibility, is likely to rumble on. So too will the quiet tug of war over what constitutes harm, and over how international cooperation can be achieved in an area with such high stakes and such deep rivalries. One thing is clear though: the Bletchley Declaration is only the beginning of what will be a long and evolving conversation. The coming virtual mini-summit hosted by the Republic of Korea in six months’ time will be critical in maintaining that momentum, as well as strengthening the relationships built over the course of the conference. What the future holds remains uncertain, but as von der Leyen said in the conclusion of her speech, “history is watching”*14.