The new cycle of AI summits: from Bletchley Park 2023 to Paris 2025
On 10 and 11 February 2025, France will host the AI Action Summit, following in the footsteps of the United Kingdom (2023) and South Korea (2024). The event will bring together heads of state, leaders of international organizations and businesses, researchers and members of civil society. Its aim is to lead a global reflection on a future for AI that serves the general interest and respects the common good. The summit is part of a series of international events intended to regulate and encourage the ethical and responsible use of AI while stimulating its economic potential.
The summit will focus on five key areas: 1/ AI in the public interest; 2/ the future of work; 3/ innovation and culture; 4/ trusted AI; 5/ global governance of AI. By organising this summit, France is seeking to regain the initiative by moving from a preventive approach, focused on risk management, to a resolutely proactive strategy. The ambition is to position AI as a lever for positive transformation in key sectors such as health, education, climate and the modernization of the State. However, the difficulty of harmonizing regulations between countries, the geopolitical divergences over AI – between the United States (whose next president will be known by then), China and the European Union in particular – and the voluntary nature of the commitments could hold back France's ambitions. This note looks back at the two international summits organized since 2023 before presenting the prospects for the summit organized by France.
Bletchley Park 2023: review and prospects for a first concrete step towards global AI governance
- While interest in artificial intelligence has risen sharply over the past two decades, it was OpenAI's launch of ChatGPT in November 2022 that truly disrupted the sector. This development put the governance and regulation of artificial intelligence back on the table in an unprecedented way. The AI Summits were created in response: recurring international meetings organized by governments to discuss the safety and regulation of artificial intelligence. The first, the AI Safety Summit, was hosted by the UK at Bletchley Park a year later, in November 2023, and focused on the risks posed by AI and the need for action.
- At the end of the summit, 28 countries and the European Union, including the United States, China and the United Kingdom, signed the Bletchley Declaration, which provides for the establishment of universal safety standards to govern the development of artificial intelligence without curbing innovation. The text emphasizes the ethical dimensions of AI: human rights, greater transparency, and the responsibility of the players involved in developing and using these technologies.
- In addition to heads of state, Big Tech leaders such as Sam Altman of OpenAI and Elon Musk of Tesla, as well as representatives from Anthropic, Google DeepMind, Microsoft, xAI and Meta, took part in the discussions on the need for regulation and enhanced monitoring to ensure that AI serves beneficial purposes while minimizing the risks associated with its use.
- Following the summit, an international report on the safety of advanced AI systems, written by a group of international experts, was published on 17 May 2024. Then UK Prime Minister Rishi Sunak also launched the world's first AI Safety Institute, tasked with testing the safety of new types of AI. The United States quickly created its own AI safety institute, and the Institut Montaigne recommends that France do the same.
- Despite its laudable intentions, the Bletchley Declaration does not explain how monitoring and control mechanisms will avoid hampering innovation. What's more, it glosses over the differences between countries on how to regulate AI: some, such as the United Kingdom and China, favour a more flexible approach, while the European Union and the United States are adopting stricter regulations. The European AI Act, for example, bans intrusive and discriminatory uses, such as real-time biometric identification in public places, a measure that goes well beyond the declaration. China, on the other hand, continues to integrate AI into controversial practices such as mass surveillance, social scoring and weapons systems, reflecting a very different approach. This divergence points to an underlying geostrategic competition. It is therefore important that the regulations resulting from these summits do not penalize new players to the benefit of the technology giants already established in the United States or China, which are waging an intense technological war. Finally, the declaration has been criticized for focusing on catastrophic AI scenarios to the detriment of concerns about the technology's impact on workers and the specific challenges facing countries of the Global South.
2024: Seoul’s ‘mini summit’ on AI
- On 21 and 22 May 2024, the Republic of Korea and the United Kingdom jointly organized the second AI summit, the AI Seoul Summit, with the aim of maintaining the momentum generated by the Bletchley summit. This 'mini summit' was held both virtually and in Seoul over two consecutive days of events. It was structured around two conferences, the AI Seoul Summit proper and the AI Global Forum, held simultaneously, in the same place and with largely the same participants.
- The AI Seoul Summit followed a similar format to the UK's 2023 summit, with heads of state invited; China once again chose to attend only the ministerial meeting. A select group of AI industry representatives also attended to report on the safety measures adopted in connection with the Bletchley Declaration.
- The first day was co-chaired by South Korean President Yoon Suk Yeol and UK Prime Minister Rishi Sunak. On the second day, a ministerial meeting jointly organized by the South Korean and UK digital ministries was held in Seoul, while the AI Global Forum took place in parallel. Although Korea took centre stage, the UK remained at the forefront, seeking to keep a firm grip on the summit agenda.
- This summit adjusted its approach slightly by adding themes such as innovation and inclusiveness to the risk-related issues on the agenda. However, some critics felt that this diversification could dilute the focus on safety that had made the UK summit distinctive in an already crowded landscape of AI policy initiatives. This was seen as one factor behind a drop in participation and media coverage, a decline that was nevertheless expected for an event presented as a 'mini summit'.
- While the Seoul summit did not achieve the same prominence as the UK summit, it was nonetheless a success. It contributed to the multiplication of AI safety institutes around the world, signalling a significant strengthening of governments' collective AI safety capacity. In addition to the US and UK initiatives, Japan, South Korea and Canada announced the creation of their own AI safety institutes. The European Union suggested that the AI Office, created under the European AI Act, take on a similar role for the whole of the Union.
- Finally, the Korean and British organizers obtained a declaration of intent signed by ten countries and the European Union, aimed at creating a network of cooperation between these different institutes. While the UK summit had laid the foundations for this idea, the Seoul summit broadened the scope to a global level, promoting a collaborative approach to AI safety.
2025: France displays its ambition to be at the forefront of the AI race
- In February 2025, France will take over by hosting the third edition of the summit, illustrating its intention to move from a preventive approach to a proactive strategy, with openness as a central item on the agenda.
- On the eve of the VivaTech innovation fair, on 21 May 2024, Emmanuel Macron had already stated France’s ambition to become a world leader in AI, positioning Paris as a nerve centre for this technology.
- For this year's event, the Elysée wishes to emphasize a more positive vision of artificial intelligence. Emmanuel Macron has stated that, although mastering AI is an existential challenge for France, the technology represents above all an engine for growth in strategic sectors such as health, education, state transformation and the fight against climate change. The summit will be structured around five key themes: AI in the public interest, the future of work, innovation and culture, trusted AI, and global governance of AI. It will also address cross-cutting issues such as gender equality and environmental implications.
- Anne Bouverot, France's special envoy for AI, confirmed that this edition will be more inclusive and oriented towards the use of AI for the public good. At the end of September, she attended AI events at the UN General Assembly in New York, including sessions with Sam Altman (CEO of OpenAI) and other international leaders. The French summit will build on the work done by both the United Nations and the Bletchley Park summit to coordinate global AI regulation.
- However, several challenges remain. The summit could rest mainly on voluntary commitments that are not binding on states and companies, limiting its tangible impact. In addition, regulating AI-producing companies, which operate under a variety of legal regimes and are not bound by state borders, poses a further difficulty. Faced with this, Ms Bouverot stressed that France would continue to strengthen its international partnerships and draw on the recently launched international network of AI safety institutes to assess AI systems.
- Although details of the venue and participants have yet to be finalized, Ms Bouverot reported that many technology executives from major companies had confirmed their attendance. France is working to broaden the guest list beyond industry, including through initiatives such as the Paris Peace Forum's call for projects promoting AI for the common good.
- US participation in the Summit could depend on the outcome of the November 2024 presidential election. The summit is scheduled for 10 and 11 February, less than a month after the inauguration of the new president. Currently, US AI initiatives are strongly aligned with the values promoted by the Biden-Harris administration, notably through the AI executive order signed by Joe Biden, which emphasizes security, transparency and the fight against misinformation. However, if Donald Trump is re-elected, the objectives of the summit could be at odds with his priorities. Some US Republicans have strongly criticized efforts to combat disinformation, describing them as censorship. Donald Trump has also promised to repeal Biden’s executive order on AI and replace it with a policy based on the slogan ‘Make America First in AI’, which is likely to be geared more towards economic competitiveness than international cooperation or ethical issues.
Marc Reverdin, Managing Director, mr@reverdin.eu
If you would like to discuss the political situation further and understand what impact it will have on the business climate and macroeconomic framework, as well as on regional and international policy, please do not hesitate to contact us.
We help our clients navigate political and financial dynamics from local to global.