
Covi Franklin - February 2026

The first year of Donald J. Trump's second term in office has been about as eventful as any forecaster or analyst would have dared to imagine when he was sworn in in January 2025. For those of us in Europe who had hoped we might still be able to rely on the certainty of a strong NATO to shield us from growing Russian security threats - Trump's latest efforts to take over Greenland and the subsequent diplomatic furore have laid bare the rift that has been growing in relations on either side of the Atlantic. This is, for the most part, not great news. But there is enough bad news floating around to last a lifetime... so I thought it would be useful to explore a potential benefit of this change in relations between the US and its allies. Could this transition in strategic positioning on the part of the 'middle powers',[1] as Mark Carney put it, open the door to improved prospects for multilateral coordination on sensible AI safety?

The Case for Multilateralism

Before beginning, let me briefly explain why I think that finding successful avenues for an effective multilateral approach to AI governance is really, really important. And yes, it's true that right now (with the UN facing potential collapse) it feels like a dreadful time to try to implement any form of ambitious multilateral approach to any issue with strategic implications for great powers. It feels unlikely that such an effort would succeed, and I'm mindful that much of my argument throughout this piece relies on a heavy dose of hope; but that doesn't make it any less important.

Now, many have argued that prioritising the 'let's all work together' approach misses the reality of the world we're living in. This is the 'race' framing of AI development: the US is in an existential race against the Chinese Communist Party (CCP) to develop AGI/ASI - a capability that will forever transform the strategic balance between the two and lead to vastly different futures for humanity. The risks of this approach are well articulated, notably in the AI 2027 scenario, in which the lacklustre approach to safety driven by the race dynamic leads to a misaligned AI that turns on humanity.[2] Underpinning the urgency of this approach is the belief articulated by Dario Amodei, Anthropic CEO, that China 'have hands down the clearest path to [an] AI-enabled totalitarian nightmare,' and that the US/West's approach to China should therefore be to do everything possible to hinder its development of sovereign AI, to delay and/or prevent this 'totalitarian lock-in'.[3]

I think this fear and scepticism towards the motivations of the CCP is broadly reasonable. Yet I also think that concern about the emergence of totalitarian lock-in (which follows from the centralisation of AI capacity) must be held in balance with concerns about a range of other potential existential AI risks that follow from poorly governed development of these capacities (ranging from misalignment to the proliferation of technologies that enable biological weapons production). The 'race' informs geopolitical strategic outcomes - but I don't believe it is productive in preventing the more existential risks we face.

To illustrate this, consider that underlying the logic of those who believe the AI race is a broadly positive endeavour is the notion that if only the US can reach ASI first, then all will be well. We will be launched into a near-infinite resource world under the grand banner of American freedom, or as the US AI Action Plan puts it - 'winning this race will usher in a new era of human flourishing.'[4] The real question is, would the US winning the AI race stop China from continuing its own efforts to catch up? Would China simply decide, 'okay, we lose', or in some way be forced to cease its own research? I think this is incredibly unlikely. There would be no benefit to China in simply 'downing tools.' Instead, it would be more likely to accelerate its efforts in the futile hope of catching up, closing the gap that had opened, and reversing the US' strategic advantage. Desperate, accelerated AI development is really a worst-case scenario when it comes to fears of misaligned AI posing existential risks.[5] The existence of a 'good' aligned AGI/ASI does not provide comprehensive protection from the destructive capacities of a 'bad' misaligned AGI/ASI.[6]

So here is the crux of my view: whatever the short-to-medium-term geopolitical benefits that people see in conducting 'the race' (and I do believe there are valid benefits), these are separate from the far greater, existential risks that exist in the medium-to-long term if any country, anywhere, does not properly and safely govern the emergence of AGI/ASI. The world must collectively ensure that it avoids the worst, existential outcomes and risks that come with the creation of AI.

The State of Play

So - having made my case for why putting in place some form of basic global guardrails is of critical importance in avoiding existential risk, let's review how the world is actually doing so far. Unsurprisingly... not too well. The international community's efforts towards a more multilateral approach have largely centred on the AI Safety/Action/Impact Summit series, which began in earnest with the UK-led Bletchley Park summit back in November 2023 and resulted in the signing of the Bletchley Declaration - a promising start in focusing the global community on the risks inherent in frontier AI models. But progress since then has been less convincing, with the Paris Summit (co-chaired by France and India) of February 2025 criticised as focusing on 'boosterism instead of reckoning with the gravity of a future containing artificial general intelligence', and the upcoming Delhi summit expected to follow in much the same vein.[7] The siren call of explosive economic growth is a difficult one to resist.

The other important avenue to consider is potential collaboration between the two big players themselves - China and the US. Since the US-China Track 1 discussions on AI safety in Geneva in May 2024 - an opening round during the Biden administration that produced few tangible results - there has been no formal follow-up. What we have seen from the Trump administration so far is a broadly adversarial approach, as outlined in the US AI Action Plan. While the Trump administration reversed its previous, more stringent restrictions on sales of Nvidia H200 chips to China, this decision seems driven more by a desire to maintain US chip-market dominance, as well as to ease tensions following China's restrictions on the sale of rare-earth metals at the height of the trade dispute between the two nations. While the sale of Nvidia chips comes with restrictions on end-user applications (no military/intelligence use, etc.), there was no effort to link this potential boost to China's AI industry to more rigorous safety standards and cooperation. All signs so far point to Trump's dealings with China for the next three years being defined by profit and power, with genuine cooperative efforts taking a back seat.

This is concerning - particularly given that there is a reasonable chance that the next three years will see some critical strides towards AGI, if not the arrival of AGI itself. Now is the time when effective safety policy is most needed, and can be most impactful in influencing scenarios which are yet to take place. This is arguably becoming all the more important as we see a diffusion of powerful open-source models and decentralised use of AI agents (the recent excitement about the creation of Moltbook, the Reddit-like social network for AI agents, is a case in point). That episode shows just how quickly new, unexpected, and high-impact developments in AI technology can, in theory, emerge. There is extremely limited scope within existing national policies for regulating the use, behaviour and diffusion of AI agents, let alone any sort of multilateral consensus; yet we are seeing individual users explore and experiment with agents capable of real-world impact in a broadly unregulated manner. As progress and experimentation with frontier models continue, we must be mindful of what could happen when national authorities fail to effectively regulate this fast-moving technology.

Changing Tides

This brings us back to the core question: what now? What can be done, given the troubling strategic outlook of two powerful actors prioritising development speed over safety?

I believe that the ongoing and severe rift in relations between the EU/UK and the US presents an opportunity for the 'middle powers' to play an increasingly effective advocacy role in pushing for more cooperative measures on AI safety. Prior to the fractious developments within the NATO alliance, it was hardly surprising that China looked sceptically on the intentions of many of the Western middle powers, given their clear and solid alignment with the US on the vast majority of economic and military issues.[8] Take, for example, the AUKUS agreement of 2021, which squarely positioned the UK as part of a military apparatus gearing up to hamper Chinese ambitions in the Pacific. It would be only rational for China to view the UK with great scepticism on all topics (including AI) that could affect the strategic balance of power between China and the US' broad sphere of influence.

Now, I'm not naively suggesting that damaged relations with the US mean that China will suddenly view the UK/EU as out-and-out allies (this would be neither realistic nor desirable from the perspective of middle powers whose values fundamentally contrast with China's). I am, however, suggesting that this opens up space for real dialogue and influence that was not necessarily possible before. Recent visits by European leaders have been welcomed by People's Daily (the CCP's premier media outlet and official newspaper) as a 'pragmatic turn towards China', which will enable 'collaboration on shared challenges from the green transition to digital governance that can generate mutual benefits and contribute to greater stability.'[9]

We have already seen concrete openness on the part of China to engage seriously on these issues, following the launch of the China AI Safety and Development Association (CnAISDA) at the Paris international AI summit in February 2025. Following this, China launched its own Global AI Action Plan later in the year, which included a particular focus on the need for international efforts on AI safety and on working through multilateral institutions such as the UN and the International Telecommunication Union (ITU). China (like many countries) seems to be grappling with the balance between the need for AI safety and the desire for diffusion into the economy to drive growth - but it has nevertheless begun putting in place the infrastructure needed for real, credible engagement with other partners.[10]

Areas for Progress

Some, such as the Centre for Long Term Resilience in its recent analysis, have emphasised that, given the global desire to reap the economic benefits of AI, stringent safety measures that directly slow frontier AI development may be resisted by governments.[11] Nevertheless, there are multiple areas where effective policy and advocacy efforts by the middle powers could make a marked difference while balancing ongoing progress with improved safety. I see two approaches in particular that could move the needle.

The first is to develop mandatory governance frameworks applying to all models/labs operating in their territories, so that the market's desire to access those territories drives widespread adoption regardless of US/China positioning. This could include shifting the emphasis from voluntary AI safety commitments to enforceable mandatory requirements, particularly around transparency of safety practices and the reporting of AI incidents. It could also include establishing international legal frameworks that effectively hold misuse of AI systems to account - such that regardless of the country in which an organisation operates, it will be mindful of the need to adhere to such regulations to avoid potential cross-jurisdiction liability.

The second is to develop pragmatic, high-level cooperative mechanisms that enable all sides of the AI race to engage on critical issues of common concern. This could include processes and procedures for AI incident preparedness and response, as well as exploration of de-escalation mechanisms - i.e. are there early-warning markers that can tell the AI community when risks are becoming unacceptably high and a change of approach is needed?

The US Question

Now, the obvious objection is that much of this argument has focused on how the Atlantic rift will increasingly enable the middle powers to influence China. But does this really mean anything if the US - arguably the most publicly resistant to potential safety-based restraints on its technological advances - continues to press ahead with safety as a lower-level priority? Indeed, just this month the US refused to sign its support for the 2026 Global AI Safety Report. For all the talk of the risk of totalitarian lock-in, could the more imminent existential threat stem from irresponsible AI development practices taking place in the US? This is, unfortunately, a major concern of mine - one only compounded by recent news of xAI's merger with SpaceX as part of Musk's vision of harnessing orbital, solar-powered data centres to further advance AI development.[12]

Nevertheless, suppose US-based AI development did pose an equal or greater threat to global safety than any other nation's activities - how, if at all, could the rest of the global community move the needle? Is the appropriate response simply despair, inaction, and hope that the US might show self-imposed restraint? I think this is needlessly defeatist. Firstly, if the middle powers were able to establish safety guardrails and preparedness measures to which China agreed, this could empower the voices of more safety-first actors in the US; the Anthropic model may garner more support over xAI's attitude towards AI safety if there is clear precedent that a more rigorous approach to safety does not necessarily equate to a blow to competitiveness.

In addition, demonstrating real progress on practical collaboration on safety measures may go some way towards toning down the 'race rhetoric'. This is admittedly optimistic, but if there are small ways in which countries can contribute to softening that rhetoric, it can only be a good thing.

Finally, it is worth considering the impact that effectively influencing China's approach to AI could have on the US post-2028. While I think it is fairly unlikely that the Trump administration's fundamental perspective on these issues will radically transform, a post-Trump administration, perhaps looking to rebuild damaged relationships, might be more likely to engage with and adhere to sets of AI safety principles if the rest of the major global players have already rallied around key measures.

Concluding Thoughts

Ultimately, those hoping to encourage broader prioritisation of AI safety over economic benefits face a challenging environment at this moment in time. Nevertheless, we must look to those areas where positive efforts can be made.

If middle powers can effectively leverage their newfound strategic ambiguity in this increasingly multipolar world, it could mark a starting point for improved collaboration and dialogue. Any opportunity to shift away from purely adversarial language on the issue of AI safety is one to be embraced.

There is, of course, a certain irony in all of this. The fracturing of the transatlantic alliance - an event that carries genuine risks for European security and the rules-based international order - may paradoxically create the conditions for more productive multilateral engagement on one of the most consequential challenges humanity faces. Whether the middle powers can seize this moment remains to be seen, but the alternative - continued drift towards an ungoverned AI race between hostile powers - is too dangerous to accept without trying.

  1. CBC News (2026) 'Read Mark Carney's full speech on middle powers navigating a rapidly changing world', CBC News, 20 January. Available at: https://www.cbc.ca/news/politics/mark-carney-speech-davos-rules-based-order-9.7053350

  2. AI 2027 Team (2025) AI 2027. AI Futures Project. Available at: https://ai-2027.com/

  3. Amodei, D. (2026) 'The Adolescence of Technology'. Available at: https://www.darioamodei.com/essay/the-adolescence-of-technology

  4. The White House (2025) Winning the Race: America's AI Action Plan.

  5. While a range of risks exists, the risk that most concerns me regarding China independently continuing its own AI race (even after the race is 'finished') is the emergence of a powerful, non-aligned AI.

  6. That is not to say there is no scenario in which a 'good' ASI could provide protection from a 'bad' ASI. I just don't think we can confidently trust that by developing a good ASI we are saved from all risks of a bad one.

  7. Ó hÉigeartaigh, S. (2025) 'What Comes After the Paris AI Summit?', Royal United Services Institute (RUSI), 27 March.

  8. I'm not in any way seeking to comment on the merits of the UK's or any other country's strategic/military position in relation to China (this is not a reflection on 'China good' vs 'China bad'). It is simply an effort to acknowledge how changing perceptions of posturing towards China may inform ongoing interactions in the AI space.

  9. Majueran, M. (2026) 'Europe takes pragmatic turn toward China', People's Daily Online, 4 February. Available at: https://en.people.cn/n3/2026/0204/c90000-20422495.html

  10. Elmgren, K., Singer, S. and Guest, O. (2025) 'Is China Serious About AI Safety?', AI Frontiers, 14 October. Available at: https://ai-frontiers.org/articles/is-china-serious-about-ai-safety

  11. Shaffer Shane, T. and Whittlestone, J. (2025) 'How the UK AI bill can improve AI security', The Centre for Long-Term Resilience. Available at: https://www.longtermresilience.org/reports/how-the-uk-ai-bill-can-improve-ai-security-2/

  12. xAI seems to be one of the frontier labs with the loosest safety standards, if its approach to image editing is anything to go by.
