
This post is to catalyze discussion of Anton Leicht's Threading the Needle Substack piece published Jan/26: How AI Safety Is Getting Middle Powers Wrong: The case for pivoting from global governance to national interests.

I'm Canadian, i.e. a citizen of a middle power state, and I particularly encourage and welcome discussion with those of you residing in middle powers (a slightly fuzzy list!), along with others who hold informed perspectives on middle power contexts.

This is my first post on the forum. Thanks for contributing to the discussion!

 

At a high level, Leicht's argument is summarized below. I preserve his wording as much as possible throughout these main points; much more detail on each (and every single) point is in the full piece.


The argument:

  • The AI development-focused approach for middle powers was always a long shot; today it’s actively harmful.
  • If safety advocates in middle powers abandoned the unrealistic pursuit of leverage and global governance, they could pivot to making AI deployment go well in middle powers.
  • Helping middle powers navigate AI deployment, build resilience, and avoid strategic blunders is tractable, neglected, and would actually advance safety.
     

    The existing platform doesn't really work & may hurt the broader AI safety agenda:

  • Safety advocates have largely engaged with the middle power conversation from a US- and China-centric point of view, assuming that critical risks emerge from the development of AI systems and that their engagement in middle powers should therefore treat them as levers to affect US development
  • Attempts to influence (US) development from outside don't really work, whether through domestic law or international policy, and will grow even less effective as AI and international technology regulation become increasingly salient
  • AI development-focused international work is also increasingly unhelpful to the broader safetyist agenda. Loudly broadcasting the strategy of using national laws and international treaties to externally constrain the US has led to more forceful rejections of any attempt at international governance.
  • The focus on AI development frequently leads safety advocates to work against the narrow national interest of their respective countries, weakening the movement's standing in national environments: it seeks to leverage the nation's resources to pursue a broader altruistic mission rather than the nation's own goals
     

    Trade-offs:

  • Political and financial capital for AI safety is better spent elsewhere than on AI development: advancing the national and collective AI strategy of middle powers writ large
  • Trade-off: limited talent and funding - we can't simply add the type of work suggested below to the existing portfolio
  • Trade-off: more importantly, credibility - when safety advocates endorse domestic regulation that trades off their nations’ interests in favour of the greater good, the political ramifications cost them leadership and influence on a range of other issues where they’d otherwise be helpful voices, like national sovereignty, AI strategies, beneficial deployment, and downstream resilience
     

    Safety-relevant reasons to work on middle power policy now:

  • Middle powers are where harms from AI misuse might manifest earliest and most dramatically
  • Middle powers are where gradual disempowerment and widespread destitution seem most plausible
  • Middle powers getting AI wrong can lead to a destabilised world
  • Contributing to national AI strategies in middle power governments should itself be a safetyist priority
  • This work could open a potential door back to the AI development focus: if there is some way to bring middle powers into a position of strength, they might once again affect AI development


    What the safety movement can do:

  • The safety movement is a capable player and could start or continue mobilising resources toward this goal
  • This is already happening in some places, but we need a decisive pivot to grapple with the reputational dimension and deploy resources at scale. Existing work advancing middle powers’ strategic agendas is not endorsed from the top, has not made its way into the mainstream view of what matters, is not reflected in big funding or talent pipelines, and is not presented as a key pathway to safety-minded impact.
  • Orgs: Creating new middle power organizations as main drivers of this process will be beneficial
  • Talent: The research ecosystem can do more, most promisingly in talent pipelines where major safety-aligned mentorship programs could offer a starting point for middle-power-focused researchers
  • Funding: Major funders could publicly prioritise these cause areas, launch RFPs around them, and incubate organisations that tackle them. Most ambitiously, they could apply the same strategy to the middle power space that they have to the national security conversation - making enough space for this work through e.g. keystone grants to top-tier institutions and cultivating deep expertise between old-school policy hands and entrepreneurial safety advocates whose cutting-edge knowledge enriches the discussion.


 

As a Canadian, this argument really resonates with me. For the first time since I encountered the EA AI safety cause, this piece helps me make sense of the foggy uncertainty I've been navigating as I've tried to sort out whether there's any effective way I can contribute to this cause that doesn't boil down to one single path: securing a (highly competitive) job at an American organization.

It also gives me a sense of vision that matches the actual powers, influence, and goals a middle power state has; so far I have mostly encountered invisibility, and certainly not vision.

This piece was eye-opening for me. I recognize that others may have thought long and hard about this and may be actively building toward it. I'd like to learn from others:

  • What's your reaction to this? Is this argument new to you, or are you familiar with it? Where do you agree, and where do you see issues? Share your hot take, a well-thought-out opinion, whatever
  • Are there examples of work underway toward this 'decisive pivot'?
  • What might the path forward look like? Are you building anything? Do you know of someone who is?
