
This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.

Caveats: I’m writing this post in my personal capacity and it does not necessarily represent the views of anyone else. This is a first draft and I’m rushing to publish it in time for the EA Strategy Fortnight, so please take into account that it’s quite rough. “LMIC” (low- and middle-income countries) is a bundle of incredibly varied countries, and this post oversimplifies that diversity. Please be mindful that I don’t intend to argue that my claims apply equally to all LMIC – knowledge of local particularities is always key to understanding what works best in each place.

All images in this post were generated with Runway ML (no particular preference for it – it was just more convenient at the time, and it was impossible to turn my back on the irony of its name).

Key takeaways

  • I think AI safety (AIS) field-building has gone through different phases in the last year that entailed distinct strategies and repercussions in LMIC.
    • Growth phase (until Nov 2022): available funding, several experimental projects started in LMIC, AIS field-building felt low stakes and focused on information value and expansion. 
    • Crisis phase (Nov 2022 - March 2023): resource constraints, reputational challenges, and higher risk involved in AIS field-building due to association with scandals. Some LMIC projects slowed down or stopped as the bar became higher due to the risks.
    • Mainstream phase (March 2023 - currently): increased attention and resources, easier to discuss AIS, including extreme scenarios. The bar for AIS field-building remains high: because AI discussions are getting crowded, even in LMIC, we want high-quality engagement.
  • At the end of the post, I speak directly to field-builders:
    • Keep downside risks of your engagement more prominently in mind (e.g., poisoning the well, accelerating capabilities instead of safety, lowering the quality of the AIS debate).
    • Start modestly with educational/upskilling projects (e.g., skilling up a small group of colleagues) and get feedback early from more experienced people.

 

Who should read this? How should you read this?

  • Who: People who are considering starting AI safety field-building efforts in LMIC, people already working in LMIC who are confused about their strategic next steps, and/or people interested in learning more about strategic differences between AIS hubs and other places
  • How: The most important parts are probably my take on the current "mainstream phase" and the final section on my tips to pursue AIS field-building ("How I think..."). Maybe go there first and then go back to the beginning if you're interested in the whole thing?

Epistemic status

  • A big chunk of my reasoning is informed by my experience doing AI safety field-building-related activities in Brazil (where I’m from) over the last year via Condor Camp, a talent search project focused on Brazilian university students and x-risk mitigation.
  • But another chunk of my reasoning is based on conversations I had with dozens of folks running AIS field-building-related projects in LMIC around the world (though these were often about EA movement building more broadly rather than AIS specifically).
  • I think I’m pretty confident about the description of the changes and repercussions in LMIC field-building. I’m less confident about the “How I think AIS field-builders in LMIC should proceed” part as I haven’t thought as much about it, and encourage reactions and alternative suggestions.

My framework for thinking about AIS field-building in LMIC

I’ve found myself updating my views a lot over the last year on how I think we should approach AIS field-building globally. I think one framing that has helped me understand those swings in my own opinion is thinking in terms of different “phases” where resources, incentives, and risks changed. 

For simplicity, I’ll break down the last year into three rough phases:

  • Growth phase (until Nov 2022)
  • Crisis phase (Nov 2022 - Mar 2023)
  • Mainstream phase (Mar 2023 - currently)

Each of these phases seems to me to have had different repercussions on field-building in places with no pre-existing AI safety field, which is the case for most low- and middle-income countries (LMIC) and is what I’ll focus on here. Throughout this post, by “LMIC” I mean a developing country with practically no pre-existing AI safety field or community.

Phases of AIS field-building and repercussions in LMIC

Growth phase (until Nov 2022)

Resources and incentives

  • Funding was more easily available, especially for “experimental” projects (many LMIC-related projects fell under this category and got kickstarted in this phase)
  • Gradually, more people from various regions started to join the community and initiatives popped up in LMIC as a product of this additional availability of human capital (even if many people were fairly inexperienced or didn’t have in-depth models of AI safety)
  • Kickstarting projects in various countries also brought a lot of information value to the field

Risks 

  • Talking about AGI risk was perceived as somewhat weird and often required a relatively long reasoning process that passed through more legible EA themes
  • Despite weirdness points, I think talking about AI safety was somewhat low-stakes. Most LMIC didn’t really have clear paths to getting involved in anything AGI-related or even clear reasons to care about AGI development and its consequences
  • Since risks seemed low, it seemed worth it to talk about AI safety in LMIC to a) find untapped talent in new places, and b) potentially prime people and decision-makers for when more consequential times came

State of affairs of AIS field-building in LMIC (in my view)

  • Longtermism was the main framing for AI-related efforts (e.g., talking about future generations and scope sensitivity, bundling together AI safety, pandemic preparedness, nukes)
  • As a result, the most prominent problem was that longtermist issues are typically understood in contrast to “neartermist” causes, which are much more clearly pressing in LMIC
    • This is the surface-level problem that many people think of when they reflect on the challenges of EA/LT in LMIC in general.
    • As a response to this, I think a good framing for talking about AI safety and other longtermist causes typically involved referring to the effects that these low-probability, transformative problems could have on more mainstream problems (e.g., how the COVID pandemic disrupted education, the health system, economic growth)
    • But I feel like many community builders encountered somewhat averse reactions to the future generations framing when working on the ground. I think this contributed to many projects being somewhat “on the fence” about AIS and preferring to frame it in the context of other more legibly relevant EA causes (at least I personally felt that to some extent and did that more than once in my work with Brazilians)

Summary of that period for AIS field-building in LMIC

  • Many new projects started, and AIS considerably grew beyond the US/Europe bubble
  • AIS was often still tightly connected to EA
  • The bar for working on AIS field-building-related projects was lower, with smart, generalist EAs taking the lead on field-building projects
  • Much of the value came from a) information, b) talent search, and c) priming people about a novel, important problem

Crisis phase (Nov 2022 - March 2023)

Resources available

  • With the FTX crisis in November 2022, a considerable chunk of financial resources was pulled from AIS field-building activities (not only FTX-related resources; all the grantmaking institutions became temporarily slower and more wary)
    • I naively think that experimental projects suffered the most, and many LMIC projects were in that category 
  • With other scandals related to racism and sexual harassment involving prominent figures in the community, reputational capital also became quite scarce (e.g., it isn’t enough to be affiliated with a prestigious university if the first Google search of someone’s name leads to awful results)

Risks

  • I think in this phase the risks were considerably higher. I wrote a couple of memos during that period arguing for more caution on AIS-related activities in LMIC-like places: one on “Challenges to building longtermism in Brazil” for EA Brazil and another on “We should slow down global AIS field-building” for the Summit on Existential Security
  • I think a big chunk of the risks specific to this phase was related to poisoning the well by losing ground to (circumstantial) controversy. Controversy related to AIS and EA was incredibly high, and I think these scandals hit differently in developing countries because of socioeconomic and historical circumstances, and because there is no previous AIS-related context there
  • I have some anecdotal evidence from places like Brazil, India, and Mexico that professors and other experts were warning people new to the field (e.g., Condor Camp participants) that work on AIS was doomsday cult-like and not rigorous, which I think turned off some people and limited the reach and impact of some projects

State of affairs of AIS field-building in LMIC (in my view)

  • My sense is that many projects got frozen, slowed down or effectively died due to the resource constraints and emotional burden that period imposed on many field-builders
    • I think this hit differently in LMIC because it’s more common for people working on meta projects to face financial hardship (not necessarily being poor, but things like contributing to one’s family bills are more common). Even if they’re not facing hard circumstances at that particular moment, I think it’s more common for young people from LMIC to think they have few shots at success and they can’t really stop their careers or endure instability for too long. I think this is indeed true in many cases (or at least I often feel like that myself)
    • So even in cases where the project received funding months after the crisis started, some people had already moved on or were unwilling to pick up the projects because they felt they couldn’t endure the instability of the ecosystem (and the emotional burden that came with it)
  • Besides that, I think several people raised alarm bells (including me via the memos I mentioned), and this made some field-builders update in the direction of caution
    • So this period ended up being useful for some field-builders to prepare projects, skill up, and plan ahead for when the resources landscape became more favorable (e.g., I think some folks in India have done a great job at this)
    • Still, I think there was some lag between the mindset of the growth phase (that we should have a low bar for experimentation) and the crisis phase (that the risks were higher and therefore the bar for continuing field-building activities was also higher)
  • On the other hand, projects led by competent field-builders were important to keep high-fidelity information about AIS going 
    • This is particularly important when the news cycle is filled with low-quality information – many media outlets weren’t seeking experts in AI, but rather the attention brought by the connection between AI and controversial billionaires
    • By keeping a flow of high-fidelity information, field-builders gain reputational points and experience and contribute to the public debate (even if mostly within their target audience, which arguably has limited reach)

Summary of that period for AIS field-building in LMIC

  • Many projects died or went on standby due to resource constraints, which arguably hits folks in LMIC harder
  • AIS was often linked to scandals and bundled with controversial, low-status-for-the-mainstream fields like crypto, disincentivizing some folks and hindering the impact of some projects
  • AIS started to dissociate from EA
  • The bar for working on AIS field-building rose considerably given the risks of losing ground to (circumstantial) controversy
  • Much of the value of the remaining projects came from keeping high-fidelity information flowing for interested folks, which the most competent field-builders managed to keep doing

Mainstream phase (Mar 2023 - currently)

Resources available

  • Attention on AI is much higher since ChatGPT, but more notably attention on AI safety is much higher since at least the FLI letter in March
    • This makes it much easier not only to talk about things that just months ago could be perceived as controversial, but also for people to effectively change their careers (which, as I noted, can be more challenging for people in LMIC)
    • This is also quite relevant for places without a pre-existing AI safety community, because all of a sudden everyone is more receptive even to more extreme scenarios, as the risks became much more palatable (and tangible, considering how interacting with LLMs feels like a technological leap). This is mostly true for places with a community too, but there you could at least previously point to legible things (e.g., FHI at Oxford, CHAI at UC Berkeley, the labs themselves), all of which I think can feel distant or less legible to many people in other countries (though this feels weird to say, as Oxford should be legible anywhere, so I’m not that confident about this claim)
  • Funding is still more scarce than in the growth phase but has gradually become more available for projects that clear a higher bar (mainly due to OP resuming grants) 

Risks 

  • I think the risk from people associating AI safety with controversial topics has gone down considerably, and people are generally much more receptive even to the weirdest ideas (at least in Brazil, even x-risks have already made it to national news with Geoffrey Hinton’s announcement)
  • I think the risk now is more related to the required quality of engaging with AI-related discussions being much higher
    • The signal-to-noise ratio is lower: there are now many more interest groups talking about AI and pursuing the attention of key stakeholders. It’s harder to identify who is serious and has valid claims
  • We can still poison the well: key stakeholders can associate AI safety asks with amateurish, unprofessional, unprepared efforts and decide to engage with other interest groups. Since there are several interest groups readily available, the time lapse between a frustrating first engagement with AI safety field-builders and locking into a suboptimal state can be quite short
  • Another risk is accidentally accelerating AI development: LMIC often perceive themselves as carrying a disproportionate burden on global security matters
    • In many developing countries the framing for engaging with AI will be one of “keeping up with recent developments.” 
      • As an analogy, this has continuously happened with environmental concerns in Brazil, considered a bellwether on the issue: presidents from right to left have framed environmental protectionism as a luxury of developed countries at the expense of developing ones, who carry a disproportionately heavier burden because they have not yet completed their development (e.g., Brazil’s current president talking about the Amazon today sends mixed messages about protection and exploitation).
    • Some field-builders can probably use this framing to strategically advance safety measures as a way to “bridge the gap” between frontier and peripheral countries
      • The caveat is that even such a safety-focused framing can contribute to race dynamics between e.g., US and China. It’ll be in the interest of any competitor to push for bridging such a gap
      • A concrete example is that the UK prime minister recently conflated capabilities and safety to make the case for “dramatically speed[ing] up UK AI capability”
      • This is a good example of why solid inside-view models of AI safety and governance are necessary for leading field-building in different places.
  • At the same time, it’s important to move relatively early as the field is quickly getting crowded – and we can benefit from the international community to help engage people with high-fidelity information about AI safety before they get hijacked by lower-fidelity information

State of affairs of AIS field-building in LMIC (in my view)

  • I think some projects that previously (e.g., in the growth phase) were umbrellas for other causes and AIS are now pivoting towards AIS more strongly (that’s been the case with Condor Camp)
  • New projects are also arising or are in the works (as opposed to the slow down/freeze of the crisis phase), and my perception is that a larger share of them will also be much more focused on AIS than before
  • I think most of the projects now being funded probably clear that higher bar, and I like that this has come in the aftermath of a period of preparation and reflection
  • Still, I think some folks are excited about coming into the field quickly and starting projects in new countries when they should probably be more careful and modest with their goals
    • E.g., some folks think immediately of influencing regional policy via think-tank-like research or outreach to policymakers. I think that’s likely a mistake in most cases unless you have a track record of this kind of outreach and have received positive feedback from more experienced members of the AIS community.

Summary of that period for AIS field-building in LMIC

  • Attention is much higher, and resources are gradually becoming available again (even if at lower levels compared to the growth phase)
  • AIS is gradually more dissociated from EA
  • We should maintain a similarly high bar (compared to the crisis phase) for AIS field-building because of the low signal-to-noise ratio (more people are competing for key stakeholders’ attention) and the risk of poisoning the well for the AIS community (with low-quality efforts redirecting attention away from this community)
  • At the same time, we should be strategic with this crucial moment and do our best to advance high-quality efforts. We can make use of existing materials and expertise in the international community for that


 

How I think AIS field-builders in LMIC should proceed

  • I think AIS field-builders should keep certain risks in mind more prominently 
    • Starting with the risks I mentioned above:
      • Poisoning the well
      • Accelerating capabilities instead of safety
    • But also:
      • Unintentionally attracting talent to AI capabilities: this is a separate issue, but low-quality field-building also makes this worse. 
        • I have some anecdotes of people I recently met who were convinced by AIS arguments and emigrated from an LMIC to upskill through fellowships, ultimately ending up doing capabilities research at labs (even though the distinction often isn’t clear, and alignment and interpretability are arguably dual-use). These anecdotes have updated me towards giving more weight to the potential negative effects of AIS field-building as a whole.
        • That’s not to mention the bycatch: people who don’t end up being motivated by AIS and are simply drawn to AI capabilities (more anecdotal evidence: at a recent EA conference I had at least three conversations with people who wanted to work on “AI apps”)
      • Low counterfactual impact: the most talented people would get international opportunities anyway.
      • Lowering the quality of AIS debate: at least one high-context person argues that candidates for AI safety teams haven’t improved in recent years, which seems like a data point against AIS field-building. More low-quality research or people make it harder to find high-quality information.
  • I think most wannabe field-builders will benefit from starting modestly with educational/upskilling projects (e.g., skilling up a small group of colleagues, talent search)
    • With projects like this, you build a track record, learn along the way by interacting with the real world (and by interacting more with AIS as a field), and typically will have low downside risks (especially if you don’t advertise it massively, which I think you shouldn’t do at first)
    • After you get traction and experience, if you end up reaching talented enough people, you’ll get pushback that should work as useful feedback about your field-building skills and improve your own network. If your audience trusts you enough to be associated with you and introduce you to others, you might even be able to gradually influence other stakeholders and build the field more widely
  • I also think many field-builders underestimate the importance of getting feedback and looping people in early on
    • To be honest, I think getting early feedback is much more common in this community than in most places, but I still would like to emphasize this because it can be quite frustrating to receive critical feedback only after you’ve put a lot of time into your project. 
    • Also, I think folks in many LMIC might not be used to the cultural norms that make it easier for people in EA hubs to get early feedback, and might see asking for early feedback as higher stakes and less important than it actually is in this community
  • I think in the current state of affairs, someone should only start considering engaging with higher-stakes decision-makers after the two steps above (track record + positive feedback from experienced folks)
     

Some ways in which I can be wrong

  • I think one big cluster of reasons why I might be wrong is related to: it might be better for us to move faster (with a lower bar for AIS field-building)
    • Considering how popular the theme currently is, maybe we should make use of the Overton window before it closes
    • We might want to grab the first-mover advantage and set the tone of the debate
  • On the other hand, maybe we should pause or go in the other direction: maybe AIS field-building is indeed net negative as it ends up differentially advancing AI capabilities
  • LMIC are so different from one another that my generalizations may be too flawed to apply to your particular context (or to any context besides my own)


Apologies for the rough post again – if anything isn’t clear or you’d like to chat more about this (especially if you’re considering a new national AIS field-building effort), ping me at renannascimentoaraujo@gmail.com and I can probably send some resources or contacts your way.


Comments

We should probably worry less about accidentally advancing capabilities these days as in some ways the cat is out of the bag.

I believe we should be less concerned in comparison to other phases, but I also believe we shouldn't be overly unconcerned. We can still screw up in several ways and e.g., direct people to ineffective avenues, poison the well, etc. I think we're in a pivotal moment that requires attention and carefulness to mitigate the chances of a suboptimal lock-in.

You should note in the intro that LMIC stands for "Low and Middle Income Countries". I love the images. 

Good point, just added that. Thanks!
