A proposed approach for AI safety movement building

Epistemic status

Written as a non-expert to develop and get feedback on my views, rather than to persuade. It will probably be unclear, incomplete and unhelpful in parts, but it should provoke helpful feedback and debate.

 

Aim

This is an attempt to outline a theory of change for AI Safety movement building. I want to use the theory of change to work with other people in the AI Safety community to understand when and how we should do AI Safety movement building. I don’t necessarily want to immediately accelerate recruitment into AI Safety, because I take concerns (e.g., 1,2) about the downsides of AI Safety movement building seriously. However, I do want to understand how different viewpoints within the AI Safety community overlap and aggregate.

 

My motivation for writing this series

I asked a lot of people in the AI Safety community about AI Safety movement building and felt that there was a huge amount of variance in vocabulary, opinions, visions and priorities. I expected there to be a lot of variance, but I didn’t expect it to be so hard to synthesise. I wanted a better understanding of what to do and why, somewhat akin to what exists within EA via the Importance, Tractability and Neglectedness (ITN) framework and the EA cause areas. This experience convinced me that rather than jumping into a role, I would be better off taking time to write this series.

 

Audiences

Here is who I wrote this series for and why.

The wider AI Safety community: To get feedback on my attempt at understanding the AI Safety community, its key concepts and aims, and its current plans.

Potential, new and current AI Safety movement builders: To explain and get feedback on my ideas and to find opportunities to collaborate, and to save time by writing my thoughts down rather than explaining them repeatedly in conversation.

 

Outline

I start the series with a post where I conceptualise the AI Safety community as having four overlapping groups: Strategy, Governance, Technical and Movement Building.

 

In future posts, I plan to:

  • Outline three factors/outcomes for AI Safety-related movement building to consider.
  • Discuss broad practices to progress these factors/outcomes.
  • Discuss specific projects that use those practices.
  • Discuss specific skills that might be useful for working on those practices and projects.
  • Argue that improving the quality and quantity of people doing movement building is a bottleneck for progress in AI Safety and EA, and offer ideas for how to do it better.
  • Argue for a movement building approach I call ‘fractional movement building’: where many, if not most, people doing useful direct work are funded to do some movement building (e.g., supervision, knowledge brokering, mentoring and recruiting) for an appropriate fraction of their time.