For background and context, see my related series of posts on an approach for AI Safety Movement Building. This is a quick and concise rewrite of the main points in the hope that it will attract better engagement and feedback.
Which of the following assumptions do you agree or disagree with? Follow the links to see some of the related content from my posts.
Assumptions about the needs of the AI Safety community
- A lack of people, inputs, and coordination is one of several issues holding back progress in AI Safety. Only a small portion of potential contributors are focused on AI Safety, and current contributors face issues such as limited support, resources, and guidance.
- We need more (effective) movement builders to accelerate progress in AI Safety. Utilising diverse professions and skills, effective movement builders can increase contributors, contributions, and coordination within the AI Safety community by starting, sustaining, and scaling useful projects. They can do so while getting supervision and support from those doing direct work and/or doing direct work themselves.
- To increase the number of effective AI Safety movement builders we need to reduce movement building uncertainty. Presently, it's unclear who should do what to help the AI Safety Community or how to prioritise between options for movement building. There is considerable disagreement between knowledgeable individuals in our diverse community. Most people are occupied with urgent object-level work, leaving no one responsible for understanding and communicating the community's needs.
- To reduce movement building uncertainty we need more shared understanding. Potential and current movement builders need a sufficiently good grasp of key variables such as contexts, processes, outcomes, and priorities to be able to work confidently and effectively.
- To achieve more shared understanding we need shared language. Inconsistencies in vocabulary and conceptualisations hinder our ability to survey and understand the AI Safety community's goals and priorities.
Assumption about the contribution of my series of posts
I couldn't find any foundation of shared language or understanding in AI Safety Movement building to work from, so I created this series of posts to share and sense-check mine as it developed and evolved. Based on this, I now assume:
- My post series offers a basic foundation for shared language and understanding in AI Safety Movement building, which most readers agree with. I haven't received much feedback but what I have received has generally been supportive. I could be making a premature judgement here so please share any disagreements you have.
Assumption about career paths to explore
If the above assumptions are valid then I have a good understanding of i) the AI Safety Community and what it needs, and ii) a basic foundation for shared language and understanding in AI Safety Movement building that I can build on. Given my experience with entrepreneurship, community building, and research, I therefore assume:
- It seems reasonable for me to explore whether I can provide value by using this shared language and understanding to initiate, run, or collaborate on projects that help to increase shared understanding and coordination within the AI Safety community. For instance, this could involve evaluating progress in AI Safety Movement building and/or surveying the community to determine priorities. I will do this while doing Fractional Movement Building (i.e., allocating some of my productive time to movement building and the rest to direct work/self-education).
Feedback/Sense-checking
Do you agree or disagree with any of the above assumptions? If you disagree then please explain why.
Your feedback will be greatly valued and will help with my career plans.
To encourage feedback I am offering a bounty: I will pay up to 200 USD in Amazon vouchers, shared via email, to up to 10 people who give helpful feedback on this post or my previous posts in the series by 15/4/2023. I will also consider rewarding anonymous feedback left here (but you will need to give me an email address). I will likely share anonymous feedback if it seems constructive and I think other people will benefit from seeing it.
Hey Yonatan, thanks for replying, I really appreciate it! Here is a quick response.
I read the comments by Oliver and Ben in "Shutting Down the Lightcone Offices".
I think that they have very valid concerns about AI Safety Movement Building (pretty sure I linked this piece in my article).
However, I don't think that the optimal response to such concerns is to stop trying to understand and improve how we do AI Safety Movement building. That seems premature given current evidence.
Instead, I think that the best response here (and anywhere else there is criticism) is to proactively try to understand and address the concerns expressed (where possible).
To expand and link into what I discuss in my top-level post: when I, a movement builder, read the link above, I think something like this: Oliver/Ben are smarter than I am and more knowledgeable about the AI safety community and its needs. I should therefore be more concerned than I was about the risks of AI Safety movement building.
On the other hand, lots of other people who are similarly smart and knowledgeable are in favour of AI Safety movement building of various types. Maybe Oliver and Ben hold a minority view?
I wonder: do Oliver/Ben have the same conception of movement building as me, or as many of the other people I have talked to? I imagine that they are thinking about the types of movement building which involve largely unsupervised recruitment, whereas I am thinking about a wide range of activities. Some of these involve no recruitment at all (e.g., working on increasing contributions and coordination via resource synthesis), and all are ideally done under the supervision of relevant experts. I doubt that Oliver and Ben think that all types of movement building are bad (especially given that they themselves work as movement builders).
So all in all, I am not really sure what to do.
This brings me to some of what I am trying to do at the moment, as per the top level post: trying to create, then hopefully use, some sort of shared language to better understand what relevant people think is good/bad AI Safety Movement building, and why, so that I can hopefully make better decisions.
As part of this, I am hoping to persuade people like Oliver/Ben to i) read something like what I wrote above (so that they understand what I mean by movement building) and then ii) participate in various survey/discussion activities that will help me and others understand which sorts of movement building activities they are for or against, and why they feel as they do about these options.
Then, when I know all that, I will hopefully have a much improved and more nuanced understanding of who thinks what and why (e.g., that 75% of respondents want more ML engineers with skill X, or think that a scaled-up SERI-mats project in Taiwan would be valuable, or have these contrasting intuitions about a particular option).
I can then use that understanding to guide decisions about if/how to do movement building as effectively as possible.
Is that response helpful? Does my plan sound like a bad idea or very unlikely to succeed? Let me know if you have any further questions or thoughts!