I've found myself with an intuition that there's a lot of value in people writing down exactly what they're doing and the mechanisms/assumptions behind that. I further have the intuition that it's perfectly fine if people do this rather quickly as most of the marginal value is in doing this at all rather than making sure it's all perfect. A lot of what I'm writing below won't directly be very useful to other people, but it may be useful in terms of demonstrating how exactly I think people should write down what they're doing.
Currently, I'm doing AI safety movement building. Most of my projects (local movement building, AI safety nudge competition, AI safety prioritisation course) are broad pipeline projects. This means that a significant component of their theory of impact is that they deliver value via marginal people increasing the size of the field. So I am implicitly assuming that either a) there's a broad set of people who can contribute to AI safety or b) even if there are only a few special people who can have any real impact, this kind of broad outreach will still find some of them, and the percentage won't be so small as to render broad outreach essentially worthless.
I should also acknowledge that even though my outreach is reasonably broad, I'm not all the way towards the end of the spectrum. For example, I'm not heavily promoting AI Safety ANZ to the general public, but mostly trying to pull in people from within EA who already have an interest in AI safety or people who I know personally who I'd love to have involved. Similarly, for the intermediate AI Safety course, we're focusing on people who have already done the fundamentals course or have spent a large amount of time engaging with AI Safety.
Further, I’m trying to encourage people to do the AGI safety fundamentals course so that there’s a baseline of common knowledge, which lets me target events at a higher level. This is based on the assumption that it is more important to enable already engaged people to pursue AI safety through their careers than it is to just create more engaged people.
For my local community building, I don't really see outreach as my main focus. I think it's important to conduct enough outreach to build a critical mass and to engage in outreach when there's an especially good opportunity, but I don't see the size of the community as the most important factor to optimise for, so outside of this I'd only engage in direct outreach occasionally.
I guess one of the key questions I'm trying to understand is what kind of community would be able to solve the alignment problem. I can't say that I've got a good answer to that yet, but I think in addition to needing highly intelligent and highly dedicated people, it's pretty important for these people to be open-minded and for these people to have a strong understanding of which parts of the problem are most important and which approaches are most promising. I’m pretty sold on the claim that most alignment work has marginal impact, but I’m not highly confident yet about which parts of the problem are most important.
I’m running events monthly in Sydney and every few weeks online. This is very little time compared to the amount of time people need to skill up to the level where they can contribute, so I feel that a lot of the benefit will come from keeping people engaged and increasing the proportion of people who are exposed to important ideas or perhaps even develop certain attributes. If I run some retreats, they will allow me to engage people for larger amounts of time, but even so, the same considerations apply.
The Sydney AI Fellowship is different in that it provides people with the time to develop deep models of the field or invest significant amounts of time developing skills, and the success of the first program suggests this could be one of the most impactful things that I run.
My current goal with community building is to establish critical mass by experimenting with various events until I’m able to establish reliable turnout, then to try to figure out how to help people become the kind of people who could contribute.
By running these projects I’m having some direct impact, but I’m also hoping that I’ll eventually be able to hand them off to someone else who might even be better suited to community building. I see community building projects as easier to hand off because, if successful, they will draw in talented people. That said, a) I’d be reluctant to hand one over before I had found a programme of activities that worked, and b) I worry that delegation is harder than it seems, as you need someone with good judgement, which is hard to define, AND you have to worry about their ability to further hand it off to someone competent down the line.
In addition to my direct impact, I’m hoping that if I am successful there will be more cause-specific movement builders at a country level. Again, this theory of impact assumes that these movement builders will have solid judgement and will be able to produce the right kind of community for making progress on this problem.
Beyond this, I often write ideas up on LessWrong, here, Twitter, Facebook or other locations. A large part of my motivation is probably based on a cognitive bias where I overestimate the impact of these posts, which likely only reach a few people (for most of whom they aren’t decision-relevant), and most of these people probably only retain a small part of what I write given the flood of content online.
I guess this pushes me heavily towards thinking that it’s important to find ways to build communities with common knowledge, but a) this is hard to do as people need to invest time, b) it’s hard to figure out what should be common knowledge, and c) this can lead to conformity.
I also think a lot of the value of starting a local AI safety group is that its existence passively pushes people to think more about pursuing projects in this space and removes trivial inconveniences. Having a designated organiser makes it easy for people to know who to reach out to if they want to learn more, and the existence of the group reduces people's self-doubt and makes it easier for them to orient themselves.
I’ve been having a decent number of one-on-one conversations recently. These conversations normally focus on people trying to understand how much of an issue AI risk is, whether they are suited to have an impact, and what needs to be done.
In terms of how important the issue is, I try not to overstate my ML knowledge, but I explain why I can nonetheless feel confident that this is a real problem. In terms of whether people can make a difference, I try to explain how a wider range of people can contribute than most people think. In terms of what needs to be done, I try to list a bunch of ideas in the hope that it gives the impression that there are lots of projects available, but I don’t think I’m doing this very well.
I try to keep up with the AI safety and movement building content on the EA Forum, but there’s so much content that I’m struggling. I feel I should probably focus less on keeping up and more on reading the most important old content, but I find myself really resistant to that idea.
Anyway, I just thought this would be a valuable exercise and I thought I’d share it in case other people find this kind of exercise valuable. I guess the most important thing is to be really honest about what you’re doing and why; and then maybe it’ll become more obvious what you should be doing differently?