My intuition is that there are heaps of very talented people interested in AI Safety but only 1/100th as many jobs.
A second intuition I have is that the rejected talent won't spill over much into other cause areas (biorisk, animal welfare, whatever) and may even spill over into capabilities!
Let's also assume more companies working towards AI Safety is a good thing (I'm not super interested in debating this point).
How do we get more AI Safety companies off the ground?
Hey Yanni!
Quick response from CE here as we have some insight on this:
a) CE is not funding-limited, and AI is not an area we expect to work on in the future, regardless of available funding in the space (we have been offered funding for this many times in the past). You can see a little bit about our cause prioritization here and here.
b) There are tons of organizations that aim or have aimed to do this, including Rethink Priorities, Impact Academy, Center for Effective Altruism and the Longtermist Entrepreneurship Project.
c) An interesting question might be why there has not yet been huge output from other incubators, given the substantial funding and unused talent in the space. I think the two best responses on this are the post-mortem from the Longtermist Entrepreneurship Project and a post we wrote about the tips and challenges of starting incubators.
I really want to get to the bottom of this, because it seems like the dominant consideration here (i.e. the crux).
Not a top cause area ≠ Not important
At the risk of being too direct: do you, as an individual, believe AI safety is an important cause area for EAs to be working on?