During the last 2.5 or 3 years, I have been trying to learn about and gain experience in AI Safety, so that when I finish my PhD in Physics I might be able to contribute to the AI Safety community's efforts. During these years I have noticed that:
- There seems to be a large pool of potential junior researchers, and the community has several programs in place for them, such as the AI Safety Camp or the AI Safety Research Program.
- Funding is growing, but it is still largely concentrated in a handful of places such as CHAI, FHI, CSER, institutes not affiliated with universities (eg the Center for Long Term Risks), and a few companies (DeepMind, OpenAI, Anthropic). But it seems to me that there are still great places out there where research could happen and currently is not.
So, given that the scarcity of senior researchers seems to be a bottleneck: what can the community do to get them more interested in this topic? I read somewhere that there are two ways to get people involved:
- Telling them this is an important problem.
- Telling them this is an interesting problem.
I think it may be worth trying out a combination of both, eg: "hey, this is an interesting problem that could be important for making systems reliable". I really don't think one needs to convince them of longtermism as a prerequisite.
In any case, I wanted to use this question to generate concrete actions that people such as the EA Long-Term Future Fund managers could fund to address this bottleneck. The only similar example I have seen is the "Claudia Shi ($5,000): Organizing a "Human-Aligned AI" event at NeurIPS." grant recorded at https://funds.effectivealtruism.org/funds/payouts/september-2020-long-term-future-fund-grants.
There might also be other ways, but I don't know academic dynamics well enough to say. In any case, I am aware that getting technical AI Safety work published at these conferences does not seem to be an issue, so I believe the bottleneck is getting researchers genuinely interested in the topic.
From your comment, I just learned that Distill.pub is shutting down, and this is sad.
The site was beautiful. The attention to detail, to the reader, and to presentation was amazing.
Their mission seems relevant to AI safety and risk.
Relevant to the main post and the comment above, the issues with Distill.pub seem not to be structural, institutional, academic, or social, but operational: a matter of resources and burnout.
This seems entirely fixable with money, maybe even a reasonable amount compared to other major interventions in the AI/longtermist space?
To see why, consider the explanation given on the site:
Fifty hours of a senior editor's time to work on a single draft seems pretty wild.
Using senior staff time like this doesn't seem anywhere close to normal or workable given the resources and incentives in the publication market.
But this could be fixed by hiring junior- or mid-level ML practitioners and visualization designers/frontend engineers. The salaries would be higher than at most non-profits, but there seems to be a thick market for these skills.
What doesn’t seem fungible is some of the prestige and vision of the people associated with the project.
As just one example, Mike Bostock, the creator of D3.js, is a "field leader" in visualization by any standard.
Maybe this can interest someone way smarter than me in funding, rebuilding, or restoring Distill.pub as an AI safety intervention.