What disagreements do the LTFF fund managers tend to have with each other about what's worth funding?
What projects to reduce existential risk that don't already exist would you be excited to see someone work on (provided they were capable enough)?
I've also heard people doing SERI MATS, for example, explicitly talk or joke about this: that they'd have to go work in AI capabilities if they don't get AI safety jobs.
When people do this, do you think they mostly want someone with more skills or knowledge, or someone with better, more prestigious credentials?
Yeah, same. I know of recent university graduates interested in AI safety who are applying for jobs in AI capabilities alongside AI safety jobs.
It makes me think that what matters more is changing the broader environment to care more about AI existential risk (via better arguments, more safety orgs focused on useful research/policy directions, better resources for existing ML engineers who want to learn about it, etc.) rather than convincing individual students one at a time to shift toward caring about it.
I would be surprised if the true ratio is as low as 1:20 or even 1:10. I wish there were more data on this, though it seems difficult to collect: at least for university groups, most of the impact (on both capabilities and safety) will occur a few or more years after students start engaging with the group.
I also think it depends a lot on the best opportunities available to them: specifically, what opportunities to work on AI safety, versus AI capabilities, will exist in the near future for people with their aptitudes.
I would love to see people debate the question of how difficult AI alignment really is. This has been argued before, in the MIRI conversations and elsewhere, but more content would still help people like me who are uncertain about the question. Also, at the EAG events I went to, it felt like most of the content came from people with more optimistic views on alignment, so it would be great to see the other side represented.
It would be cool to have someone with startup experience who also knows a decent amount about EA, since many insights from running a successful startup might apply to people working ambitiously on neglected and important problems. Maybe Patrick Collison?