Regarding AI safety, of course.
The detailed version of this question follows: "Is AI safety sufficiently talent-constrained, and on an imminent enough timeline,
that I, and any friends I can convince, ordinary non-Ivy-League, non-top-tier computer scientists, mathematicians, and programmers (including students),
should drop everything (including things you are personally uncomfortable telling people to drop out of)
and apply for every grant out there?"
I would also accept "If you are not very talented, go do something else and let us handle it", or "If you're unsure of your talent, here is a link to an online test or open application that will give you good evidence one way or the other".
My answer to the detailed version of the question is "unsure... probably no?": I would be extremely wary of reputation effects and of how AI safety is perceived as a field. As a result, getting as many people as we can to work on this might not be the right approach.
First, getting AI to be safe is not only a technical problem: apart from figuring out how to make AI safe, we also need to get whoever builds it to adopt our solution. Second, classical academia might prove important for safety efforts, and if we are being realistic, we need to admit that the prestige associated with a field affects which people get involved with it; if AI safety gains a reputation as a field anyone can wander into, that could deter the established researchers and decision-makers we most need. Thus, there will be a point where the costs of bringing more people in on the problem outweigh the benefits.
Note that I am not saying anything like "anybody without an Ivy League degree should just forget about AI safety". I am only saying that there are both costs and benefits associated with working on this, and everybody should weigh them before making major decisions (outreach in particular).