People interested in shifting their careers toward doing the most good in a given field sometimes take big risks with their reputation, financial stability, and likelihood of impact. Worse yet, many people avoid taking those risks altogether. This is understandable, and at the individual level it makes a lot of sense, but for the community as a whole it may restrict our impact.
When we avoid risks, it is harder to start new initiatives and to go against the consensus. It is harder to throw away years of experience to do something that seems more important. It is harder to work on high-risk, high-reward projects.
I think there are several things we can do, many of which are already being done.
We can improve the reputation of EA and specific cause areas, say by building an academic discipline or slowly gaining more public support.
We can strengthen social support and build good norms around failure.
We can create financial mechanisms designed to give individuals in EA financial stability: say, insurance and pension funds explicitly targeting people who are trying to do the most good by tackling risky projects and making risky career decisions.
We can put more effort into improving our network: taking more time to get to know people in the community, mentoring them, managing more projects, and connecting people.
We can put more effort into vetting, which directly brings people into our sphere of trust.
All of the suggestions here are about reducing risks for other people. This requires us to take more risks ourselves: to trust others and to spend time and money helping others with their goals. It requires building better institutions and continually improving our community.
We are already doing great work on this, and I appreciate the efforts of many people in the community. I think the major EA organisations in particular seem to have a good capacity for risk-taking. I write this mostly as a general reminder for us all.
Do you think that there is any institution or norm severely lacking in the EA community?
At the moment, I don't think there are obvious mechanisms to support independent, early-stage, high-risk projects at the point where they aren't yet well defined, or, more generally, to support independent projects that aren't intended to lead to careers.
As an example that addresses both points, one of the highest-impact things I'm currently considering working on is a research project that could either fail in ~3 months or, if successful, occupy several years of work to develop into a viable intervention (with several more failure points...