
People who shift their careers toward doing the most good in a given field sometimes take big risks with their reputation, financial stability, and likelihood of impact. Worse yet, many people avoid taking those risks altogether. This is understandable, and at the individual level it makes a lot of sense, but for the community as a whole it may restrict our impact.

When we avoid risks, it is harder to start new initiatives and go against the consensus. It is harder to throw away years of experience to do something that seems more important. It is harder to work on high-risk, high-reward projects.

I think there are several things we can do, many of which are already being done.

We can improve the reputation of EA and specific cause areas, say by building an academic discipline or slowly gaining more public support.

We can strengthen social support and build good norms around failure.

We can create financial mechanisms designed for the financial stability of individuals in EA: say, insurance and pension funds explicitly targeting people who are trying to do the most good by tackling risky projects and making risky career decisions.

We can put more effort into improving our network: taking more time to get to know people in the community, mentor them, manage more projects, and connect people.

We can put more effort into vetting, which directly brings people into our sphere of trust.

All of the suggestions here are about reducing risks for other people.

This requires us to take more risks ourselves: to trust others and to spend time and money helping others with their goals. It requires building better institutions and continually improving our community.

We are already doing great work on this, and I appreciate the work done by many people in the community. The major EA organisations, specifically, seem to have a good capacity for risk-taking. I write this mostly as a general reminder for us all.

Comments (23)

I strongly agree. Personally (and I know a couple more EAs with the same dilemma), I'd be thrilled to apply to an EA org or even start a new large-scale project, but this is too risky for my financial security. I'm forced to spend a couple of years building the credibility of my CV outside EA, since outsiders are not familiar with the professional level of work in EA (I've mostly encountered people associating it with the low expertise common among nonprofits).

Maybe there are ways to confront this directly, such as offering training courses from top universities or companies (Google, Microsoft, Facebook, etc.) to people who work in the EA sphere, in order to improve the credibility (and level) of their professional skills.

Do you agree that we are bottlenecked by capacity for risk-taking?

What kinds of risky choices do EAs avoid? Are you thinking of any specific examples, perhaps particular cause areas or actions?

The main examples for me are:

  • Michelle Hutchinson talking about risk in careers on the 80K podcast.
  • An intuition that this is one cause of the overall lack of useful volunteer work and of there generally not being enough initiatives.
  • People have an understandable preference to start their careers working on more commonsensical problems, and direct work on many EA causes poses a big risk to one's career capital.
  • The case for impact purchases.

And frankly, I've just been reading some more about DARPA and how it systematically de-risks funders, managers, and researchers. That seems amazingly important for most people anywhere, and we are in a good position to collaborate on reducing risk for people seriously doing good.

What academic disciplines are being developed to make the career switch less risky? I'm also interested in how insurance/pension funds could even begin to be developed.

I think this could be set up by launching a 501(c)(3) as a donor-advised fund and fiscal sponsor, and then setting up funds inside the entity that support specific purposes. For example, a fund that pays a UBI to people working on high-impact entrepreneurship.

I welcome anyone to get in touch with me if they're interested in collaborating on and/or funding such a proposal (estimated setup cost of the entity and necessary legal work: $15,000–$25,000).

Edit: Was inspired to write an EA Forum post on this!

I was thinking of the Global Priorities Institute as the clearest example of trying to normalize longtermism and global priorities research as an academic discipline. AI safety is also getting more mainstream over time, partly through academic work in the field.

EA tackles problems that are more neglected. Some of this work is still somewhat high-status (evidence-based development is the main example that comes to mind). So perhaps that kind of risk is almost unavoidable and can only be mitigated for the next generation (and by then, the problems might be less neglected).

I'd also like to know how someone could go about that kind of insurance/pension fund :)

I would not contribute to insurance for the average EA. I would prefer for the average EA to save a bit of money ("runway") in advance of making a risky decision.

In particular, I have seen quite a few EAs who have decided not to take on any paid work for 1+ years after university, and I don't want to financially support them unless they're doing something genuinely impactful, in which case I'd give them a grant.

In summary, I'd need a compelling reason to donate my money to middle class people in high income countries instead of literally saving lives, and I haven't seen one yet.

Asking people who specialise in working on early-stage and risky projects to take care of themselves with runway may be a bit unreasonable. Even if a truly risky project (in the low-probability-of-a-high-return sense) is well executed, we should still expect it to have an a priori success rate of 1 in 10 or lower. Assuming that it takes six months or so to test the feasibility of a project, people would need to save several years' worth of runway if they wanted to be financially comfortable while continuing to pursue projects until one worked out; a rough version of this arithmetic is sketched after the quote below. (Of course, lots of failed projects may be an indication that they're not executing well, but let's be charitable and assume they are.) This would probably limit serious self-supported EA entrepreneurship to an activity one takes on at a mid-career or later stage (also noted by OPP in relation to founding nonprofits):

Starting a new company is generally associated with high (financial) risk and high potential reward. But without a solid source of funding, starting a nonprofit means taking high financial risk without high potential reward. Furthermore, some nonprofits (like some for-profits) are best suited to be started by people relatively late in their careers; the difference is that late-career people in the for-profit sector seem more likely to have built up significant savings that they can use as a cushion. This is another reason that funder interest can be the key factor in what nonprofits get started.
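To make "several years' worth of runway" concrete, here is a minimal sketch of that arithmetic in Python. It assumes the numbers from my comment above (a 1-in-10 per-attempt success rate and six months per attempt), plus an illustrative 90% confidence threshold that I'm choosing arbitrarily:

```python
import math

# Illustrative assumptions from the comment above: each attempt takes
# ~6 months and succeeds independently with probability 0.1.
p_success = 0.1          # assumed per-project success rate
months_per_attempt = 6   # assumed time to test a project's feasibility

# Attempts until the first success follow a geometric distribution,
# so the expected number of attempts is 1 / p.
expected_attempts = 1 / p_success
expected_months = expected_attempts * months_per_attempt
print(f"Expected runway: {expected_months:.0f} months "
      f"(~{expected_months / 12:.0f} years)")

# To be reasonably confident of lasting until a success (90% here, an
# arbitrary illustrative threshold), budget for the 90th percentile of
# the geometric distribution rather than its mean:
# smallest k with 1 - (1 - p)^k >= 0.9.
attempts_p90 = math.ceil(math.log(1 - 0.9) / math.log(1 - p_success))
print(f"90th-percentile runway: "
      f"{attempts_p90 * months_per_attempt / 12:.1f} years")
```

Under these assumptions the expected wait for a first success is about five years of runway, and budgeting for bad luck (the 90th percentile) pushes it past a decade, which is why self-funded runway seems unreasonable here.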

Yes, to be clear, I'm arguing that we should have a robust funding ecosystem. I am opposed to "UBI for EAs".

Sure, I agree that unvetted UBI for all EAs probably would not be a good use of resources. But I also think there are cases where a UBI-like scheme that funded people to do self-directed work on high-risk projects could be a good alternative to providing grants to fund projects, particularly at the early stage.

Yep, I'd be on board with providing specific people with funding to work on whatever projects they find most valuable. But I'd only be likely to provide that to ~10 people and see what happens, as opposed to what I felt this article was suggesting.

Agree. The interesting question for me is where we expect the cutoff to be: what (personal, not project-dependent) conditions make it highly effective to give income to an individual.

This framing makes me notice that it would probably be far from realistic right now, as small initiatives in EA are funding-constrained. But this still might be misleading.

I think I'd go further. If an EA organisation or some other EAs aren't willing to support you in running your project, then should you be doing it as your main job?

As a side project, sure, but no funding means no one else is convinced of your impact. This seems like a good reason to choose a different path.

I agree. And I think "willing to support" should normally include enough money for a bit of savings, both for an emergency fund and your pension.

I think what you are both saying makes total sense, and is probably correct. With that said, it might be the case that:

  1. it is much easier to vet people than projects;
  2. vetting is expensive;
  3. we expect some outliers to do a lot of good;
  4. financial security is critical for success;
  5. it is technically very hard to set up many institutions or to cover many EAs as employees.

Do you think that there is any institution or norm severely lacking in the EA community?

Jade Leung's EAGx talk 'Fostering longtermist entrepreneurship' touched on some relevant ideas related to individual capacity for risk-taking. (This isn't in the public CEA playlist, but a recording is still available via the Grip agenda.)

Definitely! The notes and slides can be found here.

"Here" just links back to this post; I think you meant to link somewhere else?

Fixed, thanks! :)

At the moment I think there aren't obvious mechanisms to support independent early-stage and high-risk projects at the point where they aren't well defined and, more generally, to support independent projects that aren't intended to lead to careers.

As an example that addresses both points, one of the highest-impact things I'm considering working on currently is a research project that could either fail in ~3 months or, if successful, occupy several years of work to develop into a viable intervention (with several more failure points along the way).

With regards to point 1: At the moment, my only option seems to be applying for seed funding, doing some work, and, if that is successful, applying to another funder for longer-term project funding (probably on several occasions). Each funding application is both uncertain and time-consuming, and knowing this somewhat disincentivises me from even starting (although I have recently applied for seed-stage funding). Having a funding format that started at project inception and could be renewed several times would be really helpful. I don't think something like this currently exists for EA projects.

With regards to point 2: As a researcher, I would view my involvement with the project as winding down if/when it led to a viable intervention. While I could stay involved as a technical advisor, I doubt I'd contribute much after the technology is demonstrated, nor do I imagine particularly wanting to be involved in later-stage activities such as manufacturing and distribution. This essentially means that the highest-impact thing I can think of working on would probably need my involvement for, at most, a decade. If it did work out, then I'd at least have some credibility to get support for doing research in another area, but taking a gamble on starting something that won't even need your involvement after a few years hardly seems like sound career advice to give (although from the inside view, it is quite tempting to ignore that argument against doing the project).

I think that lack of support in these areas is most relevant to independent researchers or small research teams; researchers at larger organisations probably have more institutional support when developing or moving between projects, while applied work, such as distributing an intervention, should be somewhat easier to plan out.
