The EA movement has a lot of money. Why is it so hard to launch good projects?

We can shed light on this by comparing EA grantmakers with for-profit firms. Both face a list of potential investment projects. Each offers an expected rate of return: in dollars for for-profit firms, and in altruistic utilons for EA funders. Firms and EA funders will invest their capital in projects offering returns superior to the hurdle rate, or cost of capital.

The hurdle rate for a for-profit firm is the expected rate of return on an investment in the stock market. If the best investment project available offers a 9% rate of return, but the stock market offers a 10% rate of return, then the firm will not invest in the project. Instead, it should return the capital to shareholders as a dividend. Otherwise, it will underperform the market, and shareholders will sell the stock.
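In code, the rule reduces to a one-line comparison. A minimal sketch, using the 9% and 10% figures from the example above:

```python
# A minimal sketch of the hurdle-rate rule; the 9% and 10% figures
# are the ones from the example in the text.

def capital_decision(project_return: float, hurdle_rate: float) -> str:
    # Invest only if the project beats the opportunity cost of capital.
    if project_return > hurdle_rate:
        return "invest in the project"
    return "return the capital to shareholders as a dividend"

print(capital_decision(0.09, 0.10))  # -> return the capital to shareholders as a dividend
```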

For EA funders, the investment decision is a little trickier.

First, they are constrained by their mission. For example, the mission of EA Infrastructure Fund reads, in part:

While the other three Funds support direct work on various causes, this Fund supports work that could multiply the impact of direct work, including projects that provide intellectual infrastructure for the effective altruism community, run events, disseminate information, or fundraise for effective charities.

Money is fungible. In theory, so are utilons. But if you donate to EA Infrastructure Fund, they are not going to use it to fund direct work, and they are not going to return it to you as a dividend if they can't find a use for it.

So they have to find mission-aligned projects. They could simply give out grants in descending order, from highest-value/most-mission-aligned to least, until the money runs out. But this might mean throwing away cash on risky, low-value projects that aren't aligned with their mission. Past a certain point, that seems unwise.

So they need to set a hurdle rate, similar to the one that for-profit firms must consider: a minimum threshold of value, security (i.e., non-riskiness), and mission alignment. They need to set the bar and hold it firmly in place.
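As a minimal sketch of this setup (all fields, thresholds, and numbers are hypothetical placeholders, not real grantmaking criteria), the bar acts as a hard filter applied before any ranking-based allocation:

```python
# A minimal sketch of grantmaking with a bar. All fields, thresholds,
# and numbers are hypothetical placeholders, not real criteria.

from dataclasses import dataclass

@dataclass
class Project:
    name: str
    value: float        # expected altruistic return per dollar (utilons)
    risk: float         # 0.0 (safe) to 1.0 (very risky)
    mission_fit: float  # 0.0 (off-mission) to 1.0 (squarely on-mission)
    cost: float         # dollars requested

def clears_bar(p: Project, min_value=1.0, max_risk=0.7, min_fit=0.5) -> bool:
    """The bar is a hard filter: a project must clear every threshold."""
    return p.value >= min_value and p.risk <= max_risk and p.mission_fit >= min_fit

def allocate(projects: list[Project], budget: float):
    funded, remaining = [], budget
    # Rank by value, but only fund projects that clear the bar...
    for p in sorted(projects, key=lambda p: p.value, reverse=True):
        if clears_bar(p) and p.cost <= remaining:
            funded.append(p)
            remaining -= p.cost
    # ...whatever is left is saved for future opportunities,
    # not spent on below-bar projects.
    return funded, remaining
```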

Determining where to set the bar is another challenge. If they set it too low, they'll throw away money. If they set it too high, they'll have money sitting around with nothing to do. This isn't necessarily bad, though. They can save it for the future, in hopes that more high-quality projects will appear later.

It might seem like they could just use that extra money to invest in developing more high-quality projects. Perhaps they could create a school or workshop to help low-quality projects turn into high-quality projects.

However, that in itself is a project. If they had a great idea for how to go about it, a strong team committed to the idea, and access to whatever outside resources they needed to make it a success, then it might be a high-quality project and surpass the investment bar. If not, though, they would reject that idea along with the rest. The money would sit around unspent.

What makes a project high-quality isn't just the idea itself. "A project to generate higher-quality EA projects" is the barest wisp of an idea. The concept needs to be much more specific, with a fairly detailed plan and a well-organized, committed team of demonstrated excellence and clear ability to succeed. That's not something you typically put together with one blog post.

So EA funders shouldn't lower the bar just because they can't find adequate outlets for their money right now. That would amount to never having had a bar in the first place. There's also no obvious way of finding more high-quality projects. Finally, there's no guarantee that the influx of wealth into the movement will last. Saving that money for the right opportunity, even if it doesn't exist yet, may very well be the right move.

In fact, the difficulty of finding investment opportunities that clear the bar is probably a consequence of EA targeting neglected areas. Neglected causes, almost by definition, attract fewer competent teams and sharp ideas. It should be hard to find projects that clear the hurdle rate.

Newer EAs with projects that get rejected, even as funders have more money than they can spend, might feel particularly disappointed by this outcome. The problem, it may seem, wasn't that there wasn't enough money to go 'round. It was that they weren't deemed worthy of a vote of confidence even when there was plenty.

Having been rejected for an EA grant myself, I say in solidarity: this isn't a referendum on your abilities, idea, or potential. It may be that you and your idea really are capable of making a significant impact. But because funders don't know you, they might regard investing in you and your project as risky. It takes time to prove yourself and be seen as a more certain bet. This has little to do with your innate potential, and a lot to do with the fact that everyone, EA funders included, lives in a big confusing world and is struggling to figure out what to do.

On the bright side, it suggests an opportunity.

If the challenge for most new EAs is skilling up, demonstrating their abilities and gaining experience, then you don't have to do that through EA. In fact, if you can go get somebody else to invest in your skilling-up process, then in a way, you're diverting money and mentorship into EA. The movement doesn't have to invest in training neophytes into the highly capable people it needs. Instead, the neophytes can go make a difference in the world beyond EA, then return to the movement ready to make a big EA impact with the skills and resources they've gained along the way. EA is a career endpoint.

Value drift is a real concern with this strategy. So is lock-in. But I think that it's better to risk it than to bang your head against the wall indefinitely, when the kinds of bottlenecks EA funders say they're facing (see comments) include:

  • "Mentoring/pro-active outreach" (Denise Melchin of Meta Fund)
  • "An inchoate combination of something like "a person has a vague idea they need help sharpening, they need some advice about structuring the project, they need help finding a team, the case is hard to understand and think about."" (Claire Zabel of Open Philanthropy)
  • "Domain experts, and sometimes macrostrategy experts... people with final authority..." (Jan Kulveit of FHI)

Don't try to wake up and save the world. Don't be bycatch. Take 15 years and become a domain expert. Take a career and become a macrostrategy expert. Mentor. Run small and non-EA projects. Circle back to EA periodically with your newfound skills and see what a difference you can make then. There is absolutely no way we can have a longtermist movement if we can't be longtermist about our own lives and careers. But if we can, then we can.

Note: Vaidehi Agarwalla and Arjun Khandelwal have done some great work in reframing the EA journey as a nonlinear, individual one. I encourage you to check it out!

Comments

Slightly abbreviated reaction:

  • I agree that some people are too focused on explicitly EA work; that some people should spend more time on "non-EA" projects; and that one reason for this is that it can help build knowledge, skills, credentials, etc., without costing as much EA mentor time, EA money, etc.
    • And I think it's useful to repeatedly highlight these facts, which your post contributes to
  • But I also think for some people it does make sense to do explicitly EA work (very) early in their careers
  • Meanwhile, I also think that some people should just indefinitely continue to not work at EA orgs, not do explicitly EA projects, etc., because of the high direct impact some other work has (e.g., certain roles in the US government)

I'm guessing you'd agree with the second and third of those points (is that correct?). But it seems to me that the post implied the opposite of those points. (Though maybe that was just my reading.)

Longer reaction:

It seems to me that this post is making roughly the following claims:

  1. It's true that EA-aligned funders have a lot of money, that many people want to do "EA work", and that, despite this, many of them are not getting funded
  2. But this doesn't mean the EA-aligned funders should lower their bar for providing funding
  3. It also isn't a good idea for those people to just keep pursuing "EA work"
  4. Instead, "EA work" should be seen as something you do later, and until then you should focus on gaining knowledge, skills, networks, credentials, etc. outside EA
  5. (Implied: And this is almost entirely for the instrumental value it provides via helping you get and do well in "EA work" later)

Is this roughly what you were trying to get across?

I think I strongly agree with some nearby claims, but I find several of those claims problematic. Here are the nearby claims I'd agree with.

  1. It's true that EA-aligned funders have a lot of money, that many people want to do "EA work", and that, despite this, many of them are not getting funded
    • This is the same claim as above - I agree here
  2. But this doesn't necessarily mean the EA-aligned funders should lower their bar for providing funding
    • I'm currently agnostic about whether various EA funders should lower, raise, or maintain their current bars for funding
  3. It also isn't a good idea for all those people to spend a lot of time pursuing work at explicitly EA orgs or on explicitly EA projects. (And in any case, probably all of those people should spend at least some time pursuing other types of work.)
  4. Instead, many of those people should focus on pursuing work at other types of orgs or other types of projects
  5. In many cases, a key reason for (4) will be for the instrumental value of getting knowledge, skills, credentials, etc., which can later help the person get more explicitly EA work. It's also nice that this costs fewer EA resources, e.g. mentorship time from EAs.
  6. But also in many cases, a key reason for (4) will be that the non-EA work is very EA-aligned, in the sense that it itself has a substantial direct impact.

On 3: I think it can be a good idea for many people to spend some effort pursuing explicitly EA work, and for some people to focus primarily on pursuing such work, even early in their careers.

E.g., this is what I did, and I now have strong reason to believe it was a good move for me. (My evidence includes the fact I had more success in EA job applications than other job applications, and the views expressed by various people who gave feedback on my career plan. That said, I did still apply for quite a few non-EA roles, especially early on when I didn't yet have evidence pushing in favour of me doing explicitly EA roles. And I had already worked for 2 years before starting in an "EA job".)

It also seems worth stating explicitly that I think that that's basically a matter of comparative advantage, rather than how generically talented a person is. E.g., I think I'd be less good at working in government or in academia than a fair number of other people in the EA community would be, and this is part of what pushes in favour of me focusing on explicitly EA roles.

Hi Michael, thanks for your responses! I'm mainly addressing the metaphorical runner on the right in the photograph at the start of the post.

I am also agnostic about where the bar should be. But having a bar means you have to hold it in place. You don't move the bar just because you couldn't find a place to spend all your money.

For me, EA has been an activating and liberating force. It gives me a sense of direction, motivation to continue, and practical advice. I've run EA research and community development projects with Vaidehi Agarwalla, and published my own writing here and on LessWrong. These projects, plus my pursuit of a scientific research career, have been satisfying outlets for my altruistic drive.

Not everything has been successful - but I learned a lot along the way, and feel optimistic about the future.

Yet I see other people who seem very concerned and often disappointed at the difficulty they have in their own relationship with EA. Particularly, getting EA jobs and grants, or dealing with the feeling of "I want to save the world, but I don't know how!" I'm extremely optimistic that EA is making, and will continue to make, an outsized positive impact on the world. What I'm more afraid of is that we'll generate what I call "bycatch."

I guess in terms of this metaphor, part of what I'm saying is that there are also some people who aren't "in the race" but really would do great if they joined it (maybe after a few false starts), and other people who are in the race but would be better off switching to tennis instead (and that's great too!). 

And I'm a little worried about saying something like "Hey, the race is super hard! But don't feel bad, you can go train up somewhere else for a while, and then come back!" 

Because some people could do great in the race already! (Even if several applications don't work out or whatever; there's a lot of variation between what different roles need, and a lot of random chance, etc.) And some of these people are erring in the direction of self-selecting out, feeling imposter syndrome, getting 4 job rejections and then thinking "well, that proves it, I'm not right for these roles", etc. 

Meanwhile, other people shouldn't "train up and come back", but rather go do great things elsewhere long-term! (Not necessarily leaving the community, but just not working at an explicitly EA org or with funding from EA funders.) And some of these people are erring in the direction of having their heart set on getting into an "EA job" eventually, even if they have to train up elsewhere first.

---

I'd also be worried about messaging like "Everyone needs to get in this particular race right now! We have lots of money and lots to do and y'all need to come over here!" And it definitely seems good to push against that. But I think we can try to find a middle ground that acknowledges there are many different paths, and different ones will be better for different people at different times, and that's ok. (E.g., I think this post by Rob Wiblin does that nicely.)

Figuring out how to give the right advice to the right person is a hard challenge. That's why I framed skilling up outside EA as being a good alternative to "banging your head against the wall indefinitely." I think the link I added to the bottom of this post addresses the "many paths" component.

The main goal of my post, though, is to talk about why there's a bar (hurdle rate) in the first place. And, if readers are persuaded of its necessity, to suggest what to do if you've become convinced that you can't surpass it at this stage in your journey.

It would be helpful to find a test to distinguish EAs who should keep trying from those who should exit, skill up, and return later. Probably one-on-one mentorship, coupled with data on what sorts of things EA orgs look for in an applicant, and the distribution of applicant quality, would be the way to devise such a test.

A team capable of executing a high-quality project to create such a test would (if I were an EA fund) definitely be worthy of a grant!

Also, here's a somewhat relevant intervention idea that seems interesting to me, copied from an upcoming post in my sequence on improving the EA-aligned research pipeline (so this passage focuses on research roles, but you can easily extrapolate the ideas):

Improving the vetting of (potential) researchers, and/or better “sharing” that vetting

For example:

  • Improving selection processes at EA-aligned research organisations
  • Increasing the number and usefulness of referrals of candidates from one selection process (e.g., for a job or a grant) to another selection process.
    • This already happens, but could perhaps be improved by:
      • Increasing how often it happens
      • Increasing how well-targeted the referrals are
      • Increasing the amount of information provided to the second selection process?
      • Increasing how much of the second selection process the candidate can “skip”?
  • Creating something like a "Triplebyte for EA researchers", which could scalably evaluate aspiring/junior researchers, identify talented/promising ones, and then recommend them to hirers/grantmakers^[This idea was suggested as a possibility by a commenter on a draft of this post.]
    • This could resolve most of the vetting constraints if it could operate efficiently and was trusted by the relevant hirers/grantmakers

Triplebyte's value proposition to its clients (the companies who pay for its services) is an improved technical interview process. They claim to offer tests that achieve three forms of value:

  1. Less biased
  2. More predictive of success-linked technical prowess
  3. Convenient (since companies don't have to run the technical interviews themselves)

If there's room for an "EA Triplebyte," that would suggest that EA orgs have at least one of those three problems.

So it seems like your first step would be to look in-depth at the ways EA orgs assess technical research skills.

Are they looking at the same sorts of skills? Are their tests any good? Are the tests time-consuming and burdensome for EA orgs? Alternatively, do many EA orgs pass up needed hires because they don't have the short-term capacity to evaluate them?

Then you'd need to consider what alternative tests would be a better measurement of technical research prowess, and how to show that they are more predictive of success than present technical interviews.

It would also be important to determine the scale of the problem. Eyeballing this list, there are maybe 75 EA-related organizations. How many hires do they make per month? How often does their search fail for lack of qualified candidates? How many hours do they spend on technical interviews each time? Will you be testing not for EA-specific but for general research capacity (massively broadening your market, but also increasing the challenge of addressing all their needs)?

Finally, you'd need to roll all that up into a convenient, trustworthy, and reliable package that clients are excited to use instead of their current approach.

This seems like a massive amount of work, demanding a strong team, adequate funding and prior interest by EA orgs, and long-term commitment. It also sounds like it might be really valuable if done well.

Thanks for these thoughts - I think I'll add a link to your comment from that section of my post. 

I think your analysis basically sounds correct to me. I would also be quite surprised if this came into existence (and was actually used by multiple orgs) in the next 10 years, and I don't think it's likely to be the highest priority intervention for improving the EA-aligned research pipeline, though I'd be keen to at least see people flesh out and explore the idea a bit more. 

FWIW, I'm guessing that this commenter on my doc meant something a little bit more distant from Triplebyte specifically than what your comment suggests - in particular, I don't think the idea would be just to conduct technical interviews, but also other parts of the selection process. At least, that's how I interpreted the comment, and seems better to me, given that I think it's relatively rare for EA orgs to actually have technical interviews in their selection processes. (There may often be a few questions like that, but without it being the main focus for the interview. Though I also might be misinterpreting what a technical interview is - I haven't worked in areas like engineering or IT.)

My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.

It just seems to me that Triplebyte is powered by a mature industry that's had decades of time and massive amounts of money invested into articulating its own needs and interests. Whereas I don't think EA is old or big or wealthy enough to have a sharp sense of exactly what the stable needs are.

For a sense of scale, there are almost 4 million programmers in the USA. Triplebyte launched just 5 years ago. It took millions of people working as programmers to generate adequate demand and capacity for that service to be successful.

All in all, my guess is that what we're missing is charismatic founder-types. The kind of people who can take one of the problems on our long lists of cause areas, turn it into a real plan, pull together funding and a team (of underutilized people), and make it go.

Figuring out how to teach that skill, or replace it with some other founding mechanism, would of course be great. It's necessary. Otherwise, we're kind of just cannibalizing one highly-capable project to create another. Which is pretty much what we do when we try to attract strong outside talent and "convert" them to EA.

Part of the reason I haven't spent more time trying to found something right off the bat is that I thought EA could benefit more if I developed a skillset in technology. But another reason is that I just don't have the slack. I think to found something, you need significant savings and a clear sense of what to do if it fails, such that you can afford to take years of your life, potentially, without a real income.

Most neophytes don't have that kind of slack. That's why I especially lean on the side of "if it hurts, don't do it."

I don't have any negativity toward the encouragement to try things and be audacious. At the same time, there's a massive amount of hype and exploitative stuff in the entrepreneurship world. This "Think of the guy who wrote WinZip! He made millions of dollars, and you can do it too!" line that business gurus use to suck people into their self-help sites and YouTube channels and so on.

The EA movement had some low-hanging fruit to pick early on. It's obviously a huge win for us to have great resources like 80k, or significant organizations like OpenPhil. Some of these were founded by world-class experts (Peter Singer) and billionaires, but some (80k) were founded by some young audacious people not too far out of grad school. But those needs, it seems to me, are filled. The world's pretty rich. It's easier to address a funding shortfall or an information shortfall than to get concrete useful direct work done.

Likewise in the business world, it's easier to find money for a project and outline the general principles of how to run a good business, than to actually develop and successfully market a valuable new product. There's plenty of money out there, and not a ton of obvious choices to spend it on. Silicon Valley's looking for unicorns. We're looking for unicorns too. There aren't many unicorns.

I think that the "EA establishment's" responsibility to neophytes is to tell them frankly that there's a very high bar, it's there for a reason, and for your own sake, don't hurt yourself over and over by failing to clear it. Go make yourself big and strong somewhere else, then come back here and show us what you can do. Tell people it's hard, and invite them back when they're ready for that kind of challenge.

I think to found something, you need significant savings and a clear sense of what to do if it fails, such that you can afford to take years of your life, potentially, without a real income.

I don't think this is true, at least not as a general rule. I think you can do both (have a safe career and pursue something entrepreneurial) if you make small, focused bets to begin with and build from there. Related discussion here.

I agree; I should have added "or a safe career/fallback option" to that.

My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.

I used to work as an interviewer for TripleByte. Most companies using TripleByte put TripleByte-certified candidates through their standard technical onsite. From what I was able to gather, the value prop for companies working with TripleByte is mostly about 1. expanding their sourcing pipeline to include more quality candidates and 2. cutting down on the amount of time their engineers spend administering screens to candidates who aren't very good.

Some of your comments make it sound like a TB-like service for EA has to be a lot better than what EA orgs are currently doing to screen candidates. Personally, I suspect there's a lot of labor-saving value to capture if it is merely as good as (or even a bit worse than) current screens. It might also help organizations consider a broader range of people.

Thanks for that context, John. Given that value prop, companies might use a TB-like service under two constraints:

  1. They are bottlenecked by having too few applicants. In this case, they have excess interviewing capacity, or more jobs than applicants. They hope that by investigating more applicants through TB, they can find someone outstanding.
  2. Their internal headhunting process has an inferior quality distribution relative to the candidates they get through TB. In this case, they believe that TB can provide them with a better class of applicants than their own job search mechanisms can identify. In effect, they are outsourcing their headhunting for a particular job category.

Given that EA orgs seem primarily to lack specific forms of domain expertise, as well as well-defined project ideas/teams, what would an EA Triplebyte have to achieve?

They'd need to be able to interface with EA orgs and identify the specific forms of domain expertise that are required. Then they'd need to be able to go out and recruit those experts, who might never have heard of EA, and get them interested in the job. They'd be an interface to the expertise these orgs require. Push a button, get an expert.

That seems plausible. Triplebyte evokes the image of a huge recruiting service meant to fill cubicles with basically-competent programmers who are pre-screened for the in-house technical interview. Not to find unusually specific skills for particular kinds of specialist jobs, which it seems is what EA requires at this time.

That sort of headhunting job could be done by just one person. Their job would be to do a whole lot of cold-calling, getting meetings with important people, doing the legwork that EA orgs don't have time for. Need five minutes of a Senator's time? Looking to pull together a conference of immunologists to discuss biosafety issues from an EA perspective? That's the sort of thing this sort of org would strive to make more convenient for EA orgs.

As they gained experience, they would also be able to help EA orgs anticipate what sort of projects the domain experts they'd depend upon would be likely to spring for. I imagine that some EA orgs must periodically come up with, say, ideas that would require some significant scientific input. Some of those ideas might be more attractive to the scientists than others. If an org like this existed, it might be able to tell those EA orgs which ones the scientists are likely to spring for.

That does seem like the kind of job that could productively exist at the intersection of EA orgs. They'd need to understand EA concepts and the relationships between institutions well enough to speak "on behalf of the movement," while gaining a similar understanding of domains like the scientific, political, business, philanthropic, or military establishment of particular countries.

An EA diplomat.

But those needs, it seems to me, are filled.

I agree that there are fewer lower hanging fruit than there used to be. On the other hand, there's more guidance on what to do and more support for how to do it (perhaps "better maps to the trees" and "better ladders" - I think I'm plagiarising someone else on the ladder bit). I'd guess that it is now overall somewhat or significantly harder for someone in the position Ben Todd was in to make something as useful as 80k, but it doesn't seem totally clear.

And in any case, something as useful as 80k is a high bar! I think something could be much less useful and still be very useful. And someone perhaps could "skill up" more than Ben Todd had, but only for like a couple years. 

And I think there really are still a lot of fairly low hanging fruit. I think some evidence for this is the continuing number of EA projects that seem to fill niches that seem like they obviously should be filled, seem to be providing value, and are created by pretty early-career people. (I can expand on this if you want, but I think e.g. looking at lists of EA orgs already gives a sense of what I mean.)

I agree with many parts of your comment, but I continue to think only some sizeable fraction of people should be advised to "Go make yourself big and strong somewhere else, then come back here and show us what you can do", while also: 

  • many people should try both approaches at first
  • many people should focus mostly on the explicitly EA paths (usually after trying both approaches and getting some evidence about comparative advantage)
  • many people should go make themselves big and strong and impactful somewhere else, and then just stay there, doing great stuff

I think it's perhaps a little irresponsible to give public advice that's narrower than that - narrower advice makes sense if you're talking to a specific person and you have evidence about which of those categories of people they're part of, but not for a public audience.

(I think it's also fine to give public advice like "on the margin, somewhat more people should be doing X, and some ways to tell if you specifically should be doing X are Y and Z". I think 80k's advice tends to look like that. Though even that often gets boiled down by other people to "quick, everyone should do X!", and then creates problems.)

I do think it would be good to get more clarity on what proportion of EAs are spending too much vs too little time pursuing explicitly EA-aligned roles (given their ultimate goals, fit, etc.), and more clarity on what proxies can be used to help people work out which group they're in.

Though I think some decent insights can already be gleaned from, for example, posts tagged Working at EA vs Non-EA Orgs or things linked to from those posts, and one on one career advice conversations.

(And I think we can also improve on the current situation - where some people are writing themselves off and others have too narrow a focus - by just making sure we always clearly acknowledge individual variation, there being many different good paths, it taking time to work out what one is a fit for, etc.)

(I also like that Slate Star Codex post you link to, and agree that it's relevant here.)

I broadly endorse this advice.

I would agree strongly - but would advise people to think about how to stay connected to EA via both giving, and their social circles, ideally including local EA leadership positions, while building their resume and skill set.

SHOW: A framework for shaping your talent for direct work covers some quite similar themes, so readers may find that of interest as well.

TLDR: If your career as an EA has stalled, you’ll eventually break through if you do one (or more) of four things: gaining skills outside the EA community, assisting the work of more senior EAs, finding valuable projects that EAs aren’t willing to do, or finding projects that no one is doing yet.

See also Rob Wiblin's comment on the post:

This is great. One thing I'd add is 'Demonstrate'. (Or dare I say.... Show.)

If you think your skills are better than people can currently measure with confidence, you need to find a way to credibly signal how capable you are, while demanding as little time as possible from senior people in the process.

You can do that in a lower level role, or by pulling off some impressive, scrutable and visible project. Or getting a more classic credential. Maybe other things as well.

One reason so many prominent EAs have been writers in the past is not only that it's a very broadly useful skill. It's also a skill which is unusually public and easy for others to evaluate you on. It also gives you a chance to demonstrate your general reasoning ability, which is one of the most widely valued characteristics.

Don't try to wake up and save the world. Don't be bycatch. Take 15 years and become a domain expert. Take a career and become a macrostrategy expert. Mentor. Run small and non-EA projects. Circle back to EA periodically with your newfound skills and see what a difference you can make then. There is absolutely no way we can have a longtermist movement if we can't be longtermist about our own lives and careers. But if we can, then we can.


Many social movements I have been a part of (political, sports, religious, etc.) have a sort of "more is more" aspect to them that I see a lot in EA. The basic idea is this: it is fine if you just want to try things out, but the more involved you are in the organization, the more events you go to, the more you align your life around the mission of the organization, the better. To a large part, this is why I have often experienced "organizational burnout": when due to changing circumstances I cannot or do not want to be as involved anymore, it is often easier to quit altogether rather than to scale things back.

With most of my commitments these days I try to follow a "less is more" approach, where I try to capture about ~10% of the scope of an organization or movement. The 10% scope of EA seems to be the thinking that we should consider how important a problem is when choosing a career or where to donate our funds. If we get more involved than this, there is a danger of not being able to sustain it over a long time. The other advantage of being minimally involved is that we can join a larger number of organizations and get a diverse and rich set of benefits that exceed what any one organization can provide.

This seems like quite solid advice to me and especially relevant in light of posts like this. It makes a lot of sense to try to "skill up" in areas that have lower barriers to entry (as long as they provide comparable or better career capital), and I like the idea that "you can go get somebody else to invest in your skilling-up process, then in a way, you're diverting money and mentorship into EA." This seems especially valuable since it both redirects resources of non-EA orgs that might've otherwise gone to skilling up people who aren't altruistically-minded and frees up the resources of EA orgs that can now go towards developing other members of EA.

In a couple weeks, I'll publish a post on "Intervention options for improving the EA-aligned research pipeline" (as part of this sequence). One of the interventions I discuss there lines up well with some things you mention, so I'll share here the draft of the section on that intervention, in case it's of interest to you or some readers:

Increasing and/or improving EAs’ use of non-EA options for research training, credentials, etc.

  • Non-EA options for research training, credentials, etc. include:
    • Masters programs
    • PhD programs
    • online courses
    • various “bootcamps”
    • internships at think tanks
    • internships in politics or civil service
    • other jobs
  • Ways to increase and/or improve EAs’ use of such things could include:
    • Simply encouraging this
    • Helping guide people to examples of these things that are high-quality and suited to their needs
      • E.g., via recommendation lists
      • E.g., via 1-1 advice
      • E.g., via things like 80,000 Hours’ articles
    • Providing scholarships or grants specifically to support the use of these options for training, credentials, etc.
  • Regarding why this might be useful:
    • One commenter on a draft of this post wrote “I think academia developed the PhD process for a reason. IMO this is where most of the value in creating top notch researchers lies.”
      • I’d add that academia can also award or deny various credentials, which can then be considered as an aide to vetting (e.g., during EA-aligned hiring or grantmaking processes by EA orgs/funders)^[Of course, those credentials aren’t perfect proxies for what vetters care about. But the same is true of other proxies as well, and it seems clear that academia-related credentials might add some value in some vetting processes.]
    • Another commenter on a draft of this post wrote “I agree, and I think that it would generally be best if people won't need EAs to skill them up in research skills (as this is costly and is available outside of EA).”

[Edit: Actually, I've just had a really interesting conversation about concrete options for actually doing this well which I don't think are widely discussed, so I think I'll expand that section into its own mini post on the topic. I'll comment here once I've done so, since that could provide more ideas on next steps for readers of your post.]

I've now turned that section into a standalone draft post. I'll post it in ~2 or 3 weeks, but if people want to read the draft sooner than that, they can do so here.

Counter-point: If you are interested in an EA job or grant, please do apply to it, even if you haven't finished school. If you're reading the EA Forum, you are likely in the demographic of people whose applications (some) EA orgs and grantmakers want.

I just imagined the world where none of my early-career colleagues had applied to EA things. I think that world is plausibly counterfactually worse: possibly a world with fewer existing EA-adjacent orgs, smaller EA-adjacent orgs, or fewer high-impact EA jobs. I think the dynamic where we have a thriving community of EAs who apply for EA jobs and grants is a major strength of the movement. I think EA orgs benefit so much from having strong applicants relative to the wider hiring market. I hope everyone keeps erring on the side of applying!

But also yes definitely do look outside of EA - try your best to actually evaluate impact, don't get biased by whether or not something is labeled "EA". 
 

Also, don't worry about repeated rejections. Even if you are rejected, your application had expected value: it increased the probability that a strong hire was made and that more impact was achieved. The strength of the applicant pool matters. Rejection of strong applicants is a sign of a thriving and competitive movement. It means that the job you thought was important enough to apply to is more likely to be done well by whoever does it.

Rejection should not be taken as evidence that your talent or current level of experience is insufficient. I think that (for most people reading this forum) it's often less a lack-of-trust/vetting issue, and more a bit of randomness. I've applied lots of places. In some I did not even make it into the first round, totally rejected. In others I was a top candidate or accepted. I don't think this variance is because of meaningfully differing fit or competitiveness; I think it's because recruiting, grantmaking, or any process where you have to decide between a bunch of applications is idiosyncratic.

I'm sure anyone who has screened applications knows what I'm talking about; it's not an exact science. There are a lot of applicants and little time, and sometimes snap judgements must be made in a few seconds. At the end we pick a hopefully suitable candidate, but we also miss lots of suitable candidates, sometimes overlooking several "best" candidates. And then there are semi-arbitrary differences in what qualities different screeners emphasize (the interview? a work task? EA engagement? Academic degrees?). When there's a strong applicant pool, it means things are a bit more likely to go well.
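As a toy illustration of that idiosyncrasy (every parameter here is hypothetical; the point is qualitative, not the specific numbers), even a screening process that usually picks a strong candidate necessarily rejects nearly all of the strong applicants in every single search:

```python
# A quick Monte Carlo sketch of noisy screening. All parameters are
# hypothetical; the point is qualitative, not the specific numbers.

import random

def run_search(n_applicants=50, n_strong=10, noise=1.0, trials=10_000):
    hires_strong, rejected_strong = 0, 0
    for _ in range(trials):
        # Observed score = true quality (strong = 1.0, weak = 0.0)
        # plus idiosyncratic screener noise.
        scores = [(random.gauss(1.0 if i < n_strong else 0.0, noise), i < n_strong)
                  for i in range(n_applicants)]
        _score, hired_strong = max(scores)  # hire the highest observed score
        hires_strong += hired_strong
        rejected_strong += n_strong - hired_strong
    print(f"n_strong={n_strong}: strong hire rate {hires_strong / trials:.0%}, "
          f"strong applicants rejected per search {rejected_strong / trials:.1f}")

run_search(n_strong=10)  # a deeper strong pool makes a strong hire more likely,
run_search(n_strong=2)   # yet almost every strong applicant is still rejected
```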

(All that said, EA is big enough that all this stuff differs a lot by specific org as well as broader cause area)

Another factor which may play a role in the seeming arbitrariness of it all, is that orgs are often looking for a very specific thing, or have specific values or ideas that they emphasize, or are sensitive to specific key-words, which aren't always obvious and legible from the outside - leading to communications gaps. To give the most extreme example I've encountered of this, sometimes people don't indicate that they know what EA is about in their initial application, perhaps not realizing that they're being considered alongside non-EA applicants or that it might matter. For specific orgs, communication gaps might get more specific. If you're super interested in joining an org, getting a bit of intel on this can really help (and is a lot easier than trying to get experience somewhere else before re-applying!).

Great thoughts, ishaan. Thanks for your contributions here. Some of these thoughts connect with MichaelA's comments above. In general, they touch on the question of whether or not there are things we can productively discover or say about the needs of EA orgs and the capabilities of applicants that would reduce the size of the "zone of uncertainty."

This is why I tried to convey some of the recent statements by people working at major EA orgs on what they perceive as major bottlenecks in the project pipeline and hiring process.

One key challenge is triangulation. How do we get the right information to the right person? 80,000 Hours has solved a piece of this admirably, by making themselves into a go-to resource on thinking through career selection from an EA point of view.

This is a comment section on a modestly popular blog post, which will vanish from view in a few days. What would it take to get the information that people like you, MichaelA, and many others have, compile it into a continually maintained resource, and get it into the hands of the people who need it? Is that knowledge durable enough to be worth compiling, general enough to be worth broadcasting, and EA-specific enough not to be available elsewhere?

I'm primarily interested here in making statements that are durably true. In this case, I believe that EA grantmakers will always need to have a bar, and that as long as we have a compelling message, there will consequently always be some people failing to clear it who are stuck in the "zone of uncertainty."

With this post, I'm not trying to tell them what they should do. Instead, I am trying to articulate a framework for understanding this situation, so that the inchoate frustration that might otherwise result can be (hopefully) transmuted into understanding. I'm very concerned about the people who might feel like "bycatch" of the movement, caught in a net, dragged along, distressed, and not sure what to do.

That kind of situation can produce anger at the powers that be, which is a valid emotion. However, when the "powers that be" are leaders in a small movement that the angry person actually believes in, it could be more productive to at least come to a systemic understanding of the situation that gives context to that emotion. Being in a line that doesn't seem to be moving very fast is frustrating, but it's a very different experience if you feel like the speed at which it's moving is understandable given the circumstances.

With this post, I'm not trying to tell them what they should do.

I think that conflicts with some phrasings in this post, which are stated as recommendations/imperatives. So if in future you again have the goal of not telling people what they should do but rather providing something more like emotional support or a framework, I recommend trying to avoid that kind of phrasing. (Because as mentioned in another comment, I think this post in effect provides career advice and that that advice is overly specific and will only be right for some readers.) 

Example paragraph that's stated as about what people should do:

Don't try to wake up and save the world. Don't be bycatch. Take 15 years and become a domain expert. Take a career and become a macrostrategy expert. Mentor. Run small and non-EA projects. Circle back to EA periodically with your newfound skills and see what a difference you can make then. There is absolutely no way we can have a longtermist movement if we can't be longtermist about our own lives and careers. But if we can, then we can.

I can see how you might interpret it that way. I'm rhetorically comfortable with the phrasing here in the informal context of this blog post. There's a "You can..." implied in the positive statements here (i.e. "You can take 15 years and become a domain expert"). Sticking that into each sentence would add flab.

There is a real question about whether or not the average person (and especially the average non-native English speaker) would understand this. I'm open to argument that one should always be precisely literal in their statements online, to prioritize avoiding confusion over smoothing the prosody.

What would it take to get the information that people like you, MichaelA, and many others have, compile it into a continually maintained resource, and get it into the hands of the people who need it?

I guess the "easy" answer is "do a poll with select interviews" but otherwise I'm not sure. I guess it would depends on which specific types of information you mean? To some degree organizations will state what they want and need in outreach. If you're referring to advice like what I said re: "indicate that you know what EA is in your application", a compilation of advice posts like this one about getting a job in EA might help. Or you could try to research/interview to find more concrete aspects of what the "criteria +bar to clear on those criteria" is for different funders if you see a scenario where the answer isn't clearly legible. (If it's a bar at all. For some stuff it's probably a matter of networking and knowing the right person.)

Another general point on collecting advice is that I think it's easy to accidentally conflate "in EA" (or even "in the world") with "in the speaker's particular organization, in that particular year, within that specific cause area" when listening to advice… The same goes for what both you and I have said above. For example, my perspective on early-career is informed by my particular colleagues, while your impression that "funders have more money than they can spend" or the work being all within "a small movement" etc. is not so applicable for someone who wants to work in global health. Getting into specifics is super important.

I agree with most of the things you said.

But I think rejection should be taken as evidence that your talent or current level of experience is insufficient. Rejection from any one round is weak evidence, because there are lots of other factors + random noise that might also explain the result. But if you applied to a similar type of role 100 times and were rejected 100 times without making it through the initial screening, that would be strong evidence. (Caveat that this might just be semantics/pedantry and we might already agree)

I agree, a single rejection is not close to conclusive evidence, but it is still evidence on which you should update (though, depending on the field, possibly not very much)

I agree with your first comment, and am sad to see it downvoted. As I mentioned in my comment above, I think for a lot of people, at least a lot of people who do think they'd be interested in EA jobs or grants, it really makes sense to apply to both EA and non-EA things. And it makes sense to apply to lots of things, even though / because any given application probably has a low chance of success. (And when success happens, that's usually a really big positive for both the applicant and the org/grantmaker, such that it can make up for the cost of many applications.)

I do think it's possible for people to spend too long applying to things, but I think it's probably more common to make too few applications and so end up either with no offers or with a less good offer than one could've gotten. And I certainly think it's possible for people to focus too much on EA orgs/grants and not apply enough to non-EA ones, but I think often (not always) the real problem there is that they're not applying to enough non-EA stuff, rather than that they're applying to too many EA things.

All that said, I disagree with "Rejection should not be taken as evidence that your talent or current level of experience is insufficient", taken literally. Rejection should be taken as (very) weak evidence of exactly that. Consider: If you were accepted, this would be evidence that you are a good fit for the role. And you started out thinking there was some chance you'd be accepted. So a rejection has to be some evidence that you aren't a fit. (See also.)

I think people often update too strongly on that weak evidence, and it's good to caution against that. But the evidence can still matter - e.g., if you've now had 5-10 rejections for one type of role and got an offer for another type, your decision about whether to accept the latter role or keep looking should take into account the now weak/moderate evidence you're not a great fit for the former.

Heh, I was wondering if I'd get called out on that. You're totally right, everything that happens in the world constitutes evidence of something! 

What I should have said is that humans are prone to fundamental attribution error, and it is bad to privilege the hypothesis that it's evidence of real skill/experience/resume signalling/degree etc., because then you risk working on the wrong things. Rejections are evidence, but they're mostly evidence of a low baseline acceptance rate, and only slightly evidence of other things.
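A toy Bayesian sketch (numbers purely illustrative) of that point: when even strong fits clear a crowded process only rarely, a single rejection barely moves the posterior, while a long run of rejections moves it more:

```python
# A toy Bayesian update, with purely illustrative numbers: how much should
# rejections shift your belief that you're a strong fit for this role type?

def posterior_fit(prior, p_accept_if_fit, p_accept_if_not, n_rejections):
    """P(strong fit | n rejections in a row), assuming independent rounds."""
    p_rej_fit = (1 - p_accept_if_fit) ** n_rejections
    p_rej_not = (1 - p_accept_if_not) ** n_rejections
    return prior * p_rej_fit / (prior * p_rej_fit + (1 - prior) * p_rej_not)

# Suppose even strong fits clear a crowded, noisy process only 15% of the
# time, weak fits 5%, and you start from a 50/50 prior:
for n in (1, 5, 10):
    print(n, round(posterior_fit(0.5, 0.15, 0.05, n), 2))
# 1 rejection   -> 0.47 (barely moves)
# 5 rejections  -> 0.36
# 10 rejections -> 0.25 (now weak/moderate evidence, as described above)
```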

I can imagine someone concluding things like "I'd better get a PhD in the subject so I can signal as qualified and then try again" in a scenario where maybe the thing that would've shifted their chances is rewording a cover letter, spending a single day researching some examples of well-designed CEAs before the work task, or applying on a different year.

'Determining where to set the bar is another challenge. If they set it too low, they'll throw away money. If they set it too high, they'll have money sitting around with nothing to do. This isn't necessarily bad, though. They can save it for the future, in hopes that more high-quality projects will appear later.'

It's worth noticing that these aren't symmetrical cases. Assuming a vaguely normal diminishing returns curve, money 'thrown away' would still have positive EV, just less of it. 

On the other hand, money not spent has whatever your best guess at its future value is, which will depend, e.g., on a) how likely you are to discover opportunities whose EV eclipses the current set, given that you sit on the money, b) the same, given that you spend the money, c) how likely other philanthropists will be to fill future high-EV funding gaps if you don't, and d) how much the value gained by donating it now would have compounded in the meantime.

Given that GiveWell have included the same charity (AMF) in their top tier almost every year since their inception, the upper bound for both a) and b) seems not that high (though intuitively higher for b), and given the increasing numbers of very wealthy EA-aligned people, c) also seems to have diminished a lot. So most of the case for conservative funding seems to rest on d).

This seems insufficient to me to justify the low rate of risky - aka exploratory - funding.

I think it's common for funding opportunities "just below the bar" to have capped upside potential, in the sense of the funders thinking that the grants are highly unlikely to generate very high impact. At least, that's my experience with grantmaking. The things I felt unsure about tended to always be of the sort "maybe this is somewhat impactful, but I can't imagine it being a giant mistake not to fund it." By contrast, saving money gives you some chance of having an outsized impact later on, in case you end up desperately needing it for a new, large opportunity. 
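As a toy illustration (every number here is hypothetical), the trade-off can be written out directly: p_big corresponds roughly to a)/b) above, p_filled_by_others to c), and discount to d). Which option wins depends entirely on the parameters you plug in:

```python
# A toy expected-value comparison; every parameter here is hypothetical.
# Fund a capped-upside marginal grant now, or save for a possible big
# future opportunity?

def ev_fund_now(marginal_impact=1.0):
    # Just-below-the-bar grants can have positive EV, but capped upside.
    return marginal_impact

def ev_save(p_big=0.10, big_impact=20.0, p_filled_by_others=0.5, discount=0.9):
    # Saving pays off only if a big opportunity appears AND other funders
    # wouldn't have filled it anyway; discounted for the wait.
    return discount * p_big * (1 - p_filled_by_others) * big_impact

print(ev_fund_now())  # 1.0
print(ev_save())      # 0.9 * 0.1 * 0.5 * 20 = 0.9
```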

Good thoughts. I think this problem decomposes into three questions:

  1. Should there be a bar, or should all EA projects get funded in order of priority until the money runs out?
  2. If there's a bar, where should it be set, and why?
  3. After the bar is set, when should grantmakers re-examine its underlying reasoning to see if it still makes sense under present circumstances?

My post actively argues that we should have a bar, is agnostic on how high the bar should be, and assumes that the bar is immobile for the purposes of the reader.

At some point, I may give consideration to where and how we set the bar. I think that's an interesting question both for grant makers and people launching projects. A healthy movement would strive for some clarity and consensus. If neophytes could more rapidly gain skill in self-evaluation relative to the standards of the "EA grantmaker's bar," without killing the buzz, it could help them make more confident choices about "looping out and back" or persevering within the movement.

For the purposes of this comment section, though, I'm not ready to develop my stance on it. Hope you'll consider expanding your thoughts in a larger post!
