
What is the actual evidence (examples) in favor of working at an EA organization (EAO) instead of earning to give (ETG)? The options I am considering are becoming a GR, working in AI safety or in strategy and policy, and management positions. Relevant examples/sites are much appreciated.

I do not want claims (hiding under "We believe Y is true"). I am really looking for evidence (for example: working as a GR at Open Phil would produce XX impact, so ETG of more than $150k would produce the same XX impact).

I would hope the evidence includes factors for replaceability, donor contribution vs. EA org contribution, etc. Evidence-based links for these factors are also much appreciated.
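To make the shape of the evidence I'm after concrete, here is a minimal sketch of the calculation; every number in it is a made-up placeholder (an assumption for illustration, not evidence):

```python
# Minimal sketch of a replaceability-adjusted comparison. All numbers
# are made-up placeholders that would need real evidence behind them.

def direct_work_impact(gross_impact, replaceability, attribution):
    """Replaceability-adjusted impact of taking a direct-work role.

    gross_impact:   value the role produces per year (e.g. dollars moved
                    to effective charities)
    replaceability: fraction of that value the next-best hire would have
                    produced anyway (0-1)
    attribution:    share of the output credited to the employee rather
                    than to donors, managers, etc. (0-1)
    """
    return gross_impact * (1 - replaceability) * attribution

# Placeholder guesses, NOT evidence:
gr_impact = direct_work_impact(gross_impact=1_000_000,
                               replaceability=0.5,
                               attribution=0.3)   # => $150,000/yr
etg_impact = 150_000  # $150k/yr donated, taken at face value

# With these placeholders the two paths come out equal, mirroring the
# framing above; real estimates could tip it either way.
print(f"Direct work: ${gr_impact:,.0f}/yr vs ETG: ${etg_impact:,.0f}/yr")
```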

So far, the only example I have is from Milan Griffes, here: https://80000hours.org/2016/08/reflections-from-a-givewell-employee/

Thanks.

4 Answers

OP posted this question, worded slightly differently, on Facebook as well. I answered there, and they asked me to repost here.

[TLDR: I don't think that anyone can give you the examples relevant for you. You need to find and speak to the relevant people, outside of EA as well. It is very doable, but there are also workarounds if you can't decide right now. Action creates clarity.]

I think 80K is actually saying it is better for most people to do direct work (including but not limited to neglected roles at EA orgs) than to ETG. So don't just consider EAO vs. ETG. Direct work most likely will *not* be at an EA org, but will be within a field important to a top EA cause area.

The preference for roles outside of EA makes sense to me because, while an EA org is likely to find a few good top candidates it considers value-aligned, acting in the wider world using EA principles has a much more reliable (and even stronger) counterfactual: the counterfactual hire at a non-EA business/org/foundation is unlikely to operate with EA in mind.

This is similar to how earning to give has a more reliable counterfactual than working at an EA org: you are almost certainly adding 100% extra money to the pot, since the candidate who would otherwise have gotten your high-paying job would almost certainly not have donated to an effective charity.

In the end, though, the right path for you depends on a lot. You must consider your personal fit and your EA comparative advantage. It also depends on how you expect your favored cause areas, funding, and EA as a movement to evolve. I recommend brain-dumping and drafting as much as you can regarding those five things (fit, comparative advantage, cause areas, funding, and the movement's evolution) to clarify expectations and cruxes! If you can find cruxes, then you can investigate them with expert interviews. Reach out to individuals, not orgs.

Regarding direct work options, reach out to individuals in roles that you could see yourself in (within or outside of an EA org). Even if you are stuck with half a dozen possible roles, that is narrowed down enough that you can ask people in those roles:

- Whether they feel they are making an impact
- What other options they were deciding between, and why they chose what they did
- Where they think the field will go and what it will need
- Whether they think you would be a good fit

Now you can compare ETG to what you learned about direct work. You can interview people earning to give in the field you'd work in, and people involved in philanthropy in the space you'd be donating to. That could include:

- Fundraisers for an org you love
- Grantmakers in the space
- Relevant foundation researchers, coordinators, and others

Then see if they expect extra annual donations of A-B to be better/worse than direct work of X, Y, or Z.

If you need to further clarify ETG advantage, you can speak to hiring managers or heads of people at EA or non-EA places you'd be excited to work at. Ask them how much better their best candidate tends to be than their second-best candidate.
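One rough way to use that answer, sketched below with placeholder numbers (a crude model, not any real org's data), is to read the second-best candidate's relative quality as a replaceability estimate:

```python
# Hedged sketch: turning a hiring manager's "best vs. second-best"
# answer into a crude replaceability estimate. Both figures are
# made-up placeholders, not data from any real org.
best_candidate_value        = 100  # output of the hired candidate (arbitrary units)
second_best_candidate_value = 80   # "our runner-up is ~80% as good"

replaceability = second_best_candidate_value / best_candidate_value
print(f"Crude replaceability estimate: {replaceability:.0%}")  # => 80%
# The higher this is, the smaller the counterfactual impact of taking the job.
```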

On the whole, informational interviews are priceless.

You can find all these people using this method or by asking others if they know someone who is doing a certain role.

Here is a recent forum post on how to prepare for informational interviews (keep in mind you might want to be more formal with non-EAs). Don't forget to say thanks and cement the bond afterward. If you can help the person in any small way, you should.

And here are two blurbs from 80k encouraging informational interviews and other types of exploration.

So, long story short, you will need to find those people, examples, and evidence that are relevant to you. I get that it is really not easy... I'm in the middle of it too. But just keep getting things down on paper and things will start to become clearer. Take it bit by bit, and try to get more (not necessarily complete) clarity on one aspect per day.

Also, you don't have to have your path figured out now. If you can narrow it down to 2-3 options, see what next step you could take that would be relevant to both/all paths. If you are at the exact branching point today, then try out a role for a year in a way that should give you pretty good career capital for the other option(s). Then switch to try out another role in a year's time if it still is not clear. Most likely, a couple of key cruxes will arise while you work.

Action creates clarity, so don't worry about getting things perfect for now. You actually just need to learn enough to take your immediate next step with confidence.

Good luck, and feel free to PM me.

> TLDR: I don't think that anyone can give you the examples relevant for you.

I don't need examples "relevant to me". I just wanted to know what sort of impact people are making, say, at Open Phil, CE, or other EA orgs, in fields like GR, AI SSP, and management positions, considering relevant factors such as replaceability. Sorry that was not clear.

> I think 80K is actually saying it is better for most people to do direct work (including but not limited to neglected roles at EA orgs) than ETG.

This is a claim that 80,000 Hours makes. Do you have ONE example for this c…

I mean, it's kinda intertwined, right? Presumably you are earning to give to fund people to do stuff. So someone needs to do that stuff. That person could be you. Or you could be the one funding. I think it really comes down to comparative advantage / personal fit (how good are you personally at doing vs. earning?) and marginal resources (of the orgs you would donate to or work for, how much do they want a talented person versus more money?).

In short, I think getting general examples of people having a high impact by working in an EA org would be misleading for anyone actually making this kind of career path decision.

> I mean, it's kinda intertwined, right? Presumably you are earning to give to fund people to do stuff. So someone needs to do that stuff. That person could be you. Or you could be the one funding. I think it really comes down to comparative advantage / personal fit (how good are you personally at doing vs. earning?) and marginal resources (of the orgs you would donate to or work for, how much do they want a talented person versus more money?).

How do I do this, Peter? I would think I need to start with what values of impact I can get with ETG and working a…
Peter Wildeford:
To make a very long story very short, I think you should focus on trying to get a direct work job while still doing what is needed to keep your FAANG* options open. Then apply to direct work jobs and see if you get one. If you don't, pivot to FAANG. Also, while doing a FAANG job, you could still aim to build relevant skills for direct work and then switch. This is what I did (except I didn't work in FAANG specifically). Also, from what I know, donating $200k/yr while working in FAANG is possible for the top ~10% of engineers after ~5 years.

*For those following along who don't know: FAANG = Facebook, Amazon, Apple, Netflix, Google.
agent18:
Peter, please bear with me.

1. So it looks like you are suggesting that ALL DIRECT WORK (DW) is better than FAANG-type work any day, provided you get a job, EVEN if the market pool has many strong applicants. Is that correct?

2. I think I can focus on one thing: either keeping FAANG open or pursuing DW opportunities. I am 29, Indian by birth, and working in the Netherlands right now. The common route to a big-bucks FAANG job (hence California) would require $50k in costs and a Master's degree to get into the US, and I probably need to start the Master's in 1-2 years max if I hope to be a FAANG guy in the US (a guess/feeling). So prepping for this from now on would be option 1. Based on what I have seen, I don't think I will make it into direct work jobs now; I would need to work intensely on that separately as well, depending on the type of job. That would be option 2, provided I know what to focus on. Focusing on options 1 and 2 at the same time will be hard in this case, I think! Thoughts?

3. Direct work in what? Each option seems to need its own separate prep: GR, AI safety technical researcher, management positions. How do I compare the different opportunities? It circles back again, I think, to calculations and examples of values.

4. On the other hand, I could try to COPY YOU:
   - Get a data science job in the US (by doing a Master's, maybe?)
   - Be REALLY GREAT at something! Have at least a triple Master rank on Kaggle, for example (2-3 years maybe)
   - Be involved with the EA community (treasurer, research manager --> no idea how to get there, though!)
   - Build relevant skills for direct work (not sure what "relevant skills" means)
   - And SOMEHOW IT WILL WORK OUT! (possibly because there is a lot of overlap between research and data science?)

Can you give 2 examples of relevant skills you built for a particular direct work role, and how you built them? Wow. The power of ETG at FAANG.

I suppose I'm not directly answering your question, but I think it might be pretty hard to answer well, if you want to try to account for replaceability properly, because many people can end up in different positions because of you taking or not taking a job at an EA org, and it wouldn't be easy to track them. I doubt anyone has tried to. See this and my recent post.

> I suppose I'm not directly answering your question, but I think it might be pretty hard to answer well, if you want to try to account for replaceability properly, because many people can end up in different positions because of you taking or not taking a job at an EA org, and it wouldn't be easy to track them.

If one hasn't taken into account replaceability, or the displacement chain, how does one know it is better to work at an EA org rather than to ETG (for X dollars)?

Milan Griffes reports with a replaceability of 10% (guess) and attributing 60% (guess) contr…
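A minimal sketch of the displacement-chain idea being discussed, assuming (as a placeholder, not a measured figure) that each link in the chain creates half the value of the one before it:

```python
# Sketch of a "displacement chain": if you take the job, the runner-up
# takes their next-best option, displacing someone there, and so on.
# The 0.5 decay factor and 5-link chain length are made-up placeholders.

def chain_value(your_role_value, decay=0.5, links=5):
    """Total value created by everyone displaced down the chain."""
    return sum(your_role_value * decay**i for i in range(1, links + 1))

print(f"Value created down the chain: {chain_value(100):.1f} arbitrary units")
# => ~96.9 units, nearly as much as the role itself (100 units), which is
# why ignoring replaceability can badly overstate direct-work impact.
```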
MichaelStJules:
New charities will sometimes be started to create more EA org positions, and they wouldn't get far if they didn't have people who were the right fit for them. Rethink Priorities and Charity Entrepreneurship are relatively new (although very funding-constrained, and this might be the bottleneck both for their hiring and for starting new charities like them). Charity Entrepreneurship is starting many more EA orgs with their incubation program (incubated charities here). Maybe worth reaching out to them to see what their applicant pool is like?

I think there are also specific talent bottlenecks; see [1], [2], [3]. Actually, this last one comes from Animal Advocacy Careers, a charity incubated by Charity Entrepreneurship to address the effective animal advocacy talent bottlenecks.

Btw, I think you have the wrong link for Carrick's.
Jamie_Harris:
Thanks for mentioning AAC! Not sure about Rethink Priorities, but a minor correction: the last time I spoke to CE about this, they didn't see funding as a substantial constraint. They felt more constrained by high-quality applicants to their programme. Edit: CE are now looking for funding, so they are at least partly funding-constrained!
agent18:
Good idea. I will contact them as well to see the talent pool. If they still need "high-quality people", somehow getting better in that direction seems like a good opportunity.

Michael, I have written an article in my unfinished blogspace about [1] and [2]: http://agent18.github.io/is-ea-bottlenecked-2.html. I really don't find evidence for their claims of bottlenecks, or I don't understand what they are trying to say. For example, GR in GPR is recommended by 80,000 Hours in their high-impact-careers post, in the surveys, and in the separate problem profiles, etc. Yet during Open Phil's hiring round there were literally hundreds of "good resumes" and "many candidates worthy of positions", but Open Phil could not absorb all of them. Peter Hurford can also be seen talking about the lack of a talent constraint in GR (I think).

This I really need to look into. Thanks for that.

Thanks. Corrected it. Sorry about that.

Bottom line: I don't know how people evaluate which career to choose. Many people redirect me to posts from 80,000 Hours, but I find only claims there. When I ask organizations about value generated and replaceability, I don't get any info from them. I think people guess at best, falling prey to vague words like "career capital", or possibly just focus primarily on what they are good at, or I don't know. Anyways... it seems like a dead end to think that I can actually evaluate what I should be doing. Your thoughts?

How did you end up choosing to go to DarwinAI? Why not something else, like GR in GPR or FAANG?
MichaelStJules:
I'd say it was kind of decided for me, since those other options were ruled out at the time. I applied to internships at some EA orgs, but didn't have any luck. Then I did a Master's in computational math, and after that I started working part-time at a machine learning lab at the university while I looked for full-time work. I applied to AI internships at the big tech companies, but didn't have any luck either. I got my job at DarwinAI because I was working for two of its cofounders at the lab; I had no industry experience before that.

I'm currently trying to transition to effective animal advocacy research: reading more research, offering to review research before publication, applying to internships and positions at the orgs, and studying more economics/stats (one of the bottlenecks discussed here). Quantitative finance is my second choice, and going back to deep learning in industry is my third. I feel that EA orgs have been a bit weak on causal inference (from observational data), which falls under econometrics/stats.
agent18:
Your options sound solid. I guess you're 28 and can thus still get into something relatively different like quantitative finance. But how did you decide that it is best for you to dedicate your time to AAR? You could be working at GiveWell/Open Phil as a GR, or at OpenAI/MIRI in AI safety research (especially with your CS and math background); you could also be doing ETG at a FAANG. Also, 80,000 Hours nowhere seems to suggest that AAR, of all things, is a "high-impact career", nor does the EA survey say anything about it; in fact, the survey talks about GR and AI safety. And did you account for replaceability and other factors? If so, how did you arrive at these numbers? So you hope to apply causal inference in AAR?

Lastly, I want to thank you from the heart for taking the time and effort to respond to me. Appreciate it, brother.
MichaelStJules:
26, but 2 years isn't a big difference. :)

I'm choosing AAR over other causes due to my cause prioritization, which depends on both my ethical views (I'm suffering-focused) and my empirical views (I have reservations about longtermist interventions, since there's little feedback, and I don't feel confident in any of their predictions and hence their cost-effectiveness estimates). 80,000 Hours is very much pushing longtermism now. I'm more open to being convinced about suffering risks, specifically.

I'm leaning against a job consisting almost entirely of programming, since I came to not enjoy it that much, so I don't think I'd be motivated to work hard enough to make it to $200K/year in income. I like reading and doing research, though, so AI research and quantitative finance might still be good options, even if they involve programming.

I didn't do any explicit calculations. The considerations I wrote about replaceability in my post, and the discussion here, have had me thinking that I should take ETG to donate to animal charities more seriously. I think econometrics is not very replaceable in animal advocacy research now, and it could impact the grants made by OPP and the animal welfare funds, as well as ACE's recommendations.

I'll try a rough comparison now. I think there's more than $20 million going around each year in effective animal advocacy, largely from OPP. I could donate ~1% of that ($200K) per year in ETG if I'm lucky. On the other hand, if I do research for which I'd be hard to replace and that leads to different prioritization of interventions, I could counterfactually shift a good chunk of that money to (possibly far) more cost-effective opportunities. I'd guess that corporate campaigns alone are taking >20% of EAA's resources; good intervention research (on corporate campaigns or other interventions) could increase or decrease that considerably. Currently only a few people at Humane League Labs and a few (other) economists (basically studying the effects of reforms in…
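A rough rendering of the comparison sketched in that comment: the $20 million pool and the $200K donation are the comment's own figures, while the fraction of the pool that research might re-prioritize and the resulting effectiveness gain are made-up placeholders.

```python
# Rough sketch of the ETG-vs-research comparison above. The $20M/yr pool
# and $200K/yr donation are the comment's figures; the shifted fraction
# and effectiveness gain are made-up placeholders.
eaa_funding_per_year = 20_000_000  # total EAA funding pool ($/yr)
etg_donation         = 200_000     # ~1% of the pool via ETG ($/yr)

shifted_fraction   = 0.05  # placeholder: research re-prioritizes 5% of the pool
effectiveness_gain = 0.5   # placeholder: shifted money does 50% more good

research_equivalent = eaa_funding_per_year * shifted_fraction * effectiveness_gain
print(f"ETG: ${etg_donation:,.0f}/yr vs research-equivalent: ${research_equivalent:,.0f}/yr")
# => $200,000/yr vs $500,000/yr under these placeholder assumptions
```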

From this older article:

> This may apply, for example, to taking a job with GiveWell, who likely follow a process more akin to 'threshold hiring'. In this case, it seems likely that taking this job may increase the number of overall jobs by close to 1.

Not very good evidence, though, without word directly from GiveWell.

More on threshold hiring here, but no EA-specific examples.

Thanks, Michael. As you said, we would need to confirm it with GiveWell. In 2019 they planned to hire 3-5 new research staff. It looks like they are deliberately limiting GiveWell's growth relative to the available "talent pool" seen in Open Phil's hiring round. The priors also suggest that GiveWell would like to "grow slowly": https://blog.givewell.org/2013/08/29/we-cant-simply-buy-capacity/

So I really doubt we should go by 80K's claim in this regard.

Aaron Gertler:
Note that GiveWell has stated longer-term plans to "more than double" the size of their research team by early 2022. I assume that one of their bottlenecks has been research management capacity: they recently chose a new Managing Director, but that person doesn't start until July 2020. I wouldn't be surprised if hiring scales up after that (though I don't know for sure that it will).
Comments

Would OPP and the EA Funds grant more funding overall if new EA orgs were started, or do they distribute a fixed amount of funding? New EA orgs would create more positions for EAs to fill.

It looks like EA Funds distributes a roughly fixed amount (based on their own fundraising), whereas I imagine Open Phil is more flexible.

This also seems right to me. We roughly try to distribute all the money we have in a given year (with some flexibility between rounds), and we aren't planning to hold large reserves. So based on our decisions alone, we couldn't ramp up our grantmaking just because better opportunities arise.

However, I can imagine donations to us increasing if better opportunities arise, so I do expect there to be at least some effect.
