For people reading this post now as part of the decade review, I think this article was useful to get people thinking about this issue, but the more comprehensive data in this later post is more useful for actually estimating the rate of drop out.
This was popular, but I'm not sure how useful people found it, and it took a lot of time. I hoped it might become an ongoing feature, but I couldn't find someone able to and willing to run it on an ongoing basis.
These are still the best data on community drop out I'm aware of.
I think the post made some important but underappreciated arguments at the time, especially for high-stakes countries with more cultural differences, such as China, Russia, and Arabic-speaking countries. I might have been too negative about expanding into smaller countries that are culturally closer. I think it had some influence too, since people still often ask me about it.
One aspect I wish I'd emphasised more is that it's very important to expand to new languages – my main point was that the way we should do it is by building a capable, native-language ... (read more)
I still think this post was making an important point: that the difference in cause views in the community was between the most highly engaged several thousand people and the more peripheral people, rather than between the 'leaders' and everyone else.
This was written pretty recently and I still agree with it!
This is still our most current summary of our key advice on career planning, and I think useful as a short summary.
If I was writing it again today, there are a few points where it could be better synced with our updated key ideas series, and further simplified (e.g. talking about 3 career stages rather than 4).
There is still little writing about what the fundamental claims of EA actually are, or research to investigate how well they hold, or work to communicate such claims. This post is one of the few attempts, so I think it's still an important piece. I would still really like people to do further investigation into the questions it raises.
I think the approach taken in this post is still good: make the case that extinction risks are too small to ignore and neglected, so that everyone should agree we should invest more in them (whether or not you're into longtermism). It's similar to the approach taken in The Precipice, though less philosophical and longtermist.
I think it was an impactful post in that it was 80k's main piece arguing in favour of focusing more on existential risk during a period when the community seems to have significantly shifted towards focusing on those risks, and during ... (read more)
Hey there,

My impression is that the relative degree of ops bottleneck might have become worse recently (after easing a bit by early 2020), so we'll consider updating that blurb again. To double-check this, we would ideally run another survey of org leaders about skill needs, and there's some chance that happens in the next year.
Another reason why we dropped it is just because 'work at EA orgs' is already a priority path, and this is a subpath of that, and I'm not sure we should list both the broader path and subpath within the priority paths list (e.g. I also think 'research roles at EA orgs' is a big bottleneck but don't want to break that out as a separate category).
Just some quick feedback that I didn't find it very convincing to say that people like Peter Singer, Julian Savulescu, Jeff McMahan and Jeff Sebo have supported things like 1DaySooner, since they're pretty affiliated with EA and consequentialist ethics. I don't think anyone is claiming that consequentialist or EA-affiliated bioethicists have silly views. The review of randomly selected bioethics papers seems more convincing.
It's not exactly a nice conclusion.
You'd need to think something like geniuses tend to come from families with genius potential, and these families also tend to be in the top couple of percent by income.
It would line up with claims made by Gregory Clark in The Son Also Rises.
To be clear, I'm not saying I agree with these claims or think this model is the most plausible one.
I was pretty struck by how per capita output isn't obviously going down, and it's only when you do the effective population estimates that it does.
Could this suggest a 4th hypothesis, the 'innate genius' theory: about 1 in 10 million people are geniuses, and at least since around 1400, talent-spotting mechanisms have been good enough to find them, so the fraction of the population that was educated or urbanised doesn't make a difference to their chances of doing great work?
I think I've seen people suggest this idea - I'm curious why you didn't include it in the post.
Agree it's worth trying! We're hoping to try some sponsorships at 80k, and I think there are a couple of other collaborations and attempts at sponsorship going on.
Good point - seems plausible that it's a little more effective than their final $1000.
I agree I should have mentioned movement building as one of the key types of roles we need.
I did mention it in my later talk specifically about the implications: https://80000hours.org/2021/11/growth-of-effective-altruism/
Thanks, fixed. (https://twitter.com/ben_j_todd/status/1462882167667798021)
It's hard to know – most valuations of the human capital are bound up with the available financial capital. One way to frame the question is to consider how much the community could earn if everyone tried to earn to give. I agree it's plausible that would be higher than the current income on the capital, but I think could also be a lot less.
It's hard to know – most valuations of the human capital are bound up with the available financial capital.
Agreed. Though I think I believe this much less now than I used to. To be more specific, I used to believe that the primary reason direct work is valuable is because we have a lot of money to donate, so cause or intervention prioritization is incredibly valuable because of the leveraged gains. But I no longer think that's the but-for factor, and as a related update think there are many options at similar levels of compellingness as p... (read more)
Thanks for red teaming – it seems like lots of people are having similar thoughts, so it’s useful to have them all in one place.
First off, I agree with this:
I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.
I say this in the introduction (and my EA Global talk). The point I’m trying to get across is that earning to give to top EA causes is still perhaps (to use made-up numbe... (read more)
One way to steelman your critique would be to push on talent vs. funding constraints. Labour and capital are complementary, but it's plausible the community has more capital relative to labour than would be ideal, making additional capital less valuable.
I'm not sure about this, but I currently believe that the human capital in EA is worth considerably more than the financial capital.
There isn't a hard cutoff, but one relevant boundary is when you can ignore the other issue for practical purposes. At 10-100x differences, other factors like personal fit or finding an unusually good opportunity can offset differences in cause effectiveness. At, say, 10,000x, they can't.
Sometimes people also suggest that e.g. existential risk reduction is 'astronomically' more effective than other causes (e.g. 10^10 times), but I don't agree with that for a lot of reasons.
That's fair - the issue is there's a countervailing force in that OP might just fill 100% of their budget themselves if it seems valuable enough. My overall guess is that you probably get less than 1:1 leverage most of the time.
I think this dynamic has sometimes applied in the past.
However, Open Philanthropy are now often providing 66%, and sometimes 100%, so I didn't want to mention this as a significant benefit.
There might still be some leverage in some cases, but less than 1:1. Overall, I think a clearer way to think about this is in terms of the value of having a diversified donor base, which I mention in the final section.
+1 to this!

If you're a software engineer considering transitioning into AI Safety, we have a guide about how to do it, and an accompanying podcast interview.

There are also many other ways software engineers can use their skills for direct impact, including in biosecurity and by transitioning into information security, building systems at EA orgs, or working in various parts of government.
To get more ideas, we have 180+ engineering positions on our job board.
There are no sharp cutoffs - just gradually diminishing returns.
An org can pretty much always find a way to spend 1% more money and have a bit more impact. And even if an individual org appears to have a sharp cutoff, we should really be thinking about the margin across the whole community, which will be smooth. Since the total donated per year is ~$400m, adding $1000 to that will be about as effective as the last $1000 donated.
You seem to be suggesting that Open Phil might be overfunding orgs so that their marginal dollars are not actually... (read more)
Yes, my main attempt to discuss the implications of the extra funding is in the Is EA growing? post and my talk at EAG. This post was aimed at a specific misunderstanding that seems to have come up. Though, those posts weren't angsty either.
This is the problem with the idea of 'room for funding'. There is no single amount of funding a charity 'needs'. In reality there's just a diminishing return curve. Additional donations tend to have a little less impact, but this effect is very small when we're talking about donations that are small relative to the charity's budget (if there's only one charity you want to support), or small relative to the EA community as a whole if you take a community perspective.
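A minimal sketch of that arithmetic, using a logarithmic curve purely as an illustrative stand-in for diminishing returns (the curve shape and the ~$400m figure from above are the assumptions here, not a model of any real charity):

```python
import math

# Illustrative concave impact curve -- log is an assumption chosen only
# to demonstrate diminishing returns, not a real cost-effectiveness model.
def impact(total_donations):
    return math.log(total_donations)

community_total = 400e6  # roughly $400m donated per year
gift = 1000.0

# Marginal impact of one more $1000 on top of the community total...
marginal_added = impact(community_total + gift) - impact(community_total)
# ...versus the impact of the last $1000 already donated.
marginal_last = impact(community_total) - impact(community_total - gift)

# At this scale the two are nearly identical, so a small extra donation
# is about as effective as the last $1000 the community gave.
ratio = marginal_added / marginal_last
```

The point is that even with sharply diminishing returns overall, a donation that is tiny relative to the whole budget sits on an essentially flat part of the curve.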
Makes sense - have added a note to the list.
I agree that's better - have changed it.
One quick comment is that people who are more self-motivated can easily progress via reading books, online content, podcasts etc. - and they don't need a fellowship at all.
Besides reading material, the main extra thing they need is ways to meet suitable people in the community – after they have some connections, they'll talk about the ideas naturally with those connections.
To get these people, you mainly need to:
1. Reach them with something interesting
2. Get them subscribed to something (e.g. newsletter, social media), so you can periodically remind the... (read more)
Applied Divinity Studies and Rossa O'Keeffe-O'Donovan both pointed out that talking about a single 'bar' can sometimes be misleading.
For instance, it can often be worth supporting a startup charity that has, say, a 10% chance of being above the bar, even if the expected value is that they're below the bar. This is because funding them provides value of information about their true effectiveness.
It can also be worth supporting organisations that are only a little above the bar but might be highly scalable, since that can create more total giving opportuni... (read more)
We should keep reminding ourselves that FTX's value could easily fall by 90% in a big bear market.
Normally with the podcasts we cut the filler words in the audio. This audio was unedited, so it ended up with more filler than normal. We've just done a round of edits to reduce the filler words.
I'm not a funder myself, so I don't have a strong take on this question.
I think the biggest consideration might just be how quickly they expect to find opportunities that are above the bar. This depends on research progress, plus how quickly the community is able to create new opportunities, plus how quickly they're able to grow their grantmaking capacity.
All the normal optimal timing questions are also relevant (e.g. is now an unusually hingey time or not; the expected rate of investment returns).
The idea of waiting 10 years while you gradually build a t... (read more)
Hey, it seems like I misspoke in the talk (or there's a typo in the transcript). I think it should be "current bar of funding with global development".
I think in general new charities need to offer some combination of potential cost-effectiveness similar to or higher than AMF's, and scalability. Exactly how to weigh those two is a difficult question.
Attempt to summarise the key points on Twitter:
A hacky solution is just to bear in mind that 'movement building' often doesn't look like explicit recruitment, but could include a lot of things that look a lot like object level work.
We can then consider two questions:
This would ignore the object level value produced by the movement building efforts, but that would be fine, unless they're of comparable value.
For most interventions, either the movement building effects or the object level value is going to dominate, so we can just treat them as one or the other.
That all makes sense, thank you!
I had a similar question. I've been reading some sources arguing for strong action on climate change recently, and they tend to emphasise tipping points.
My understanding is that the probability of tipping points is also accounted for in the estimates of equilibrium climate sensitivity, and is one of the bigger reasons why the 95% confidence interval is wide.
It also seems like if ultimately the best guess relationship is linear, then the expectation is that tipping points aren't decisive (or that negative feedbacks are just as likely as positive feedbacks).
Does that seem right?
This is a useful post and updated my estimate of the chance of lots of warming (>5 degrees) downwards.
Quick question: Do you have a rough sense of how the different emission scenarios translate into concentration of CO2 in the atmosphere?
The reason I ask is that I had thought there's a pretty good chance that concentrations double compared to preindustrial, which would suggest the long-term temperature rise will be roughly 2–5°C with 95% confidence – using the latest estimate of ECS.
However, the estimates in the table are mo... (read more)
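To spell out the relationship I'm implicitly using, here's a sketch of the standard logarithmic-forcing approximation (the 280 ppm preindustrial baseline and the 2–5°C ECS range are inputs I'm assuming, not outputs):

```python
import math

# Standard approximation: equilibrium warming scales with the log of the
# CO2 concentration ratio, where ECS is the warming per doubling.
def equilibrium_warming(co2_ppm, ecs, co2_preindustrial=280.0):
    return ecs * math.log2(co2_ppm / co2_preindustrial)

# A doubling (560 ppm) gives warming equal to ECS by definition, so an
# ECS 95% interval of roughly 2-5 C maps directly onto 2-5 C of warming.
low = equilibrium_warming(560.0, ecs=2.0)
high = equilibrium_warming(560.0, ecs=5.0)
```

This is why mapping emission scenarios to concentrations matters: the warming estimate depends on where the concentration lands relative to a doubling.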
I don't mean to imply that, and I agree it probably doesn't make sense to think longtermist causes are top and then not donate to them. I was just using 10x GiveDirectly as an example of where the bar is within near termism. For longtermists, the equivalent is donating to the EA Long-term or Infrastructure Funds. Personally I'd donate to those over GiveWell-recommended charities. I've edited the post to clarify.
Would be useful to see the number of unique users over time, rather than just engagement hours.
Is the aim here to generate a bunch of PR for EA, or to actually convince Elon Musk to do more EA-aligned giving?
If the latter, I doubt trying to publicly pressure him into donating to an EA global poverty charity as part of a twitter debate is the best way to do it. (In fact, he already knows several EAs and has donated to EA orgs before.)
The 'get PR' angle (along the lines of what Fin is saying below) seems more promising – in that ideally we'd have more 'public intellectuals' focused on getting EA into the media & news cycle. This is mai... (read more)
I'd actually say there's a lot of work done on recruiting HNW donors - it's just mainly done via one-on-one meetings so not very visible.
That said, Open Philanthropy, Effective Giving, Founder's Pledge, Longview & Generation Pledge all have it as part of their mission.
There would be even more work on it, but right now the bottleneck seems to be figuring out how to spend the money we already have (we're only deploying ~$400m p.a. out of $40bn+, i.e. under 1%). If we had a larger number of big, compelling opportunities, we could likely get more mega donors interested.
It's super rough but I was thinking about jobs that college graduates take in general.
One line of thinking is based on a direct estimate:
I think that's roughly right - though some of the questions around timing donations get pretty complicated.
I was wrong about that. The next step for GiveWell would be to drop the bar a little bit (e.g. to 3-7x GiveDirectly), rather than drop all the way to GiveDirectly.
I agree there's a substantial signalling benefit.
People earning to give might well have a bigger impact via spreading EA than through their donations, but one of the best ways to spread EA is to lead by example. Making donations makes it clear you're serious about what you say.
Quick attempt to summarise:
Thanks for the article, I've added a link to our page:
I'd be curious for thoughts on when you should take more courses. The main situations that came to mind for me were: (i) you're learning something you might actually use (e.g. programming) or (ii) you want to open up extra grad school options (e.g. taking extra math courses to open up economics).