All of Benjamin_Todd's Comments + Replies

Empirical data on value drift

For people reading this post now as part of the decade review, I think this article was useful for getting people thinking about this issue, but the more comprehensive data in this later post is more useful for actually estimating the rate of dropout.

Careers Questions Open Thread

This was popular, but I'm not sure how useful people found it, and it took a lot of time. I hoped it might become an ongoing feature, but I couldn't find someone able to and willing to run it on an ongoing basis.

More empirical data on 'value drift'

These are still the best data on community dropout I'm aware of.

Why not to rush to translate effective altruism into other languages

I think the post made some important but underappreciated arguments at the time, especially for high stakes countries with more cultural differences, such as China, Russia, and Arabic speaking countries. I might have been too negative about expanding into smaller countries that are culturally closer. I think it had some influence too, since people still often ask me about it.

One aspect I wish I'd emphasised more is that it's very important to expand to new languages – my main point was that the way we should do it is by building a capable, native-language ... (read more)

New data suggests the ‘leaders’’ priorities represent the core of the community

I still think this post was making an important point: that the difference in cause views in the community was between the most highly engaged several thousand people and the more peripheral people, rather than between the 'leaders' and everyone else.

A new, cause-general career planning process

This is still our most current summary of our key advice on career planning, and I think it's useful as a short summary.

If I was writing it again today, there are a few points where it could be better synced with our updated key ideas series, and further simplified (e.g. talking about 3 career stages rather than 4). 

What actually is the argument for effective altruism?

There is still little writing about what the fundamental claims of EA actually are, or research to investigate how well they hold, or work to communicate such claims. This post is one of the few attempts, so I think it's still an important piece. I would still really like people to do further investigation into the questions it raises.

The case for reducing existential risk

I think the approach taken in this post is still good: make the case that extinction risks are neglected and too big to ignore, so that everyone should agree we should invest more in them (whether or not you're into longtermism).

It's similar to the approach taken in the Precipice, though less philosophical and longtermist.

I think it was an impactful post in that it was 80k's main piece arguing in favour of focusing more on existential risk during a period when the community seems to have significantly shifted towards focusing on those risks, and during ... (read more)

Why is Operations no longer an 80K Priority Path?

Hey there,

My impression is that the relative degree of ops bottleneck might have become worse recently (after easing a bit by early 2020), so we'll consider updating that blurb again. To double check this, we would ideally run another survey of org leaders about skill needs, and there's some chance that happens in the next year.

Another reason we dropped it is that 'work at EA orgs' is already a priority path, and this is a subpath of it. I'm not sure we should list both the broader path and a subpath within the priority paths list (e.g. I also think 'research roles at EA orgs' is a big bottleneck, but I don't want to break that out as a separate category).

3Anonymous_EA5dThanks for this! I really appreciate how carefully 80K thinks these questions through, and I have updated toward this bottleneck having gotten worse fairly recently, as you suggest. With that said, if there was an ops bottleneck in 2018 and 2019 as reflected in previous surveys of skill needs, and if the ops bottleneck is back as of now, I wonder whether early 2020 was more the exception than the rule. I don't want to rush your process. At the same time, because I perceive this as a fairly urgent bottleneck (as seems to be at least somewhat confirmed in comments by CarolineJ, Scronfinkle, and Anya Hunt), I'll just note that I hope that survey does in fact happen this year. I doubt I can be helpful with this, but feel free to DM me if I could be - for example, I can think of at least one person who might be happy to run the survey this year and would likely do a good job. Again, I appreciate that you all are extremely thoughtful about these decisions. I will offer, from my outside perspective, that it seems like the Priority Paths already do a great job of conveying the value of research skills (e.g. 5 of the 9 Priority Paths have the word "research" in the title), whereas they don't currently convey the value of operations skills. I'm not sure whether adding ops back to the Priority Paths is the best way to address this, or if there's another better option, such as simply removing the blurb about how ops skills are less needed now. But I think right now a reader of 80K would likely get the impression that ops skills are much less urgently needed than they are (see for example Eli Kaufman's comment on this post).
The Bioethicists are (Mostly) Alright

Just some quick feedback that I didn't find it very convincing to say that people like Peter Singer, Julian Savulescu, Jeff McMahan and Jeff Sebo have supported things like 1DaySooner, since they're pretty affiliated with EA and consequentialist ethics. I don't think anyone is claiming that consequentialist or EA-affiliated bioethicists have silly views. The review of randomly selected bioethics papers seems more convincing.

2Devin Kalish7dOn the one hand I agree that that piece of evidence is my least systematic and convincing. I mostly raise it because of Willy in the world asking for a bioethicist petition on challenge trials and Matt Yglesias citing the 1Day Sooner letter in claiming that bioethicists seem out of step with regular philosophers. In this context I thought it made sense to dig a little bit into the contents of the letter. On the other hand, I do think that Sebo and Singer and McMahan and Savulescu (and for that matter Jessica Flanigan and Anders Sandberg and others) should count towards the bioethicist scorecard, and if some bioethicists are consequentialist/EA-affiliated, that doesn't mean they are in some separate category, it should instead undermine some of the stereotypes.
Comments for shorter Cold Takes pieces

It's not exactly a nice conclusion.

You'd need to think something like geniuses tend to come from families with genius potential, and these families also tend to be in the top couple of percent by income.

It would line up with claims made by Gregory Clark in The Son Also Rises.

To be clear, I'm not saying I agree with these claims or think this model is the most plausible one.

3Charles Dillon 8dUnderstood, thanks. Yeah, this seems like a bit of an implausible just-so story to me.
Comments for shorter Cold Takes pieces

I was pretty struck by how per capita output isn't obviously going down, and it's only when you do the effective population estimates that it does.

Could this suggest a 4th hypothesis: the 'innate genius' theory: about 1 in 10 million people are geniuses, and at least since around 1400, talent spotting mechanisms were good enough to find them, so the fraction of the population that was educated or urbanised doesn't make a difference to their chances of doing great work. 

I think I've seen people suggest this idea - I'm curious why you didn't include it in the post.

5Charles Dillon 8dThis seems implausible to me, unless I'm misunderstanding something. Are all such geniuses pre-1900 assumed to come from the aristocratic classes? Why? If no, are there many counterexamples of geniuses in the lower classes being discovered in that time by existing talent spotting mechanisms? If yes, why would this not be the case any more post-1900, or is the claim that it is still the case?
What does the growth of EA mean for our priorities and level of ambition?

Agree it's worth trying! We're hoping to try some sponsorships at 80k, and I think there are a couple of other collaborations and attempts at sponsorship going on.

Despite billions of extra funding, small donors can still have a significant impact

Good point - seems plausible that it's a little more effective than their final $1000.

Is effective altruism growing? An update on the stock of funding vs. people

I agree I should have mentioned movement building as one of the key types of roles we need.

I did mention it in my later talk specifically about the implications:

Despite billions of extra funding, small donors can still have a significant impact

Thanks, fixed.

A Red-Team Against the Impact of Small Donations

It's hard to know – most valuations of the human capital are bound up with the available financial capital. One way to frame the question is to consider how much the community could earn if everyone tried to earn to give. I agree it's plausible that would be higher than the current income on the capital, but I think could also be a lot less.

 It's hard to know – most valuations of the human capital are bound up with the available financial capital. 

Agreed. Though I think I believe this much less now than I used to.  To be more specific, I used to believe that the primary reason direct work is valuable is because we have a lot of money to donate, so cause or intervention prioritization is incredibly valuable because of the leveraged gains. But I no longer think that's the but-for factor, and as a related update think there are many options at similar levels of compellingness as p... (read more)

A Red-Team Against the Impact of Small Donations

Thanks for red teaming – it seems like lots of people are having similar thoughts, so it’s useful to have them all in one place.

First off, I agree with this:

I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.

I say this in the introduction (and my EA Global talk). The point I’m trying to get across is that earning to give to top EA causes is still perhaps (to use made-up numbe... (read more)

5david_reinstein2moI want to 'second' some key points you made (which I was going to make myself). The main theme is that these 'absolute' thresholds are not absolute; they are simplified expressions of the true optimization problem. The real thresholds will be adjusted in light of available funding, opportunities, and beliefs about future funding. See comments (mine and others) on the misconception of 'room for more funding'... the "RFMF" idea must be either an approximate relative judgment ('past this funding, we think other opportunities may be better') or a short-term capacity constraint ('we only have staff/permits/supplies to administer 100k vaccines per year, so we'd need to do more hiring and sourcing to go above this'). Diminishing returns, but not to zero. The bar moves.
2Denkenberger2moI think this is a very useful way of putting it. I would be interested in anyone trying to actually quantify this (even to just get the right order of magnitude from the top). I suspect you have already done something in this direction when you decide what jobs to list on your job board.

One way to steelman your critique would be to push on talent vs. funding constraints. Labour and capital are complementary, but it’s plausible the community has more capital relative to labour than would be ideal, making additional capital less valuable.

I'm not sure about this, but I currently believe that the human capital in EA is worth considerably more than the financial capital.

5robirahman2moThis is highly implausible. First of all, if it's true, it implies that instead of funding things, they should just do fundraising and sit around on their piles of cash until they can discover these opportunities. But it also implies they have (in my opinion, excessively) high confidence that the hinge of history and astronomical waste arguments are wrong, and that transformative AI is farther away than most forecasters believe. If someone is going to invent AGI in 2060, we're really limited in the amount of time available to alter the probabilities that it goes well vs badly for humanity. When you're working on global poverty, perhaps you'd want to hold off on donations if your investments are growing by 7% per year while GDP of the poorest countries is only growing by 2%, because you could have something like 5% more impact by giving 107 bednets next year instead of 100 bednets today. For x-risks this seems totally implausible. What's the justification for waiting? AGI alignment does not become 10x more tractable over the span of a few years. Private sector AI R&D has been growing by 27% per year since 2015, and I really don't think alignment progress has outpaced that. If time until AGI is limited and short then we're actively falling behind. I don't think their investments or effectiveness are increasing fast enough for this explanation to make sense.
Despite billions of extra funding, small donors can still have a significant impact

There isn't a hard cutoff, but one relevant boundary is when you can ignore the other issue for practical purposes. At 10-100x differences, other factors like personal fit or finding an unusually good opportunity can offset differences in cause effectiveness. At, say, 10,000x, they can't.

Sometimes people also suggest that e.g. existential risk reduction is 'astronomically' more effective than other causes (e.g. 10^10 times), but I don't agree with that for a lot of reasons.

1JeremyR2moGot it - thanks for taking the time to respond!
Despite billions of extra funding, small donors can still have a significant impact

That's fair - the issue is there's a countervailing force in that OP might just fill 100% of their budget themselves if it seems valuable enough. My overall guess is that you probably get less than 1:1 leverage most of the time.

Despite billions of extra funding, small donors can still have a significant impact

I think this dynamic has sometimes applied in the past.

However, Open Philanthropy are now often providing 66%, and sometimes 100%, so I didn't want to mention this as a significant benefit.

There might still be some leverage in some cases, but less than 1:1. Overall, I think a clearer way to think about this is in terms of the value of having a diversified donor base, which I mention in the final section.

6Neel Nanda2moIf they have a rule of providing 66% of a charity's budget, surely donations are even more leveraged? $1 to the charity unlocks $2. Of course, this assumes that additional small donations to the charity will counter-factually unlock further donations from OpenPhil, which is making some strong assumptions about their decision-making
AI Safety Needs Great Engineers

+1 to this!

If you're a software engineer considering transitioning into AI Safety, we have a guide about how to do it, and an attached podcast interview.

There are also many other ways software engineers can use their skills for direct impact, including in biosecurity, by transitioning into information security, by building systems at EA orgs, or in various parts of government.

To get more ideas, we have 180+ engineering positions on our job board.

Despite billions of extra funding, small donors can still have a significant impact

There are no sharp cutoffs - just gradually diminishing returns.

An org can pretty much always find a way to spend 1% more money and have a bit more impact. And even if an individual org appears to have a sharp cutoff, we should really be thinking about the margin across the whole community, which will be smooth. Since the total donated per year is ~$400m, adding $1000 to that will be about as effective as the last $1000 donated.


You seem to be suggesting that Open Phil might be overfunding orgs so that their marginal dollars are not actually... (read more)

4MichaelStJules2moThe marginal impact can be much smaller, but this depends on the particulars. I think hiring is the most important example, especially in cases where salaries make up almost all of the costs of the organization. Suppose a research organization hired everyone they thought was worth hiring at all (with their current management capacity as a barrier, or based on producing more than they cost managers, or based on whether they will set the org in a worse direction, etc.). Or, the difference between their last hire and their next hire could also be large. How would they spend an extra 1% similarly cost-effectively? I think you should expect a big drop in marginal cost-effectiveness here. Maybe in many cases there are part-time workers you can get more hours from by paying them more. I think my hiring example could generalize to cause areas where the output is primarily research and the costs are primarily income. E.g., everyone we'd identify to do more good than harm in AI safety research in expectation could already be funded (although maybe they could continue to use more compute cost-effectively?). The same could be true for grantmakers. Maybe we can just always hire more people who aren't counterproductive in expectation, and the drop is just steep, and that's fine since the stakes are astronomical. I agree with this for global health and poverty, but I expect the drop in cost-effectiveness to be much worse in the other big EA cause areas and especially in organizations where the vast majority of spending is on salaries.
Despite billions of extra funding, small donors can still have a significant impact

Yes, my main attempt to discuss the implications of the extra funding is in the Is EA growing? post and my talk at EAG. This post was aimed at a specific misunderstanding that seems to have come up. Though, those posts weren't angsty either.

5HowieL2moLinking to make it easier for anybody who wants to check these out: Is effective altruism growing? An update on the stock of funding vs. people. What does the growth of EA mean for our priorities and level of ambition? [talk transcript]
Despite billions of extra funding, small donors can still have a significant impact

This is the problem with the idea of 'room for funding'. There is no single amount of funding a charity 'needs'. In reality there's just a diminishing return curve. Additional donations tend to have a little less impact, but this effect is very small when we're talking about donations that are small relative to the charity's budget (if there's only one charity you want to support), or small relative to the EA community as a whole if you take a community perspective.

We need alternatives to Intro EA Fellowships

One quick comment is that people who are more self-motivated can easily progress via reading books, online content, podcasts etc. - and they don't need a fellowship at all.

Besides reading material, the main extra thing they need is ways to meet suitable people in the community – after they have some connections, they'll talk about the ideas naturally with those connections.

To get these people, you mainly need to:

1. Reach them with something interesting

2. Get them subscribed to something (e.g. newsletter, social media), so you can periodically remind the... (read more)

9Ashley Lin2moThanks Ben! Agreed that readings / connections are some of the most important things needed to capture the most talented and proactive people. That said, it seems like even the most “self-motivated” people get distracted in the college environment, where there are so many competing things to learn and student groups to be part of. As a result, I think slightly more structure is needed to get these people: * For #2, instead of just getting folks subscribed to a newsletter, I like the idea of informal group chats and Discords that hold self-motivated people in asynchronous discussion spaces as they explore on their own. * For #4, I think these could be bucketed into “opportunities” and expanded a lot more (1-on-1s with EA leaders/professionals, invitations to retreats/EAG, invite-only socials, internship/fellowship opportunities, etc). Would love to see what a top of funnel program actually designed for the most talented and proactive students looks like though.
2ChanaMessinger2moThe subscription seems like a really exciting point here, since the tabling post made me think that it's possible to get lots of people on your mailing list. Maybe putting all those people in a Facebook group or discord and seeing if that can be made consistently active, which gives low-cost ways to discuss that can also be scaled up to channels to talk about more in depth stuff, allows people who can't make it to the meetings to come, is an easy way of disseminating resources, etc.
What does the growth of EA mean for our priorities and level of ambition?

Applied Divinity Studies and Rossa O'Keeffe-O'Donovan both pointed out that talking about a single 'bar' can sometimes be misleading.

For instance, it can often be worth supporting a startup charity that has, say, a 10% chance of being above the bar, even if the expected value is that they're below the bar. This is because funding them provides value of information about their true effectiveness.

It can also be worth supporting organisations that are only a little above the bar but might be highly scalable, since that can create more total giving opportuni... (read more)

A Model of Patient Spending and Movement Building

We should keep reminding ourselves that FTX's value could easily fall by 90% in a big bear market.

What does the growth of EA mean for our priorities and level of ambition?

Normally with the podcasts we cut the filler words in the audio. This audio was unedited so ended up with more filler than normal. We've just done a round of edits to reduce the filler words.

What does the growth of EA mean for our priorities and level of ambition?

I'm not a funder myself, so I don't have a strong take on this question.

I think the biggest consideration might just be how quickly they expect to find opportunities that are above the bar. This depends on research progress, plus how quickly the community is able to create new opportunities, plus how quickly they're able to grow their grantmaking capacity.

All the normal optimal timing questions are also relevant (e.g. is now an unusually hingey time or not; the expected rate of investment returns).

The idea of waiting 10 years while you gradually build a t... (read more)

What does the growth of EA mean for our priorities and level of ambition?

Hey, it seems like I misspoke in the talk (or there's a typo in the transcript). I think it should be "current bar of funding with global development".

I think in general new charities need to offer some combination of the potential for higher or similar cost-effectiveness to AMF, and scalability. Exactly how to weigh those two is a difficult question.

1AppliedDivinityStudies2moMakes sense, thanks!
4NunoSempere2moHey Ben, see this comment - I think that this post originally did not make it clear that the constant size point does depend on empirical/reasonable model assumptions.
A Model of Patient Spending and Movement Building

A hacky solution is just to bear in mind that 'movement building' often doesn't look like explicit recruitment, but could include a lot of things that look like object level work.

We can then consider two questions:

  • What's the ideal fraction to invest in movement building?
  • What are the highest-return movement building efforts? (where that might look like object-level work)

This would ignore the object level value produced by the movement building efforts, but that would be fine, unless they're of comparable value.

For most interventions, either the movement building effects or the object level value is going to dominate, so we can just treat them as one or the other.

Good news on climate change

That all makes sense, thank you!

Good news on climate change

I had a similar question. I've been reading some sources arguing for strong action on climate change recently, and they tend to emphasise tipping points.

My understanding is that the probability of tipping points is also accounted for in estimates of equilibrium climate sensitivity, and is one of the bigger reasons why the 95% confidence interval is wide.

It also seems like if ultimately the best guess relationship is linear, then the expectation is that tipping points aren't decisive (or that negative feedbacks are just as likely as positive feedbacks).

Does that seem right?

6John G. Halstead2moThe new models account for potential feedbacks from permafrost carbon. I'm also not especially worried about that feedback or the one from methane clathrates. The world was about 4 degrees warmer a few million years ago, and we didn't get a rapid carbon input from these sources. And the models and basic physics suggest that these would be slow acting multi-centennial scale feedbacks. The Sherwood et al (2020) paper accounts for evidence from the paleoclimate which should in principle pick up some tipping points from the past, though what we are doing now is not a perfect analogue for past climate change in various ways, and paleoclimate proxies are imperfect. Our confidence in the linear relationship between cumulative emissions and warming is lower the higher emissions get. The IPCC is less sure it holds once we get past 1,000 billion tonnes of carbon (on top of the 650 billion tonnes we have already emitted). The Sherwood et al (2020) paper only estimates ECS for up to two doublings of CO2 concentrations, so 1,100ppm. Beyond that, we have less of a clue, especially as CO2 concentrations wouldn't have been that high for tens of millions of years. I am worried about feedbacks if emissions do get that high. Imo, the most worrying thing about climate change is the potential for unexpected surprises, especially from cloud feedbacks. That is the first time a fast feedback has shown up in the models. But that is something we reach when we get to 1,300ppm, which is probably several centuries away. There is some stuff from the planetary boundaries people arguing that we are on the brink of massive and disastrous tipping points even at 2 degrees, eg this widely cited paper from 2018. That paper fits the planetary boundaries pattern of arguing that there is a potentially significant environmental tipping point close by, on the basis of limited or non-exist
Good news on climate change

This is a useful post and updated my estimate of the chance of lots of warming (>5 degrees) downwards.


Quick question: Do you have a rough sense of how the different emission scenarios translate into concentration of CO2 in the atmosphere?


The reason I ask is that I had thought there's a pretty good chance that concentrations double compared to preindustrial, which would suggest the long-term temperature rise will be roughly 2-5 degrees centigrade with 95% confidence – using the latest estimate of ECS.


However, the estimates in the table are mo... (read more)

4John G. Halstead2moHi Ben, CO2 concentrations on the different shared socioeconomic pathways are shown in Table 5 here. On the most likely scenario - RCP4.5 - CO2 concentrations would double relative to pre-industrial by around 2060. I think this comes down to the difference between the transient climate response to cumulative emissions and equilibrium climate sensitivity. On the assumption that CO2 concentrations stabilise, ECS tells you the warming you get eventually once the climate system has reached equilibrium (not including ice sheet feedbacks). If CO2 concentrations stabilise, then it would take decades to centuries for the system to reach equilibrium, whereas the warming figures in the table are the warming you get at 2100. I have wrestled with trying to convert things to CO2 concentrations and then trying to infer warming from ECS, but it is unnecessary. CO2 concentrations will not stabilise, so the system will never truly be in equilibrium. The TCRE is much more informative. How cumulative emissions translate into CO2 concentrations is model-dependent. 1 ppm of atmospheric CO2 is equivalent to 2.13 gigatonnes of airborne carbon. However, the amount of carbon that we burn that remains in the atmosphere (the airborne fraction) changes with emissions - the airborne fraction increases the more we emit because land and ocean carbon sinks get exhausted, which you can see in the doughnut charts below.
What's the role of donations now that the EA movement is richer than ever?

I don't mean to imply that, and I agree it probably doesn't make sense to think longtermist causes are top and then not donate to them. I was just using 10x GiveDirectly as an example of where the bar is within near termism. For longtermists, the equivalent is donating to the EA Long-term or Infrastructure Funds. Personally I'd donate to those over GiveWell-recommended charities. I've edited the post to clarify.

1Gage Weston2moOk, thank you for the clarification - I think that makes sense
EA Forum engagement doubled in the last year

Would be useful to see the number of unique users over time, rather than just engagement hours.

4Aaron Gertler2moI've added a chart to the post.
Can EA leverage an Elon-vs-world-hunger news cycle?

Is the aim here to generate a bunch of PR for EA, or to actually convince Elon Musk to do more EA-aligned giving?

If the latter, I doubt trying to publicly pressure him into donating to an EA global poverty charity as part of a twitter debate is the best way to do it. (In fact, he already knows several EAs and has donated to EA orgs before.)


The 'get PR' angle (along the lines of what Fin is saying below) seems more promising – in that ideally we'd have more 'public intellectuals' focused on getting EA into the media & news cycle. This is mai... (read more)

5Nathan Young3moI suggest the aim should be PR. Do you have any thoughts on the route to being a public intellectual?
4MaxRa3moI understood it like this: Elon wants to embarrass the bad take on CNN, but he’s actually not averse to donating the money to do something awesomely good (and attention grabbing). If there’s a way to spin a story where he is using the money in a superior and big way, such that it will maybe end up as one of the great things he did in his life, I wouldn’t be shocked if it happens. Without any pressure.
Can EA leverage an Elon-vs-world-hunger news cycle?

I'd actually say there's a lot of work done on recruiting HNW donors - it's just mainly done via one-on-one meetings so not very visible.

That said, Open Philanthropy, Effective Giving, Founders Pledge, Longview & Generation Pledge all have it as part of their mission.

There would be even more work on it, but right now the bottleneck seems to be figuring out how to spend the money we already have (we're only deploying ~$400m p.a. out of over $40bn - under 1%). If we had a larger number of big, compelling opportunities, we could likely get more mega donors interested.

What's the role of donations now that the EA movement is richer than ever?

It's super rough but I was thinking about jobs that college graduates take in general.

One line of thinking is based on a direct estimate:

  • Average college grad income ~$80k, so 20% donations = $16k per year
  • Mean global income is ~$18k vs. GiveDirectly recipients at $500
  • So $1 to GiveDirectly creates value equivalent to increasing global income by ~$30
  • So that's ~$500k per year equivalent
  • My impression is very few jobs add this much to world income (e.g. here's one piece of reading about this). Maybe just people who are both highly paid and do something with a lot
... (read more)
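The bullets above can be sketched as a quick calculation. This is only a back-of-envelope reconstruction using the comment's own round numbers; the log-utility multiplier is my assumption about the implied reasoning (the comment just rounds it to ~$30 of global-income value per $1):

```python
# Rough sketch of the earning-to-give estimate above.
# All inputs are the made-up round figures from the comment, not real data.

avg_grad_income = 80_000      # average college graduate income, $/yr
donation_rate = 0.20          # fraction of income donated
mean_global_income = 18_000   # rough mean global income, $/yr
recipient_income = 500        # typical GiveDirectly recipient income, $/yr

annual_donations = avg_grad_income * donation_rate  # ~$16k/yr donated

# Assuming roughly logarithmic utility of income, $1 to someone living on
# $500 is worth about (mean_global_income / recipient_income) times as much
# as $1 of average global income. The comment rounds this to ~$30 per $1.
multiplier = mean_global_income / recipient_income  # 36x before rounding

equivalent_income_gain = annual_donations * multiplier
print(f"~${equivalent_income_gain:,.0f}/yr of global income equivalent")
```

With the comment's rounded ~$30 multiplier this comes out at roughly $500k per year, matching the figure above.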
What's the role of donations now that the EA movement is richer than ever?

I think that's roughly right - though some of the questions around timing donations get pretty complicated.

What's the role of donations now that the EA movement is richer than ever?

I was wrong about that. The next step for GiveWell would be to drop the bar a little bit (e.g. to 3-7x GiveDirectly), rather than drop all the way to GiveDirectly.

What's the role of donations now that the EA movement is richer than ever?

I agree there's a substantial signalling benefit.

People earning to give might well have a bigger impact via spreading EA than through their donations, but one of the best ways to spread EA is to lead by example. Making donations makes it clear you're serious about what you say.

What's the role of donations now that the EA movement is richer than ever?

Quick attempt to summarise:

  1. Earning to give is still impactful – probably more impactful than 98%+ of jobs. The current funding bar in e.g. global health, set by GiveWell, is about 10x GiveDirectly, and so marginal donations still have about that level of impact. In longtermism, the equivalent bar is harder to quantify, but you can look at recent examples of what's been funded by the EA Infrastructure and Long Term Funds (the equivalent of GiveDirectly is something like green energy R&D or scaling up a big disease monitoring program). Small donors can probabl
... (read more)
2Gage Weston2moI'm curious why you and many EAs who focus on longtermism don't suggest donating to longtermist cause areas (examples often focus on GiveWell or ACE charities). It seems like if orgs I respect like Open Phil and the Long-Term Future Fund are giving to longtermist areas, then they think that's among the most important things to fund, which confuses me when I then hear longtermists acting like funding is useless on the margin or that we might as well give to GiveWell charities. It gives me a sense that perhaps there's either some contradiction going on, or I'm missing something, but either way it makes it very difficult for me to get others excited about longtermism if they won't enter it with their career and even the die-hard longtermists are saying marginal funding is useless or at least worse than GiveWell charities.
2RichardAnnilo3moI am also curious to understand why you think that earning to give is more impactful than 98%+ of jobs. Also, did you mean 98% of EA-aligned jobs or all jobs?
1RichardAnnilo3moThanks for the answer. Just to make sure I understand #1. You're saying that if I donated 1000€ to GiveWell right now, my donation would be expected to have 10 times as much impact as a donation to GiveDirectly? However, in the coming years that might change to 5x or 2x?
6keller_scholl3moI was parsing your comment here as saying that the marginal impact of a GiveWell donation was pretty close to GiveDirectly. Here it seems like you don't endorse that interpretation?
Many Undergrads Should Take Light Courseloads

Thanks for the article - I've added a link to our page.


I'd be curious for your thoughts on when you should take more courses. The main situations that came to mind for me were: (i) you're learning something you might actually use (e.g. programming), or (ii) you want to open up extra grad school options (e.g. taking extra math courses to open up economics).

1Mauricio3moThanks! My initial guess is those are situations when it's good to take specific / high-workload classes--but I'm not sure they're always situations when it's good to take more classes (since people can sometimes take those kinds of classes as parts of meeting their graduation requirements).
Load More