richard_ngo's Shortform

by richard_ngo, 13th Jun 2020, 35 comments

I'm leaning towards the view that "don't follow your passion" and "try to do really high-leverage intellectual work" are both good pieces of advice in isolation, but that they work badly in combination. I suspect that there are very few people doing world-class research who aren't deeply passionate about it, and also that EA needs world-class research in more fields than it may often seem.

Another related thing that isn't discussed enough is the immense difficulty of actually doing good research, especially in a pre-paradigmatic field. I've personally struggled to transition from an engineer mindset, where you're just trying to build a thing that works (and you'll know when it does), to a scientist mindset, where you need to understand the complex ways in which many different variables affect your results.

This isn't to say that only geniuses make important advances, though - hard work and persistence go a long way. As a corollary, if you're in a field where hard work doesn't feel like work, then you have a huge advantage. And it's also good for building a healthy EA community if even people who don't manage to have a big impact are still excited about their careers. So that's why I personally place a fairly high emphasis on passion when giving career advice (unless I'm talking to someone with exceptional focus and determination).

Then there's the question of how many fields it's actually important to have good research in. Broadly speaking, my perspective is: we care about the future; the future is going to be influenced by a lot of components; and so it's important to understand as many of those components as we can. Do we need longtermist sociologists? Hell yes! Then we can better understand how value drift might happen, and what to do about it. Longtermist historians to figure out how power structures will work, longtermist artists to inspire people - as many as we can get. Longtermist physicists - Anders can't figure out how to colonise the galaxy by himself.

If you're excited about something that poses a more concrete existential risk, then I'd still advise that as a priority. But my guess is that there's also a lot of low-hanging fruit for would-be futurists in other disciplines.

What is the strongest argument, or the best existing analysis, that Givewell top charities actually do more good per dollar than good mainstream charities focusing on big-picture issues (e.g. a typical climate change charity, or the US Democratic party)?

If the answer is "no compelling case has been made", then does the typical person who hears about and donates to Givewell top charities via EA understand that?

If the case hasn't been made [edit: by which I mean, if the arguments that have been made are not compelling enough to justify the claims being made], and most donors don't understand that, then the way EAs talk about those charities is actively misleading, and we should apologise and try hard to fix that.

I think the strongest high-level argument for Givewell charities vs. most developed-world charity is the 100x multiplier.
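To sketch the arithmetic behind that multiplier (this assumes log utility of income, and the income figures below are round hypothetical numbers, not GiveWell's actual estimates):

```python
# Toy model of the 100x multiplier. Assumption: utility is logarithmic in
# income, so the marginal value of an extra dollar is proportional to
# 1/income. Incomes are round hypothetical figures for illustration only.

def marginal_value_per_dollar(income: float) -> float:
    """Marginal utility of an extra dollar under log utility: d/dx log(x) = 1/x."""
    return 1.0 / income

rich_income = 30_000   # hypothetical rich-country annual income
poor_income = 300      # hypothetical income of the global poorest

multiplier = marginal_value_per_dollar(poor_income) / marginal_value_per_dollar(rich_income)
print(multiplier)  # ~100: a dollar goes ~100x further at 1/100th the income
```

Under these assumptions the multiplier is just the income ratio, which is where the "100x" headline number comes from.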

That's a strong reason to suspect that the best opportunities to improve the lives of currently-living people lie in the developing world, but it's not decisive, and so analyses have been done, particularly of 'fan-favourite' causes like the ones you mention.

I'd also note that both the examples you gave are not what I would consider 'mainstream charity'; both have prima facie plausible paths to high leverage (even if 100x feels a stretch), and if I had to guess right now, my gut instinct is that both are in the top 25% for effectiveness. 'Mainstream charity' in my mind looks more like 'your local church', 'the arts', or 'your local homeless shelter'. There's some quantified insight into what people in the UK actually give to here.

At any rate, climate change has had a few of these analyses over the years. Off the top of my head, here's a recent one on the forum looking at the area in general; there's also an older, more specific analysis of Cool Earth by GWWC, which, after running through a bunch of numbers, concludes:

Even with the most generous assumptions possible, this is still at least one order of magnitude greater than the cost of saving a life through donations to highly effective health charities such as the Against Malaria Foundation (at $3,461).

As for other areas, Givewell, (in?)famously, used to recommend charities in US education, but stopped after deciding their estimated effectiveness didn't stack up to what they could achieve in the Global Health/Poverty space. 

I don't have anything to hand for the US Democratic party, but lots of people talked in various places about donations directed at helping the Clinton campaign in 2016 and then the Biden campaign in 2020, so I'd start there. 80k's thoughts on the value of a vote would be a starting point.

If the case hasn't been made

On a different note, I'm somewhere between bemused and disappointed that you could think this is a possibility, especially for causes which many EAs were very positively disposed towards prior to their involvement with EA (and in the case of climate change, a large number remain so!). To be clear, I'm mostly disappointed in the movement's ability to propagate information forward, and in the fact that such analysis has apparently become so rare that you might think such common questions have never been looked at. Also to be clear, it could well be that the cases made are wrong, and I'd be happy to see refutations; but suggesting they haven't been made is quite a bit stronger: it suggests a wilful blind spot, and I'm reminded of this SSC post.

It's plausible to me that there's something near-term we simply haven't done an analysis of, and should have done, and if we did it would look very strong, but if so I'd strongly expect to be in an area that does not naturally occur to EA's core demographic, rather than in areas and ideas that naturally occur to that group.

Hey Alex, thanks for the response! To clarify, I didn't mean to ask whether no case has been made, or imply that they've "never been looked at", but rather ask whether a compelling case has been made - which I interpret as arguments which seem strong enough to justify the claims made about Givewell charities, as understood by the donors influenced by EA.

I think that the 100x multiplier is a powerful intuition, but that there's a similarly powerful intuition going the other way: that wealthy countries are many times more influential than developing countries (e.g. as measured in technological progress), which is reason to think that interventions in wealthy countries can do comparable amounts of good overall.

On the specific links you gave: the one on climate change (Global development interventions are generally more effective than climate change interventions) starts as follows:

Previously titled “Climate change interventions are generally more effective than global development interventions”.  Because of an error the conclusions have significantly changed. I have extended the analysis and now provide a more detailed spreadsheet model below. In the comments below, Benjamin_Todd uses a different guesstimate model and found the climate change came out ~80x better than global health (even though the point estimate found that global health is better).

I haven't read the full thing, but based on this, it seems like there's still a lot of uncertainty about the overall conclusion reached, even when the model is focused on direct quantifiable effects rather than broader effects like movement-building, etc. Meanwhile, the 80k article says that the question of "when political campaigns are the best use of someone’s charitable giving is beyond the scope of this article". I appreciate that there's more work on these questions which might make the case much more strongly. But given that Givewell is moving over $100M a year from a wide range of people, and that one of the most common criticisms EA receives is that it doesn't account enough for systemic change, my overall expectation is still that EA's case against donating to mainstream systemic-change interventions is not strong enough to justify the set of claims that people understand us to be making.

I suspect that our disagreement might be less about what research exists, and more about what standard to apply for justification. Some reasons I think that we should have a pretty high threshold for thinking that claims about Givewell top charities doing the most good are justified:

  1. If we think of EA as an ethical claim (you should care about doing a lot of good) and an empirical claim (if you care about that, then listening to us increases your ability to do so) then the empirical claim should be evaluated against the donations made by people who want to do a lot of good, but aren't familiar with EA. My guess is that climate change and politics are fairly central examples of such donations.
  2. (As mentioned in a reply to Denise): "Doing the most good per dollar" and "doing the most good that can be verified using a certain class of methodologies" can be very different claims. And the more different that class of methodologies is from most people's intuitive conception of how to evaluate things, the more important it is to clarify that point. Yet it seems like the types of evidence that we have for these charities are very different from the types of evidence that most people rely on to form judgements about e.g. how good it would be if a given political party got elected, which often rely on effects that are much harder to quantify.
  3. Givewell charities are still (I think) the main way that most outsiders perceive EA. We're now a sizeable movement with many full-time researchers. So I expect that outsiders overestimate how much research backs up the claims they hear about doing the most good per dollar, especially with respect to the comparisons I mentioned. I expect they also underestimate the level of internal disagreement within EA about how much good these charities do.
  4. EA funds a lot of internal movement-building that is hard to quantify. So when our evaluations of other causes exclude factors that we consider important when funding ourselves, we should be very careful.

I didn't mean to ask whether no case has been made, or imply that they've "never been looked at", but rather ask whether a compelling case has been made

I'm not quite sure what you're trying to get at here. In some trivial sense we can see that many people were compelled, hence I didn't bother to distinguish between 'case' and 'compelling case'. I wonder whether by 'compelling case' you really mean 'case I would find convincing'? In which case, I don't know whether that case was ever made. I'd be happy to chat more offline and try to compel you :)

there's a similarly powerful intuition going the other way: that wealthy countries are many times more influential than developing countries

I don't think this intuition is similarly powerful at all, but more importantly I don't think it 'goes the other way', or perhaps I don't understand what you mean by that phrase. Concretely, if we treat GDP-per-capita as a proxy for influentialness-per-person (not perfect, but seems like the right ballpark), and assume that how much we can influence people with $x also scales linearly with GDP-per-capita (i.e. it takes Y months' wages to influence people by amount Z), that would suggest that interventions aimed at influencing worldwide events have comparable impact anywhere, rather than actively favouring developed countries by anything like the 100x margin.
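To make that toy model concrete (the function and coefficient names here are my own illustration, and the GDP figures are hypothetical):

```python
# Toy version of the cancellation argument: if influence-per-person scales
# linearly with GDP per capita, and the cost of influencing one person also
# scales linearly with GDP per capita, the two factors cancel, so influence
# bought per dollar is the same everywhere.

def influence_per_dollar(gdp_per_capita: float,
                         influence_coeff: float = 1.0,
                         cost_coeff: float = 1.0) -> float:
    influence_per_person = influence_coeff * gdp_per_capita
    cost_to_influence_one_person = cost_coeff * gdp_per_capita
    return influence_per_person / cost_to_influence_one_person

# Same answer for a rich and a poor country under these assumptions:
print(influence_per_dollar(60_000))  # 1.0
print(influence_per_dollar(600))     # 1.0
```

That is, under these linear-scaling assumptions the model predicts rough parity, not a 100x advantage for influencing developed countries.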

I suspect that our disagreement might be less about what research exists,  and more about what standard to apply for justification. 

I agree. I think the appropriate standard is basically the 'do you buy your own bullshit' standard. I.e. if I am donating to Givewell charities over climate change (CC)  charities, that is very likely revealing that I truly think those opportunities are better all things considered, not just better according to some narrow criteria. At that point, I could be just plain wrong in expressing that opinion to others, but I'm not being dishonest. By contrast, if I give to CC charities over Givewell charities, I largely don't think I should evangelise on behalf of Givewell charities, regardless of whether they score better on some specific criteria, unless I am very confident that the person I am talking to cares about those specific criteria (even then I'd want to add 'I don't support this personally' caveats).

My impression is that EA broadly meets this standard, and I would be disappointed to hear of a case where an individual or group had pushed Givewell charities while having no interest in them for their personal or group-influenced donations.

the empirical claim should be evaluated against the donations made by people who want to do a lot of good, but aren't familiar with EA. My guess is that climate change and politics are fairly central examples of such donations.

I'm happy to evaluate against these examples regardless, but (a) I doubt these are central, though not with high confidence, and would be happy to see data; and (b) I'm not sure evaluating against typical-for-that-group donations makes a whole lot of sense when, for most people, donations are a sideshow in their altruistic endeavours. The counterfactual where I don't get involved with EA doesn't look like me donating to climate change instead; it looks like me becoming a teacher rather than a trader and simply earning far less, or becoming a trader and retiring at 30 followed by doing volunteer work. On a quick scan of my relatively-altruistic non-EA friends (who skew economically-privileged and very highly educated, so YMMV), doing good in this kind of direct-but-local way looks like a far more typical approach than making large (say >5% of income) donations to favoured non-EA areas.

Givewell charities are still (I think) the main way that most outsiders perceive EA. 

Communicating the fact that many core EA organisations have a firmly longtermist focus is something I am strongly in favour of. 80k has been doing a ton of work here to try and shift perceptions of what EA is about. 

That said, in this venue I think it's easy to overestimate the disconnect. 80k/CEA/EA forum/etc. are only one part of the movement, and heavily skew longtermist relative to the whole. Put plainly, in the event that outsiders perceive EA heavily through the lens of Givewell charities because most self-identified EAs are donating and their donations mostly go to Givewell charities, that seems fine, in the sense that perceptions match reality, regardless of what us oddballs are doing. In the event that outsiders perceive this because this used to be the case but is no longer, and there's a lag, then I'm in favour of doing things to try and reduce the lag, example in previous paragraph.

After chatting with Alex Gordon-Brown, I updated significantly towards his position (as laid out in his comments below). Many thanks to him for taking the time to talk; I've done my best to accurately represent the conversation, but there may be mistakes. All of the following are conditional on focusing on near-term, human-centric charities.

Three key things I changed my mind on:

  1. I had mentally characterised EA as starting with Givewell-style reasoning, and then moving on to less quantifiable things. Whereas Alex (who was around at the time) pointed out that there were originally significant disagreements between EAs and Givewell, in particular with EAs arguing for less quantifiable approaches. EA and Givewell then ended up converging more over time, both as EAs found that it was surprisingly hard to beat Givewell charities even allowing for less rigorous analysis, and also as people at Givewell (e.g. the ones now running OpenPhil) became more convinced in less-quantifiable EA methodologies.
    1. Insofar as the wider world has the impression of EA as synonymous with Givewell-style reasoning, a lot of that comes from media reports focusing on it in ways we weren't responsible for.
    2. Alex claims that Doing Good Better, which also leans in this direction, wasn't fully representative of the beliefs of core EAs at the time it was published.
  2. Alex says that OpenPhil has found Givewell charities surprisingly hard to beat, and that this (along with other EA knowledge and arguments, such as the 100x multiplier) is sufficient to make a "compelling case" for them.
    1. Alex acknowledges that not many people who recommend Givewell are doing so because of this evidence; in some sense, it's a "happy coincidence" that the thing people were already recommending has been vindicated. But he thinks that there are enough careful EAs who pay attention to OpenPhil's reports that, if their conclusions had been the opposite, I would have heard people publicly making this case.
  3. Alex argues that I'm overly steelmanning the criticism that EA has received. EA spent a lot of time responding to criticisms that it's impossible to know that any charities are doing a lot of good (e.g. because of potential corruption, and so on), and criticisms that we should care more about people near us, and so on. Even when it came to "systemic change" critiques, these usually weren't principled critiques about the importance of systemic change in general, but rather just "you should focus on my personal cause", in particular highly politicised causes.

Alex also notes that the Givewell headline claim "We search for the charities that save or improve lives the most per dollar" is relatively new (here's an earlier version) and has already received criticism.

Things I haven't changed my mind about:

  1. I still think that most individual EAs should be much more careful in recommending Givewell charities. OpenPhil's conclusions are based primarily on (in their words) "back-of-the-envelope calculations", the details of which we don't know. I think that, even if this is enough to satisfy people who trust OpenPhil's researchers and their methodologies, it's far less legible and rigorous than most people who hear about EA endorsement of Givewell charities would expect. Indeed, they still conclude that (in expectation) their hits-based portfolio will moderately outperform Givewell.
  2. OpenPhil's claims are personally not enough to satisfy me. I think by default I won't endorse Givewell charities. Instead I'll abstain from having an opinion on what the best near-term human-centric charities are, and push for something more longtermist like pandemic prevention as a "default" outreach cause area instead. But I also don't think it's unreasonable for other people to endorse Givewell charities under the EA name.
  3. I still think that the 100x multiplier argument is (roughly) cancelled out by the multiplier going the other way, of wealthy countries having at least 100x more influence over the world. So, while it's still a good argument for trying to help the poorest people, it doesn't seem like a compelling argument for trying to help the poorest people via direct interventions in poor countries.

Overall lessons: I overestimated the extent to which my bubble was representative of EA, and also the extent to which I understood the history of EA accurately.

Alex and I finished by briefly discussing AI safety, where I'm quite concerned about a lack of justification for many of the claims EAs make. I'm hoping to address this more elsewhere.

Thanks for the write-up. A few quick additional thoughts on my end:

  • You note that OpenPhil still expect their hits-based portfolio to moderately outperform Givewell in expectation. This is my understanding also, but one slight difference of interpretation is that it leaves me very baseline skeptical that most 'systemic change' charities people suggest would also outperform, given the amount of time Open Phil has put into this question relative to the average donor. 
  • I think it's possible-to-likely I'm mirroring your 'overestimating how representative my bubble was' mistake, despite having explicitly flagged this type of error before because it's so common. In particular, many (most?) EAs first encounter the community at university, whereas my first encounter was after university, and it wouldn't shock me if student groups were making more strident/overconfident claims than I remember in my own circles. On reflection I now have anecdotal evidence of this from 3 different groups.
  • Abstaining on the 'what is the best near-term human-centric charity' question, and focusing on talking about the things that actually appear to you to be among the best options, is a response I strongly support. I really wish more longtermists took this approach, and I also wish EAs in general would use 'we' less and 'I' more when talking about what they think about optimal opportunities to do good.

it leaves me very baseline skeptical that most 'systemic change' charities people suggest would also outperform, given the amount of time Open Phil has put into this question relative to the average donor. 

I have now read OpenPhil's sample of the back-of-the-envelope calculations on which their conclusion that it's hard to beat GiveWell was based. They were much rougher than I expected. Most of them are literally just an estimate of the direct benefits and costs, with no accounting for second-order benefits or harms, movement-building effects, political effects, etc. For example, the harm of a year of jail time is calculated as 0.5 QALYs plus the financial cost to the government - nothing about long-term effects of spending time in jail, or effects on subsequent crime rates, or community effects. I'm not saying that OpenPhil should have included these effects; they are clear that these are only intended as very rough estimates. But it means that I now don't think it's justified to treat this blog post as strong evidence in favour of GiveWell.

Here's just a basic (low-confidence) case for the cost-efficacy of political advocacy: governmental policies can have enormous effects, even when they attract little mainstream attention (e.g. PEPFAR). But actually campaigning for a specific policy is often only the last step in the long chain of getting the cause into the Overton Window, building a movement, nurturing relationships with politicians, identifying tractable targets, and so on, all of which are very hard to measure, and which wouldn't show up at all in these calculations by OpenPhil. Given this, what evidence is there that funding these steps wouldn't outperform GiveWell for many policies?

(See also Scott Alexander's rough calculations on the effects of FDA regulations, which I'm not very confident in, but which have always stuck in my head as an argument that dull-sounding policies might have wildly large impacts.)

Your other points make sense, although I'm now worried that abstaining about near-term human-centric charities will count as implicit endorsement. I don't know very much about quantitatively analysing interventions though, so it's plausible that my claims in this comment are wrong.

I think we’re still talking past each other here.

You seem to be implicitly focusing on the question ‘how certain are we these will turn out to be best’. I’m focusing on the question ‘Denise and I are likely to make a donation to near-term human-centric causes in the next few months; is there something I should be donating to above Givewell charities’.

Listing unaccounted-for second order effects is relevant for the first, but not decision-relevant until the effects are predictable-in-direction and large; it needs to actually impact my EV meaningfully. Currently, I’m not seeing a clear argument for that. ‘Might have wildly large impacts’, ‘very rough estimates’, ‘policy can have enormous effects’...these are all phrases that increase uncertainty rather than concretely change EVs and so are decision-irrelevant. (That’s not quite true; we should penalise rough things’ calculated EV more in high-uncertainty environments due to winners’ curse effects, but that’s secondary to my main point here).
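To illustrate that winners'-curse penalty with a toy simulation of my own (all numbers hypothetical): when every option has the same true value, the option with the highest noisy EV estimate systematically overstates its value, and the overstatement grows with the estimation noise.

```python
# Toy winners'-curse simulation. Assumption: every option has the same true
# value (1.0), but we only see noisy estimates of it. Selecting the option
# with the highest estimate then systematically overstates its true value,
# and the overstatement scales with the noise level.
import random

random.seed(0)

def expected_overestimate(n_options: int, noise: float, trials: int = 2000) -> float:
    """Average (best estimate - its true value) when all true values are 1.0."""
    total = 0.0
    for _ in range(trials):
        estimates = [1.0 + random.gauss(0, noise) for _ in range(n_options)]
        total += max(estimates) - 1.0   # true value of every option is 1.0
    return total / trials

low  = expected_overestimate(n_options=20, noise=0.1)
high = expected_overestimate(n_options=20, noise=1.0)
print(low, high)  # the high-noise overestimate is roughly 10x larger
```

This is the sense in which rough EV calculations in high-uncertainty areas should be penalised relative to equally-high EV calculations in well-measured areas.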

Another way of putting it is that this is the difference between one’s confidence level that what you currently think is best will still be what you think is best 20 years from now, versus trying to identify the best all-things-considered donation opportunity right now with one’s limited information.

So concretely, I think it's very likely that in 20 years I'll think one of the >20 alternatives I've briefly considered will look like it was a better use of my money than Givewell charities, due to the uncertainty you're highlighting. But I don't know which one, and I don't expect it to outperform 20x, so picking one essentially at random still looks pretty bad.

A non-random way to pick would be if Open Phil, or someone else I respect, shifted their equivalent donation bucket to some alternative. AFAIK, this hasn’t happened. That’s the relevance of those decisions to me, rather than any belief that they’ve done a secret Uber-Analysis.

Hmm, I agree that we're talking past each other. I don't intend to focus on ex post evaluations over ex ante evaluations. What I intend to focus on is the question: "when an EA makes the claim that GiveWell charities are the charities with the strongest case for impact in near-term human-centric terms, how justified are they?" Or, relatedly, "How likely is it that somebody who is motivated to find the best near-term human-centric charities possible, but takes a very different approach than EA does (in particular by focusing much more on hard-to-measure political effects) will do better than EA?"

In my previous comment, I used a lot of phrases which you took to indicate the high uncertainty of political interventions. My main point was that it's plausible that a bunch of them exist which will wildly outperform GiveWell charities. I agree I don't know which one, and you don't know which one, and GiveWell doesn't know which one. But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments publicly, in a way that we could learn from if we were more open to less quantitative analysis? (Alternatively, could someone know if they tried? But let's go with the former for now.)

In other words, consider two possible worlds. In one world GiveWell charities are in fact the most cost-effective, and all the people doing political advocacy are less cost-effective than GiveWell ex ante (given publicly available information). In the other world there's a bunch of people doing political advocacy work which EA hasn't supported even though they have strong, well-justified arguments that their work is very impactful (more impactful than GiveWell's top charities), because that impact is hard to quantitatively estimate. What evidence do we have that we're not in the second world? In both worlds GiveWell would be saying roughly the same thing (because they have a high bar for rigour). Would OpenPhil be saying different things in different worlds? Insofar as their arguments in favour of GiveWell are based on back-of-the-envelope calculations like the ones I just saw, then they'd be saying the same thing in both worlds, because those calculations seem insufficient to capture most of the value of the most cost-effective political advocacy. Insofar as their belief that it's hard to beat GiveWell is based on other evidence which might distinguish between these two worlds, they don't explain this in their blog post - which means I don't think the post is strong evidence in favour of GiveWell top charities for people who don't already trust OpenPhil a lot.

But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments [that specific intervention X will wildly outperform] publicly, in a way that we could learn from if we were more open to less quantitative analysis?


I agree with this. I think the best way to settle this question is to link to actual examples of someone making such arguments. Personally, my observation from engaging with non-EA advocates of political advocacy is that they don't actually make a case; when I cash out people's claims it usually turns out they are asserting 10x - 100x multipliers, not 100x - 1000x multipliers, let alone higher than that. It appears the divergence in our bottom lines is coming from my cosmopolitan values and low tolerance for act/omission distinctions, and hopefully we at least agree that if even the entrenched advocate doesn't actually think their cause is best under my values, I should just move on. 

As an aside, I know you wrote recently that you think more work is being done by EA's empirical claims than by its moral claims. I think this is credible for longtermism but mostly false for Global Health/Poverty. People appear to agree they can save lives in the developing world incredibly cheaply, in fact usually giving lower numbers than I think are possible. We aren't actually that far apart on the empirical state of affairs. They just don't want to. They aren't refusing because they have even better things to do; most people do very little. Or as Rob put it:

Many people donate a small fraction of their income, despite claiming to believe that lives can be saved for remarkably small amounts. This suggests they don’t believe they have a duty to give even if lives can be saved very cheaply – or that they are not very motivated by such a duty.

I think that last observation would also be my answer to 'what evidence do we have that we aren't in the second world?' Empirically, most people don't care, and most people who do care are not trying to optimise for the thing I am optimising for (in many cases it's debatable whether they are trying to optimise at all). So it would be surprising if they hit the target anyway, in much the same way it would be surprising if AMF were the best way to improve animal welfare.

Thanks for writing this up! I've found this thread super interesting to follow, and it's shifted my view on a few important points.

One lingering thing that seems super important is longtermism vs prioritising currently existing people. It still seems to me that GiveWell charities aren't great from a longtermist perspective, but that the vast majority of people are not longtermists. Which creates a weird tension when doing outreach, since I rarely want to begin by trying to pitch longtermism, but it seems disingenuous to pitch GiveWell charities.

Given that many EAs are not longtermist though, this seems overall fine for the "is the movement massively misleading people" question.

I don't think that the moral differences between longtermists and most people in similar circles (e.g. WEIRD) are that relevant, actually. You don't need to be a longtermist to care about massive technological change happening over the next century. So I think it's straightforward to say things like "We should try to have a large-scale moral impact. One very relevant large-scale harm is humans going extinct; so we should work on things which prevent it".

This is what I plan to use as a default pitch for EA from now on.

Thank you for writing this post-- I have the same intuition as you about this being very misleading and found this thread really helpful.

Here's Rob Wiblin:

We have been taking on the enormous problem of ‘how to help others do the most good’ and had to start somewhere. The natural place for us, GiveWell and other research groups to ‘cut our teeth’ was by looking at the cause areas and approaches where the empirical evidence was strongest, such as the health improvement from anti-malarial bednets, or determining in which careers people could best ‘earn to give’.
Having learned from that research experience we are in a better position to evaluate approaches to systemic change, which are usually less transparent or experimental, and compare them to non-systemic options.

From my perspective at least, this seems like political spin. If advocacy for anti-malarial bednets was mainly intended as a way to "cut our teeth", rather than a set of literal claims about how to do the most good, then EA has been systematically misleading people for years.

Nor does it seem to me that we're actually in a significantly better position to evaluate approaches to systemic change now, except insofar as we've attracted more people. But if those people were attracted because of our misleading claims, then this is not a defence.

Hi Richard, I just wanted to say that I appreciate you asking these questions! Based on the number of upvotes you have received, other people might be wondering the same, and it's always useful to propagate knowledge like Alex has written up further.

I would have appreciated it even more if you had not directly jumped to accusing EA of being misleading (without any references) before waiting for any answers to your question.

This seems reasonable. On the other hand, it's hard to give references to a broad pattern of discourse.

Maybe the key contention I'm making here is that "doing the most good per dollar" and "doing the most good that can be verified using a certain class of methodologies" are very different claims. And the more that class of methodologies differs from most people's intuitive conception of how to evaluate things, the more important it is to clarify that point.

Or, to be more concrete, I believe (with relatively low confidence, though) that:

  • Most of the people whose donations have been influenced by EA would, if they were trying to donate to do as much good as possible without any knowledge of EA, give money to mainstream systemic change (e.g. political activism, climate change charities).
  • Most of those people believe that there's a consensus within EA that donations to Givewell's top charities do more good than these systemic change donations, to a greater degree than there actually is.
  • Most of those people would then be surprised to learn how little analysis EA has done on this question, e.g. they'd be surprised at how limited the scope of charities Givewell considers actually is.
  • A significant part of these confusions is due to EA simplifying its message in order to attract more people - for example, by claiming to have identified the charities that "do the most good per dollar", or by comparing our top charities to typical mainstream charities instead of the mainstream charities that people in EA's target audience previously believed did the most good per dollar (before hearing about EA).

Most of those people believe that there's a consensus within EA that donations to Givewell's top charities do more good than these systemic change donations, to a greater degree than there actually is.


Related to my other comment, but what would you guess is the split of donations from EAs to Givewell's top charities versus 'these systemic change donations'?

I ask because if it's highly skewed, I would be strongly against pretending that we're highly conflicted on this question while the reality of where we give says something very different; this question of how to represent ourselves accurately cuts both ways, and it is very tempting to try and be 'all things to all people'. 

All things considered, the limited data I have combined with anecdata from a large number of EAs suggests to me that it is in fact highly skewed.

A significant part of these confusions is due to EA simplifying its message in order to attract more people

I think this is backwards. The 'systemic change' objection, broadly defined, is by far the most common criticism of EA. Correspondingly, I think the movement would be much larger were it better-disposed to such interventions, largely neutralising this complaint and so appealing to a (much?) wider group of people. 

One use case of the EA forum which we may not be focusing on enough:

There are some very influential people who are aware of and somewhat interested in EA. Suppose one of those people checks in on the EA forum every couple of months. Would they be able to find content which is interesting, relevant, and causes them to have a higher opinion of EA? Or if not, what other mechanisms might promote the best EA content to their attention?

The "Forum Favourites" partly plays this role, I guess. Although because it's forum regulars who are most likely to highly upvote posts, I wonder whether there's some divergence between what's most valuable for them and what's most valuable for infrequent browsers.

“...whether there's some divergence between what's most valuable for them and what's most valuable for infrequent browsers.”

I’d strongly guess that this is the case. Maybe Community posts should be removed from Forum favorites?

By default, Community posts don't show up in Forum Favorites, or on the Frontpage at all. You have to check a box to show them.

My recommendation for people interested in EA is to read the EA Newsletter, which filters more heavily than the Forum. effectivealtruism.org ranks first in Google for EA, and has a bunch of different newsletter signup boxes.

As for the Forum, this is part of why the Motivation Series exists (and will soon be linked to from the homepage). As for more up-to-date content, I'd think that the average high-karma Frontpage post probably does a reasonable job of representing what people in EA are working on. But I'd be interested to hear others' thoughts on what the Forum could change to better meet this use case!

The concept of cluelessness seems like it's pointing at something interesting (radical uncertainty about the future) but has largely been derailed by being interpreted in the context of formal epistemology. Whether or not we can technically "take the expected value" even under radical uncertainty is both a confused question (human cognition doesn't fit any of these formalisms!), and also much less interesting than the question of how to escape from radical uncertainty. In order to address the latter, I'd love to see more work that starts from Bostrom's framing in terms of crucial considerations.

There was a lot of discussion in the early days of EA about replacement effects in jobs, and also about giving now vs giving later (for a taste of how much, see my list here, and Julia Wise's disjoint list here).

The latter debate is still fairly prominent now. But I think that arguments about replacement effects became largely redundant when we started considering the value of becoming excellent in high-leverage domains like altruistically-focused research (for which the number of jobs isn't fixed like it is in, say, medicine).

One claim that I haven't seen yet, though: that the debate about giving now vs giving later is redundant for similar reasons (albeit on a larger scale). In other words, the benefits of building up strong effectively altruistic communities with sound methodologies and proven track records seem much more important than any benefits of saving discussed so far, because if we do well then we'll attract orders of magnitude more money later on (and if we don't, then maybe we don't deserve the money later on). Like in the debate about replacement effects, removing the assumption that certain factors are fixed (number of jobs and amount of money, respectively) transforms the way we should think about the problem.

I think that's a good point, though I've heard it discussed a fair amount. One way of thinking about it is that 'direct work' also has movement building benefits. This makes the ideal fraction of direct work in the portfolio higher than it first seems.

Cool, good to know. Any pointers to places where people have made this argument at more length?

I'm not sure. Unfortunately there's a lot of things like this that aren't yet written up. There might be some discussion of the movement building value of direct work in our podcast with Phil Trammell.

I see. Yeah, Phil and Rob do discuss it, but focused on movement-building via fundraising/recruitment/advocacy/etc, rather than via publicly doing amazing direct work. Perhaps they were implicitly thinking about the latter as well, though. But I suspect the choice of examples shapes people's impression of the argument pretty significantly.

E.g. when it comes to your individual career, you'll think of "investing in yourself" very differently if the central examples are attending training programs and going to university, versus if the central example is trying to do more excellent and eye-catching work.

Agree. I've definitely heard the other point though - it's a common concern with 80k among donors (e.g. maybe 'concrete problems in AI safety' does far more to get people into the field than an explicit movement building org ever would). Not sure where to find a write up!

What are some examples of high-leverage domains aside from AI technical research you can think of? I'm currently transitioning away from nuclear engineering because of the lack of leverage in such a tightly controlled (albeit still highly impactful) industry.