This is a special post for quick takes by richard_ngo. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

(COI note: I work at OpenAI. These are my personal views, though.)

My quick take on the "AI pause debate", framed in terms of two scenarios for how the AI safety community might evolve over the coming years:

  1. AI safety becomes the single community that's the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that's the place to go if you want to nerd out about the models. It feels like the early days of hacker culture. There's a constant flow of ideas and brainstorming in those spaces; the core alignment ideas are standard background knowledge for everyone there. There are hackathons where people build fun demos, and people figuring out ways of using AI to augment their research. Constant interactions with the models allows people to gain really good hands-on intuitions about how they work, which they leverage into doing great research that helps us actually understand them better. When the public ends up demanding regulation, there's a large pool of competent people who are broadly reasonable about the risks, and can slot into the relevant institutions and make them work well.
  2. AI sa
...

I think it would be helpful for you to mention and highlight your conflict-of-interest here.

I remember becoming much more positive about ads after starting work at Google. After I left, I slowly became more cynical about them again, and now I'm back down to ~2018 levels. 

EDIT: I don't think this comment should get more than say 10-20 karma. I think it was a quick suggestion/correction that Richard ended up following, not too insightful or useful.

good call, will edit in

4
Vasco Grilo🔸
Hi Linch! Cool that you pointed this out. I have the impression comments like yours just above often get lots of karma on the EA Forum, particularly when coming from people who already have lots of karma. I wonder whether that is good.
6
Linch
Yeah I think it's suboptimal. It makes sense that the comment had a lot of agree-votes. It'd also make more sense to upvote if Richard didn't add in his COI after my comment, because then making the comment go up in visibility had a practical value of a) making sure almost everybody who reads Richard's comment notices the COI and b) making it more likely for Richard to change his mind. But given that Richard updated very quickly (in <1 hour), I think additional upvotes after his edit were superfluous. 
6
[anonymous]
I agree there's a bias where the points more popular people make are evaluated more generously, but in this case I think the karma is well deserved. The COI point is important, and Linch highlights its importance with a relevant yet brief personal story. And while the comment was quick for Linch to make, some people in the EA community would hesitate to point out a conflict of interest in public for fear of being seen as a troublemaker, so the counterfactual impact is higher than it might seem. I strongly upvoted the comment. 

I appreciate you drawing attention to the downside risks of public advocacy, and I broadly agree that they exist, but I also think the (admittedly) exaggerated framings here are doing a lot of work (basically just intuition pumping, for better or worse). The argument would be just as strong in the opposite direction if we swap the valence and optimism/pessimism of the passages: what if, in scenario one, the AI safety community continues making incremental progress on specific topics in interpretability and scalable oversight but achieves too little too slowly and fails to avert the risk of unforeseen emergent capabilities in large models driven by race dynamics, or even worse, accelerates those dynamics by drawing more talent to capabilities work? Whereas in scenario two, what if the AI safety movement becomes similar to the environmental movement by using public advocacy to build coalitions among diverse interest groups, becoming a major focus of national legislation and international cooperation, moving hundreds of billions of $ into clean tech research, etc.

Don't get me wrong — there's a place for intuition pumps like this, and I use them often. But I also think that both techni...

Yepp, I agree that I am doing an intuition pump to convey my point. I think this is a reasonable approach to take because I actually think there's much more disagreement on vibes and culture than there is on substance (I too would like AI development to go more slowly). E.g. AI safety researchers paying for ChatGPT obviously brings in a negligible amount of money for OpenAI, and so when people think about that stuff the actual cognitive process is more like "what will my purchase signal and how will it influence norms?" But that's precisely the sort of thing that has an effect on AI safety culture independent of whether people agree or disagree on specific policies—can you imagine hacker culture developing amongst people who were boycotting computers? Hence why my takeaway at the end of the post is not "stop advocating for pauses" but rather "please consider how to have positive effects on community culture and epistemics, which might not happen by default".

I would be keen to hear more fleshed-out versions of the passages with the valences swapped! I like the one you've done; although I'd note that you're focusing on the outcomes achieved by those groups, whereas I'm focusing also ...

This kind of reads as saying that 1 would be good because it's fun (it's also kind of your job, right?) and 2 would be bad because it's depressing. 

Huh, it really doesn't read that way to me. Both are pretty clear causal paths to "the policy and general coordination we get are better/worse as a result."

5
Holly_Elmore
That too, but there was a clear indication that 1 would be fun and invigorating and 2 would be depressing.

I don't think this is a coincidence—in general I think it's much easier for people to do great research and actually figure stuff out when they're viscerally interested in the problems they're tackling, and excited about the process of doing that work.

Like, all else equal, work being fun and invigorating is obviously a good thing? I'm open to people arguing that the benefits of creating a depressing environment are greater (even if just in the form of vignettes like I did above), e.g. because it spurs people to do better policy work. But falling into unsustainable depressing environments which cause harmful side effects seems like a common trap, so I'm pretty cautious about it. 

7
Holly_Elmore
Totally. But OP kinda made it sound like the fact that you found 2 depressing was evidence it was the wrong direction. I think advocacy could be fun and full of its own fascinating logistical and intellectual questions as well as lots of satisfying hands-on work.

"hesitate to pay for ChatGPT because it feels like they're contributing to the problem"

Yep that's me right now and I would hardly call myself a Luddite (maybe I am tho?)

Can you explain why you frame this as an obviously bad thing to do? Refusing to help fund the most cutting edge AI company, which has been credited by multiple people with spurring on the AI race and attracting billions of dollars to AI capabilities seems not-unreasonable at the very least, even if that approach does happen to be wrong.

Sure, there are decent arguments against not paying for ChatGPT, like the LLM not being dangerous in and of itself, and the small amount of money we pay not making a significant difference, but it doesn't seem to be prima-facie-obviously-net-bad-Luddite behavior, which is what you seem to paint it as in the post.

Obviously if individual people want to use or not use a given product, that's their business. I'm calling it out not as a criticism of individuals, but in the context of setting the broader AI safety culture, for two broad reasons:

  1. In a few years' time, the ability to use AIs will be one of the strongest drivers of productivity, and not using them will be... actually, less Luddite, and more Amish. It's fine for some people to be Amish, but for AI safety people (whose work particularly depends on understanding AI well) not using cutting-edge AI is like trying to be part of the original hacker culture while not using computers. 
  2. I think that the idea of actually trying to do good effectively is a pretty radical one, and scope-sensitivity is a key component of that. Without it, people very easily slide into focusing on virtue signalling or ingroup/outgroup signalling (e.g. climate activists refusing to take flights/use plastic bags/etc), which then has knock-on effects in who is attracted to the movement, etc. On twitter I recently criticized a UK campaign to ban a specific dog breed for not being very scope-sensitive; you can think of this as similar to that.
4
NickLaing
I'm a bit concerned that both of your arguments here are a bit strawmannish, but again I might be missing something.

1. Indeed, my comment was regarding the 99.999 percent of people (including myself) who are not AI researchers. I completely agree that researchers should be working on the latest models and paying for ChatGPT-4, but that wasn't my point. I think it's borderline offensive to call people "Amish" who boycott potentially dangerous tech which can increase productivity. First, it could be offensive to the Amish, as you seem to be using it as a pejorative; and second, boycotting any one technology for harm-minimisation reasons while using all other technology can't be compared to the Amish way of life. I'm not saying boycott all AI, that would be impossible anyway. Just perhaps not contributing financially to the company making the most cutting-edge models.
2. This is a big discussion, but I think discarding not paying for ChatGPT under the banner of poor scope sensitivity and virtue signalling is weak at best and strawmanning at worst. The environmentalists I know who don't fly don't do it to virtue signal at all; they are doing it to help the world a little and show integrity with their lifestyles. This may or may not be helpful to their cause, but the little evidence we have also seems to show that more radical actions like this do not alienate regular people, but instead pull people towards the argument you are trying to make, in this case that an AI frontier arms race might be harmful. I actually changed my mind on this on seeing the forum posts here a few months ago; I used to think that radical life decisions and activism were likely to be net harmful too. What research we have on the topic shows that more radical actions attract more people to mainstream climate/animal activist ideals, so I think your claim that it "has knock-on effects in who is attracted to the movement, etc." is more likely to be wrong than right.
8
richard_ngo
I'd extend this not just to include AI researchers, but people who are involved in AI safety more generally. But on the question of the wider population, we agree. "Show integrity with their lifestyles" is a nicer way of saying "virtue signalling"; it just happens to be signalling a virtue that you agree with. I do think it's an admirable display of non-selfishness (and far better than vice signalling, for example), but so too are plenty of other types of costly signalling, like asceticism. A common failure mode for groups of people trying to do good is "pick a virtue that's somewhat correlated with good things and signal the hell out of it until it stops being correlated". I'd like this not to happen in AI safety (more than it already has: I think this has already happened with pessimism-signalling, and conversely happens with optimism-signalling in accelerationist circles).

"show integrity with their lifestyles" is a nicer way of saying "virtue signalling",

I would describe it more as a spectrum. On the more pure "virtue signaling" end, you might choose one relatively unimportant thing like signing a petition, then blast it all over the internet while not taking other, more important actions that would help the cause.

Whereas on the other end of the spectrum, "showing integrity with lifestyle" to me means something like making a range of lifestyle choices which might make only a small difference to your cause, while making you feel like you are doing what you can on a personal level. You might not talk about these very much at all.

Obviously there are a lot of blurry lines in between.

Maybe my friends are different from yours, but the climate activists I know often don't fly, don't drive, and don't eat meat. And they don't talk about it much or "signal" this either. But when they are asked about it, they explain why. This means when they get challenged in the public sphere, both neutral people and their detractors lack personal ammunition to cast aspersions on their arguments, so their position becomes more convincing.

I don't call that virtue signaling, but I suppose it's partly semantics.

One exchange that makes me feel particularly worried about Scenario 2 is this one here, which focuses on the concern that there's:

No rigorous basis for the claim that the use of mechanistic interpretability would "open up possibilities" for long-term safety. And plenty of possibilities for corporate marketers – to chime in on mech-int's hypothetical big breakthroughs. In practice, we may help AI labs again – accidentally – to safety-wash their AI products.

I would like to point to this as a central example of the type of thing I'm worried about in scenario 2: the sort of doom spiral where people end up actively opposed to the most productive lines of research we have, because they're conceiving of the problem as being arbitrarily hard. This feels very reminiscent of the environmentalists who oppose carbon capture or nuclear energy because it might make people feel better without solving the "real problem".

It looks like, on net, people disagree with my take in the original post. So I'd like to ask the people who disagree: do you have reasons to think that the sort of position I've quoted here won't become much more common as AI safety becomes much more activism-focused? Or do you think it would be good if it did?

7
RyanCarey
I just disagreed with the OP because it's a false dichotomy; we could just agree with the true things that activists believe, and not the false ones, and not go based on vibes. We desire to believe that mech-interp is mere safety-washing iff it is, and so on.
5
Remmelt
The problem here is that doing insufficient safety R&D at AI labs enables those labs to market themselves as seriously caring about safety, and thus to present their ML products as fit for release. You need to consider that, especially since you work at an AI lab.
2
quinn
Slightly conflicted agree vote: your model here offloads so much to judgment calls that fall on people who are vulnerable to perverse incentives (like, alignment/capabilities as a binary distinction is a bad frame, but it seems like anyone who'd be unusually well suited to thinking clearly about its alternatives makes more money and has a less stressful life if their beliefs fall some ways vs. others).

Other than that, I'm aware that no one's really happy about the way they trade off "you could Copenhagen-ethics your way out of literally any action in the limit" against "saying that the counterfactual a-hole would do it worse if I didn't is not a good argument". It seems like a law-of-opposite-advice situation, maybe? As in, some people in the blasé/unilateral/power-hungry camp could stand to be nudged one way, and some people in the scrupulous camp could stand to be nudged another.

It also matters that the "oppose carbon capture or nuclear energy because it might make people feel better without solving the 'real problem'" environmentalists have very low standards even when you condition on them being environmentalists. That doesn't mean they can't be memetically adaptive and then influential, but it might be tactically important (i.e. you have a messaging problem instead of a more virtuous actually-trying-to-think-clearly problem).

history is full of cases where people dramatically underestimated the growth of scientific knowledge, and its ability to solve big problems.

There are 2 concurrent research programs, and if one program (capability) completes before the other one (alignment), we all die, but the capability program is an easier technical problem than the alignment program. Do you disagree with that framing? If not, then how does "research might proceed faster than we expect" give you hope rather than dread?

Also, I'm guessing you would oppose a worldwide ban starting today on all "experimental" AI research (i.e., all use of computing resources to run AIs) till the scholars of the world settle on how to keep an AI aligned through the transition to superintelligence. That's my guess, but please confirm. In your answer, please imagine that the ban is feasible and in fact can be effective ("leak-proof"?) enough to give the AI theorists all the time they need to settle on a plan, even if that takes many decades. In other words, please indulge me this hypothetical question, because I suspect it is a crux.

"Settled" here means that a majority of non-senile scholars / researchers who've worked full-time on th... (read more)

There are 2 concurrent research programs, and if one program (capability) completes before the other one (alignment), we all die, but the capability program is an easier technical problem than the alignment program. Do you disagree with that framing?

Yepp, I disagree on a bunch of counts.

a) I dislike the phrase "we all die", nobody has justifiable confidence high enough to make that claim, even if ASI is misaligned enough to seize power there's a pretty wide range of options for the future of humans, including some really good ones (just like there's a pretty wide range of options for the future of gorillas, if humans remain in charge).

b) Same for "the capability program is an easier technical problem than the alignment program". You don't know that; nobody knows that; Lord Kelvin/Einstein/Ehrlich/etc would all have said "X is an easier technical problem than flight/nuclear energy/feeding the world/etc" for a wide range of X, a few years before each of those actually happened.

c) The distinction between capabilities and alignment is a useful concept when choosing research on an individual level; but it's far from robust enough to be a good organizing principle on a societal level. Th...

8
Lukas_Gloor
Even if we should be undecided here, there's an asymmetry where, if you get alignment too early, that's okay, but getting capabilities before alignment is bad. Unless we know that alignment is going to be easier, pushing forward on capabilities without an outsized alignment benefit seems needlessly risky.

On the object level, if we think the scaling hypothesis is roughly correct (or "close enough"), or if we consider it telling that evolution probably didn't have the sophistication to install much specialized brain circuitry between humans and other great apes, then it seems like getting capabilities past some universality and self-improvement/self-rearrangement ("learning how to become better at learning/learning how to become better at thinking") threshold cannot be that difficult? Especially considering that we arguably already have "weak AGI." (But maybe you have an inside view that says we still have huge capability obstacles to overcome?)

At the same time, alignment research seems to be in a fairly underdeveloped state (at least my impression as a curious outsider), so I'd say "alignment is harder than capabilities" seems almost certainly true. Factoring in lots of caveats about how they aren't always cleanly separable, and so on, doesn't seem to change that.
2
richard_ngo
I am not disputing this :) I am just disputing the factual claim that we know which is easier. Are you making the claim that we're almost certainly not in a world where alignment is easy? (E.g. only requires something like Debate/IA and maybe some rudimentary interpretability techniques.) I don't see how you could know that.
2
Lukas_Gloor
I'm not sure if I'm claiming quite that, but maybe I am. It depends on operationalizations.

Most importantly, I want to flag that even the people who are optimistic about "alignment might turn out to be easy" probably lose their optimism if we assume that timelines are sufficiently short. Like, would you/they still be optimistic if we for sure had <2 years? It seems to me that more people are confident that AI timelines are very short than people are confident that we'll solve alignment really soon. In fact, no one seems confident that we'll solve alignment really soon. So, the situation already feels asymmetric.

On assessing alignment difficulty, I sympathize most with Eliezer's claims that it's important to get things right on the first try and that engineering progress among humans almost never happened to be smoother than initially expected (and so is a reason for pessimism in combination with the "we need to get it right on the first try" argument). I'm less sure how much I buy Eliezer's confidence that "niceness/helpfulness" isn't easy to train/isn't a basin of attraction. He has some story about how prosocial instincts evolved in humans for super-contingent reasons, so that it's highly unlikely to re-play in ML training. And there I'm more like "Hm, hard to know."

So, I'm not pessimistic for inherent technical reasons. It's more that I'm pessimistic because I think we'll fumble the ball even if we're in the lucky world where the technical stuff is surprisingly easy. That said, I still think "alignment difficulty?" isn't the sort of question where the ignorance prior is 50-50. It feels like there are more possibilities for it to be hard than easy.
0
rhollerith
Do you concede that frontier AI research is intrinsically dangerous? That it is among the handful of the most dangerous research programs ever pursued by our civilization? If not, I hope you can see why those who do consider it intrinsically dangerous are not particularly mollified or reassured by "well, who knows? maybe it will turn out OK in the end!"

When I wrote "the alignment program" above, I meant something specific, which I believe you will agree is robust enough to organize society (if only we could get society to go along with it): namely, I meant thinking hard together about alignment without doing anything dangerous like training up models with billions of parameters till we have at least a rough design that most of the professional researchers agree is more likely to help us than to kill us even if it turns out to have super-human capabilities--even if our settling on that design takes us many decades. E.g., what MIRI has been doing the last 20 years.

It makes me sad that you do not see that "we all die" is the default outcome that naturally happens unless a lot of correct optimization pressure is applied by the researchers to the design of the first sufficiently-capable AI before the AI is given computing resources. It would have been nice to have someone with your capacity for clear thinking working on the problem. Are you sure you're not overly attached (e.g., for intrapersonal motivational reasons) to an optimistic vision in which AI research "feels like the early days of hacker culture" and "there are hackathons where people build fun demos"?
7
Lukas_Gloor
Interesting and insightful framing! I think the main concern I have is that your scenario 1 doesn't engage much with the idea of capability info hazards and the point that some of the people who nerd out about technical research lack moral seriousness or big-picture awareness to not always push ahead.
4
richard_ngo
Yepp, that seems right. I do think this is a risk, but I also think it's often overplayed in EA spaces. E.g. I've recently heard a bunch of people talking about the capability infohazards that might arise from interpretability research. To me, it seems pretty unlikely that this concern should prevent people from doing or sharing interpretability research.

What's the disagreement here? One part of it is just that some people are much more pessimistic about alignment research than I am. But it's not actually clear that this by itself should make a difference, because even if they're pessimistic they should "play to their outs", and "interpretability becomes much better" seems like one of the main ways that pessimists could be wrong.

The main case I see for being so concerned about capability infohazards as to stop interpretability research is if you're pessimistic about alignment but optimistic about governance. But I think that governance will still rely on e.g. a deep understanding of the systems involved. I'm pretty skeptical about strategies which only work if everything is shut down (and Scenario 2 is one attempt to gesture at why).
6
Minh Nguyen
Re: Hacker culture

I'd like to constructively push back on this: the research and open-source communities outside AI Safety that I'm embedded in are arguably just as, if not more, hands-on, since their attitude towards deployment is usually more ... unrestricted. For context, I mess around with generative agents and learning agents. I broadly agree that the AI Safety community is very smart people working on very challenging and impactful problems. I'm just skeptical that what you've described is particularly unique to AI Safety, and think that description would apply to many ML-related spaces. Then again, I could be extremely inexperienced and unaware of the knowledge gap between top AI Safety researchers and everyone else.

Re: Environmentalism

I was a climate activist organising FridaysForFuture (FFF) protests, and I don't recall this was ever the prevailing perception/attitude. Mainstream activist movements and scientists put up a united front, and they still mutually support each other today. Even if it was superficial, FFF always emphasised "listen to the science". I'm also fairly certain the environmentalist movement was a counterfactual net positive, with Will MacAskill himself commenting on the role of climate advocacy in funding solar energy research and accelerating climate commitments in What We Owe The Future. However, I will admit that the anti-nuclear stance was exactly as dumb as you've implied, and it embarrasses me how many activists expressed it.

Re: Enemy of my Enemy

Personally, I draw a meaningful distinction between being anti-AI capabilities and pro-AI Safety. Both are strongly and openly concerned about rapid AI progress, but the two groups have very different motivations, proposed solutions, and degrees of epistemic rigour. Being anti-AI does not mean pro-AI Safety; the former is a much larger umbrella movement of people expressing strong opinions on a disruptive, often misunderstood field.
4
richard_ngo
I think we agree: I'm describing a possible future for AI safety, not making the claim that it's anything like this now.

Not sure what you mean by this, but in some AI safety spaces ML capabilities researchers are seen as opponents. I think the relevant analogy here would be, e.g., an oil executive who's interested in learning more about how to reduce the emissions their company produces, who I expect would get a pretty cold reception. Re "alienation", I'm also thinking of stuff like the climate activists who are blocking highways, blocking offices, etc.

Makes sense! Yeah, I agree that a lot has been done to accelerate research into renewables; I just feel less confident than you about how this balances out compared with nuclear.

I like this distinction, feels like a useful one. Thanks for the comment!
4
trevor1
I think that the 2-scenario model described here is very important, and should be a foundation for thinking about the future of AI safety. However, I think that both scenarios will also be compromised to hell. The attack surface for the AI safety community will be massive in both scenarios: ludicrously massive in scenario #2, but nonetheless still nightmarishly large in scenario #1. Assessment of both scenarios revolves around how inevitable you think slow takeoff is; I think that some aspects of slow takeoff, such as intelligence agencies, already started around 10 years ago, and at this point just involve a lot of finger-crossing and hoping for the best.
2
Gerald Monroe
Something else you may note here. The reason environmentalists are wrong is that they focus on the local issue and ignore the larger picture.

Nuclear energy: they focus on the local risk of a meltdown or waste disposal, and ignore the carbon-emitting power plants that must be there somewhere else for each nuclear plant they successfully block. Carbon emissions are global; even the worst nuclear disaster is local.

Geoengineering: they simply won't engage on actually discussing the cost-benefit ratios. Their reasoning shuts down, or they argue "we can't know the consequences" as an argument to do nothing. This ignores the bigger picture that temperatures are rising and will continue to rise in all scenarios.

Land use reform: they focus on the local habitat loss to convert a house or an empty lot to apartments, and ignore the conservation of the number of humans. Each human who can't live in the apartment will live somewhere, and probably at lower density with more total environmental damage.

Demanding AI pauses: this locally stops model training, if approved, in the USA and EU. The places they can see if they bring out the signs in San Francisco. It means that top AI lab employees will be laid off, taking any "secret sauce" with them to work for foreign labs who are not restricted. It also frees up wafer production for foreign labs to order compute on the same wafers. If Nvidia is blocked from manufacturing H100s, it frees up a share in the market for a foreign chip vendor. It has minimal, possibly zero effect on the development of AGI if you think wafer production is the rate-limiting factor.

AI Pause generally means a global, indefinite pause on frontier development. I'm not talking about a unilateral pause and I don't think any country would consider that feasible.

1
Gerald Monroe
That's a reasonable position, but if a global pause on nuclear weapons could not be agreed on, what's different about AI? If AI works to even a fraction of its potential, it's a more useful tool than a nuclear weapon, which is mostly an expensive threat you can't actually use most of the time, right? Why would a multilateral agreement on this ever happen? Assuming you agree AI is more tempting than nukes, what would lead to an agreement being possible?

It currently seems likely to me that we're going to look back on the EA promotion of bednets as a major distraction from focusing on scientific and technological work against malaria, such as malaria vaccines and gene drives.

I don't know very much about the details of either. But it seems important to highlight how even very thoughtful people trying very hard to address a serious problem still almost always dramatically underrate the scale of technological progress.

I feel somewhat mournful about our failure on this front; and concerned about whether the same is happening in other areas, like animal welfare, climate change, and AI risk. (I may also be missing a bunch of context on what actually happened, though—please fill me in if so.)


I understand the sentiment, but there's a lot here I disagree with. I'll discuss mainly one.

In the case of global health, I disagree that "thoughtful people trying very hard to address a serious problem still almost always dramatically underrate the scale of technological progress."

This doesn't fit the history of malaria and other infectious diseases, where the opposite has happened: optimism about technological progress has often exceeded reality.

About 60 years ago humanity was optimistic about eradicating malaria through technological progress. We had used (non-political) swamp draining and DDT spraying to massively reduce the global burden of malaria, wiping it out from countries like the USA and India. If you had run a prediction market in 1970, many malaria experts would have predicted we would have eradicated malaria by now. Vaccines were a vibrant topic of conversation at the time, with many in the 1960s believing a malaria vaccine would arrive before now.

Again in 1979 after smallpox was eradicated, if you asked global health people how many human diseases we would eradicate by 2023, I'm sure the answer would have been higher th... (read more)

Great comment, thank you :) This changed my mind.

4
richard_ngo
The article I linked above has changed my mind back again. Apparently the RTS,S vaccine has been in clinical trials since 1997. So the failure here wasn't just an abstract lack of belief in technology: the technology literally already existed the whole time that the EA movement (or anyone who's been in this space for less than two decades) has been thinking about it.

Do you think that if GiveWell hadn't recommended bednets/effective altruists hadn't endorsed bednets it would have led to more investment in vaccine development/gene drives etc.? That doesn't seem intuitive to me.

To me GiveWell fit a particular demand, which was for charitable donations that would have reliably high marginal impact. Or maybe to be more precise, for charitable donations recommended by an entity that made a good faith effort without obvious mistakes to find the highest reliable marginal impact donation. Scientific research does not have that structure since the outcomes are unpredictable. 

I don't think it makes sense to think of EA as a monolith which both promoted bednets and is enthusiastic about engaging with the kind of reasoning you're advocating here. My oversimplified model of the situation is more like:

  • Some EAs don't feel very persuaded by this kind of reasoning, and end up donating to global development stuff like bednets.
  • Some EAs are moved by this kind of reasoning, and decide not to engage with global development because this kind of reasoning suggests higher impact alternatives. They don't really spend much time thinking about how to best address global development, because they're doing things they think are more important.

(I think the EAs in the latter category have their own failure modes and wouldn't obviously have gotten the malaria thing right (assuming you're right that a mistake was made) if they had really tried to get it right, tbc.)

Thanks a lot, that makes sense. This comment no longer stands after the edits, so I have retracted it. Really appreciate the clarification!

(I'm not sure its intentional, but this comes across as patronizing to global health folks. Saying folks "don't want to do this kind of thinking" is both harsh and wrong. It seems like you suggest that "more thinking" automatically leads people down the path of "more important" things than global health, which is absurd.

Plenty of people have done plenty of thinking through an EA lens and decided that bed nets are a great place to spend lots of money which is great.

Plenty of people have done plenty of thinking through an EA lens and decided to focus on other things which is great.

One group might be right and the other might be wrong, but it is far from obvious or clear, and the differences of opinion certainly don't come from a lack of thought.

I think it helps to be kind and give folks the benefit of the doubt.)

[This comment is no longer endorsed by its author]

I think you're right that my original comment was rude; I apologize. I edited my comment a bit.

I didn't mean to say that the global poverty EAs aren't interested in detailed thinking about how to do good; they definitely are, as demonstrated e.g. by GiveWell's meticulous reasoning. I've edited my comment to make it less sound like I'm saying that the global poverty EAs are dumb or uninterested in thinking.

But I do stand by the claim that you'll understand EA better if you think of "promote AMF" and "try to reduce AI x-risk" as results of two fairly different reasoning processes, rather than as results of the same reasoning process. Like, if you ask someone why they're promoting AMF rather than e.g. insect suffering prevention, the answer usually isn't "I thought really hard about insect suffering and decided that the math doesn't work out", it's "I decided to (at least substantially) reject the reasoning process which leads to seriously considering prioritizing insect suffering over bednets".

(Another example of this is the "curse of cryonics".)

9
NickLaing
Nice one, makes much more sense now; appreciate the change a lot :). Have retracted my comment now (I think it can still be read; haven't mastered the forum even after hundreds of comments...)
5
richard_ngo
Makes sense, though I think that global development was enough of a focus of early EA that this type of reasoning should have been done anyway. I’m more sympathetic about it not being done after, say, 2017.

I think this has been thought about a few times since EA started.

In 2015 Max Dalton wrote about medical research and said the below. 

"GiveWell note that most funders of medical research more generally have large budgets, and claim that 'It's reasonable to ask how much value a new funder – even a relatively large one – can add in this context'. Whilst the field of tropical disease research is, as I argued above, more neglected, there are still a number of large foundations, and funding for several diseases is on the scale of hundreds of millions of dollars. Additionally, funding the development of a new drug may cost close to a billion dollars.

For these reasons, it is difficult to imagine a marginal dollar having any impact. However, as MacAskill argues at several points in Doing Good Better, this appears to only increase the riskiness of the donation, rather than reducing its expected impact."


In 2018, Peter Wildeford and Marcus A. Davis wrote about the cost-effectiveness of vaccines and suggested that a malaria vaccine is competitive with other global health opportunities.

2
Linch
Related: early discussion of gene drives in 2016.

I think I'd be more convinced if you backed your claim up with some numbers, even loose ones. Maybe I'm missing something, but imo there just aren't enough zeros for this to be a massive fuckup.

Fairly simple BOTEC:

  • 2 billion people at significant risk of malaria (WHO says 3 billion "at risk" but I assume the first 2 billion is at significantly higher risk than the last billion).
    • note that Africa has ~95% of cases/deaths and a population of 1.2 billion; I assume you can get a large majority of the benefits if you ignore northern Africa too. 
  • LLINs last 3 years.
  • a bednet covers ~1.5 people (can't find a source so just a guess; note that the main protected population for bednets are mothers and their young children, who usually sleep in the same bed).
  • Say LLINs cost ~$4.50 for simple math (AMF says $2, GiveWell says $5-6; I think it depends on how you do moral accounting)
  • So it costs $2B/year to cover almost all vulnerable people with bednets at current margins.
    • and likely <1B/year if we are fine with just covering the most vulnerable 5/6 of Africa.
  • At 5-10%/year cost of capital, this is equivalent to $20B-$40B to have bednets forever.
    • even less if it's more targeted.
  • Given how much mon
... (read more)
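The arithmetic in the BOTEC above can be checked with a short script. All figures here are the comment's own rough assumptions (per-net cost, coverage, lifespan, cost of capital), not authoritative data:

```python
# Rough check of the bednet BOTEC above; every input is the commenter's
# stated assumption, not an authoritative figure.
people_at_risk = 2_000_000_000   # ~2B people at significant malaria risk
people_per_net = 1.5             # guess: one LLIN covers ~1.5 people
net_lifespan_years = 3           # LLINs last ~3 years
cost_per_net = 4.50              # ~$4.50, between AMF's $2 and GiveWell's $5-6

nets_needed = people_at_risk / people_per_net              # ~1.33B nets
annual_cost = nets_needed / net_lifespan_years * cost_per_net

# Capitalize the annual cost as a perpetuity at a 5-10%/year cost of capital.
npv_low = annual_cost / 0.10
npv_high = annual_cost / 0.05

print(f"annual cost: ${annual_cost / 1e9:.1f}B")                   # ~$2.0B/year
print(f"NPV range: ${npv_low / 1e9:.0f}B-${npv_high / 1e9:.0f}B")  # ~$20B-$40B
```

This reproduces the comment's ~$2B/year and $20B-$40B "bednets forever" figures.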
3
richard_ngo
A different BOTEC: at 500k deaths per year and $5000 per death prevented by bednets, we'd have to get a year of vaccine speedup for $2.5 billion to match bednets. I agree that $2.5 billion to speed up development of vaccines by a year is tricky. But I expect that $2.5 billion, or $250 million, or perhaps even $25 million to speed up deployment of vaccines by a year is pretty plausible. I don't know the details, but apparently a vaccine approved in 2021 will only be rolled out widely in a few months, and another vaccine will be delayed until mid-2024: https://marginalrevolution.com/marginalrevolution/2023/10/what-is-an-emergency-the-case-of-rapid-malaria-vaccination.html

So I think it's less a question of whether EA could have piled more money on, and more a question of whether EA could have used that money plus our talent advantage to target key bottlenecks. (Plus the possibility of getting gene drives done much earlier, but I don't know how to estimate that.)
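The comparison in this BOTEC can be sanity-checked in a couple of lines; both inputs are the comment's rough assumptions:

```python
# Check of the vaccine-vs-bednets BOTEC above; inputs are the comment's
# rough assumptions, not authoritative data.
deaths_per_year = 500_000          # ~500k malaria deaths per year
cost_per_death_averted = 5_000     # ~$5000 per death prevented by bednets

# Bednet-equivalent value of a one-year speedup in vaccine rollout:
breakeven_cost = deaths_per_year * cost_per_death_averted
print(f"${breakeven_cost / 1e9:.1f}B per year of speedup")  # $2.5B
```

So a speedup intervention beats bednets on these numbers iff it buys a year of acceleration for less than ~$2.5B.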
2
richard_ngo
@Linch, see the article I linked above, which identifies a bunch of specific bottlenecks where lobbying and/or targeted funding could have been really useful. I didn't know about these when I wrote my comment above, but I claim prediction points for having a high-level heuristic that led to the right conclusion anyway.
4
Linch
Do you want to discuss this in a higher-bandwidth channel at some point? Eg next time we're in an EA social or something, have an organized chat with a moderator and access to a shared monitor? I feel like we're not engaging with each other's arguments as much in this setting, but we can maybe clarify things better in a higher-bandwidth setting.   (No worries if you don't want to do it; it's not like global health is either of our day jobs)
7
Hauke Hillebrandt
Global development EAs were very much looking into vaccines around 2015, and both then and now it seemed that the malaria vaccine is just not crazy cost-effective, because you have to administer it more than once and it's not 100% effective. See "Public health impact and cost-effectiveness of the RTS,S/AS01 malaria vaccine: a systematic comparison of predictions from four mathematical models" and "Modelling the relative cost-effectiveness of the RTS,S/AS01 malaria vaccine compared to investment in vector control or chemoprophylaxis".
5
richard_ngo
An article on why we didn't get a vaccine sooner: https://worksinprogress.co/issue/why-we-didnt-get-a-malaria-vaccine-sooner This seems like significant evidence for the tractability of speeding things up. E.g. a single (unjustified) decision by the WHO in 2015 delayed the vaccine by almost a decade, four years of which were spent in fundraising. It seems very plausible that even 2015 EA could have sped things up by multiple years in expectation either lobbying against the original decision, or funding the follow-up trial.
5
MichaelStJules
Retracted my last comment, since as joshcmorrison pointed out, the vaccines aren't mRNA-based. Still, "Total malaria R&D investment from 2007 to 2018 was over $7 billion, according to data from Policy Cures Research in the report. Of that total, about $1.8 billion went to vaccine R&D." https://www.devex.com/news/just-over-600m-a-year-goes-to-malaria-r-d-can-covid-19-change-that-98708/amp
5
Jason
Moreover, I think there are structural reasons for relatively more of that funding to come from, e.g., Gates than from at least early-stage EA. Although COVID is an exception, vaccine work has traditionally taken many years. I think it is more likely that we'd see the right people approaching this work in an optimal manner if they were offered stable, multi-year funding. And I'm not sure whether at least early "EA" was in a position to offer that kind of funding on a basis that would seem reliable.  So it's plausible to me that vaccine and similar funding was the highest EV option on the table in theory, and that it nevertheless made sense for EA to focus on bednet distribution and other efforts better suited to the funding flows it could guarantee.
4[anonymous]
I'm sympathetic to this. I also think it is interesting to look at how countries that eradicated malaria did so, and it wasn't with bednets, it was through draining swamps etc.  (fwiw, I don't think that criticism applies to EA work on climate change. Johannes Ackva is focused on policy change to encourage neglected low carbon technologies.)
3
MichaelStJules
The new malaria vaccines are mRNA vaccines, and mRNA vaccines were largely developed in response to COVID. I think billions were spent on mRNA R&D. That could have been too expensive for Open Phil, and they might not have been able to foresee the promise of mRNA in particular, to invest so much specifically in it without wasting substantially on other vaccine R&D.

Open Phil has been funding R&D on malaria for some time, including gene drives, but not much on vaccines until recently. https://www.openphilanthropy.org/grants/?q=malaria&focus-area[]=scientific-research

EDIT: By the US government alone, $337 million was invested in mRNA R&D pre-pandemic over decades (and the authors found $5.9 billion in indirect grants); after the pandemic started, "$2.2bn (7%) supported clinical trials, and $108m (<1%) supported manufacturing plus basic and translational science". https://www.bmj.com/content/380/bmj-2022-073747

Moderna also spent over a billion on R&D, and their focus is mRNA (there may be some or substantial overlap with the US funding). Pfizer and BioNTech also developed mRNA COVID vaccines together.

Maybe I'm misunderstanding your point, but the two malaria vaccines that were recently approved (RTS,S and R21/Matrix-M) are not mRNA vaccines. They're both protein-based.

4
MichaelStJules
Oh, you're right. My bad.
3
richard_ngo
That's very useful info, ty. Though I don't think it substantively changes my conclusion, because:

  1. Government funding tends to go towards more legible projects (like R&D). I expect that there are a bunch of useful things in this space where there are more funding gaps (e.g. lobbying for rapid vaccine rollouts).
  2. EA has sizeable funding, but an even greater advantage in directing talent, which I think would have been our main source of impact.
  3. There were probably a bunch of other possible technological approaches to addressing malaria that were more speculative and less well-funded than mRNA vaccines. Ex ante, it was probably a failure not to push harder towards them, rather than focusing on less scalable approaches which could never realistically have solved the full problem.

To be clear, I think it's very commendable that OpenPhil has been funding gene drive work for a long time. I'm sad about the gap between "OpenPhil sends a few grants in that direction" and "this is a central example of what the EA community focuses on" (as bednets have been); but that shouldn't diminish the fact that even the former is a great thing to have happened.
2
Linch
There's a version of your argument that I agree with, but I'm not sure you endorse, which is something like the following. To be concrete, things I can imagine a more monomaniacal version of global health EA might emphasize (note that some of them are mutually exclusive, and others might be seen as bad, even under the monomaniacal lens, after more research):

  • Targeting a substantially faster EA growth rate than in our timeline.
  • Potentially have a tiered system of outreach where the cultural onboarding in EA is in play for a more elite/more philosophically minded subset, but the majority of people just hear the "end malaria by any means possible" message.
  • Lobbying the US and other gov'ts to a) increase foreign aid and b) increase aid effectiveness, particularly focused on antimalarial interventions.
    • (If politically feasible, which it probably isn't) potentially advocate that foreign aid must be tied to independently verified progress on malaria eradication.
  • Advocate more strongly, and earlier on, for people to volunteer in antimalarial human challenge trials.
  • Careful, concrete, and detailed CBEs (measuring the environmental and other costs to human life against malarial load) on when and where DDT usage is net positive.
    • (If relevant) lobbying in developing countries with high malarial loads to use DDT for malaria control.
  • Attempting to identify and fund DDT analogues that pass the CBE for countries with high malarial (or other insect-borne) disease load, even while the environmental consequences are pretty high (e.g. way too high to be worth the CBE for America).
  • (If relevant) lobbying countries to try gene drives at an earlier point than most conservative experts would recommend, maybe starting with island countries.
  • Write academic position papers on why the current vaccine approval system for malaria vaccines is too conservative.
    • Be very willing to do side-channel persuasion to emphasize that point.
  • Write aggressive, detailed, and widely
2
richard_ngo
Hmm, your comment doesn't really resonate with me. I don't think it's really about being monomaniacal. I think the (in hindsight) correct thought process here would be something like: "Over the next 20 or 50 years, it's very likely that the biggest lever in the space of malaria will be some kind of technological breakthrough. Therefore we should prioritize investigating the hypothesis that there's some way of speeding up this biggest lever."

I don't think you need this "move heaven and earth" philosophy to do that reasoning; I don't think you need to focus on EA growth much more than we did. The mental step could be as simple as "Huh, bednets seem kinda incremental. Is there anything that's much more ambitious?" (To be clear, I think this is a really hard mental step, but one that I would expect from an explicitly highly-scope-sensitive movement like EA.)
2
Linch
Yeah so basically I contest that this alone will actually have higher EV in the malaria case; apologies if my comment wasn't clear enough.

I think part of my disagreement is that I'm not sure what counts as "incremental." Bednets are an intervention that, broadly speaking, can solve ~half the malaria problem forever at ~$20-40 billion, with substantial co-benefits. And attempts at "non-incremental" malaria solutions have already cost mid-to-high single-digit billions. So it's not like the ratios are massively off. Importantly, "non-incremental" solutions like vaccines likely still require fairly expensive development, distribution, and ongoing maintenance. So small mistakes might be there, but I don't see enough room left for us to be making large mistakes in this space.

That's what I mean by "not enough zeroes."

To be clear, my argument is not insensitive to numbers. If the incremental solutions to a problem have a price tag of >$1T (e.g. global poverty, or aging-related deaths), and non-incremental solutions have had a total price tag of <$1B, then I'm much more sympathetic to "the EV of trying to identify more scalable interventions is likely higher than that of incremental solutions now, even without looking at details"-style arguments.

2
richard_ngo
Ah, I see. I think the two arguments I'd give here:

  1. Founding 1DaySooner for malaria 5-10 years earlier is high-EV and plausibly very cheap; and there are probably another half-dozen things in this reference class.
  2. We'd need to know much more about the specific interventions in that reference class to confidently judge that we made a mistake. But IMO if everyone in 2015-EA had explicitly agreed "vaccines will plausibly dramatically slash malaria rates within 10 years", then I do think we'd have done much more work to evaluate that reference class. Not having done that work can be an ex-ante mistake even if it turns out it wasn't an ex-post mistake.

I recently had a very interesting conversation about master morality and slave morality, inspired by the recent AstralCodexTen posts.

The position I eventually landed on was:

  1. Empirically, it seems like the world is not improved the most by people whose primary motivation is helping others, but rather by people whose primary motivation is achieving something amazing. If this is true, that's a strong argument against slave morality.
  2. The defensibility of morality as the pursuit of greatness depends on how sophisticated our cultural conceptions of greatness are. Unfortunately we may be in a vicious spiral where we're too entrenched in slave morality to admire great people, which makes it harder to become great, which gives us fewer people to admire, which... By contrast, I picture past generations as being in a constant aspirational dialogue about what counts as greatness—e.g. defining concepts like honor, Aristotelean magnanimity ("greatness of soul"), etc.
  3. I think of master morality as a variant of virtue ethics which is particularly well-adapted to domains which have heavy positive tails—entrepreneurship, for example. However, in domains which have heavy negative tails, the pursuit of g
... (read more)

Empirically, it seems like the world is not improved the most by people whose primary motivation is helping others, but rather by people whose primary motivation is achieving something amazing. If this is true, that's a strong argument against slave morality.

This seems very wrong to me on a historical basis. When I think of the individuals who have done the most good for the world, I think of people who made medical advances like the smallpox vaccine, scientists who discovered new technologies like electricity, and social movements like abolitionism that defeated a great and widespread harm. These people might want to "achieve something amazing", but they also have communitarian goals: to spread knowledge, help people, or avert widespread suffering.

Also, it's super weird to take the Nietzschean master and slave morality framework at face value. It does not seem to be an accurate representation of the morality systems of people today.

4
slg
One crux here might be what improved lives the most over the last three hundred years. If you think economic growth has been the main driver of (human) well-being, then the mindset of people driving that growth is what the original post might have been hinting at. And I do agree with Richard that many of those people had something closer to master morality in their mind.
2
NickLaing
I agree. Those whose motivation was to achieve something amazing include people like Hitler, Mao, Stalin, and the Manhattan Project peeps. I love your examples, titotal, and would add great statesmen who improved the world as well, like Gandhi and Mandela. What are the examples of people who were motivated primarily by doing something amazing and changed the world hugely for the better?
5
Daniel Birnbaum
Very interesting points. Here are a few other things to think about:

  1. I think there are very few people whose primary motivation is helping others, so we shouldn't empirically expect them to be doing the most good, because they represent a very small portion of the population. This is especially true if you think (which I do) that the vast majority of people who do good are 1) (consciously or unconsciously) signaling for social status or 2) not doing good very effectively (the people who are are a much smaller subgroup, because doing non-effective good is easy). It would be very surprising, however, if those who try to do good effectively weren't doing much better, as individuals, on average, than those who don't (though feel free to throw some stats at me that will change my mind!).
  2. I'm very skeptical that "the defensibility of morality as the pursuit of greatness depends on how sophisticated our cultural conceptions of greatness are." Could you give more reasons why you think this?
  3. I'm skeptical that 1) searching for equanimity is truly the best thing and 2) that we have good and tractable methods of achieving it. Perhaps people would be better off being more Buddhist on the margin, but, to me, it seems like (thoughtfully!) pursuing the heavy positive tail-end results, while being really careful and thoughtful about the negatives, leads to a much better-off society.

Let me know what you think!

I'm leaning towards the view that "don't follow your passion" and "try do really high-leverage intellectual work" are both good pieces of advice in isolation, but that they work badly in combination. I suspect that there are very few people doing world-class research who aren't deeply passionate about it, and also that EA needs world-class research in more fields than it may often seem.

8
richard_ngo
Another related thing that isn't discussed enough is the immense difficulty of actually doing good research, especially in a pre-paradigmatic field. I've personally struggled to transition from engineer mindset, where you're just trying to build a thing that works (and you'll know when it does), to scientist mindset, where you need to understand the complex ways in which many different variables affect your results. This isn't to say that only geniuses make important advances, though - hard work and persistence go a long way. As a corollary, if you're in a field where hard work doesn't feel like work, then you have a huge advantage. And it's also good for building a healthy EA community if even people who don't manage to have a big impact are still excited about their careers. So that's why I personally place a fairly high emphasis on passion when giving career advice (unless I'm talking to someone with exceptional focus and determination).
9
richard_ngo
Then there's the question of how many fields it's actually important to have good research in. Broadly speaking, my perspective is: we care about the future; the future is going to be influenced by a lot of components; and so it's important to understand as many of those components as we can. Do we need longtermist sociologists? Hell yes! Then we can better understand how value drift might happen, and what to do about it. Longtermist historians to figure out how power structures will work, longtermist artists to inspire people - as many as we can get. Longtermist physicists - Anders can't figure out how to colonise the galaxy by himself. If you're excited about something that poses a more concrete existential risk, then I'd still advise that as a priority. But my guess is that there's also a lot of low-hanging fruit for would-be futurists in other disciplines.

What is the strongest argument, or the best existing analysis, that Givewell top charities actually do more good per dollar than good mainstream charities focusing on big-picture issues (e.g. a typical climate change charity, or the US Democratic party)?

If the answer is "no compelling case has been made", then does the typical person who hears about and donates to Givewell top charities via EA understand that?

If the case hasn't been made [edit: by which I mean, if the arguments that have been made are not compelling enough to justify the claims being made], and most donors don't understand that, then the way EAs talk about those charities is actively misleading, and we should apologise and try hard to fix that.

I think the strongest high-level argument for Givewell charities vs. most developed-world charity is the 100x multiplier.

That's a strong reason to suspect the best opportunities to improve the lives of current humanity lie in the developing world, but not decisive, and so usually analyses have been done, particularly of 'fan-favourite' causes like the ones you mention. 

I'd also note that both the examples you gave are not what I would consider 'mainstream charity'; both have prima facie plausible paths for high leverage (even if 100x feels a stretch), and if I had to guess right now, my gut instinct is that both are in the top 25% for effectiveness. 'Mainstream charity' in my mind looks more like 'your local church', 'the arts', or 'your local homeless shelter'. Here's some quantified insight into what people in the UK actually give to.

At any rate, climate-change has had a few of these analyses over the years, off the top of my head here's a recent one on the forum looking at the area in general, there's also an old and more specific analysis of Cool Earth by GWWC, which after running through a bunch of numbers concludes:

Even with the most generous assumptions possible, this is s

... (read more)
9
richard_ngo
Hey Alex, thanks for the response! To clarify, I didn't mean to ask whether no case has been made, or imply that they've "never been looked at", but rather to ask whether a compelling case has been made, which I interpret as arguments strong enough to justify the claims made about Givewell charities, as understood by the donors influenced by EA.

I think that the 100x multiplier is a powerful intuition, but that there's a similarly powerful intuition going the other way: that wealthy countries are many times more influential than developing countries (e.g. as measured in technological progress), which is reason to think that interventions in wealthy countries can do comparable amounts of good overall.

On the specific links you gave: the one on climate change (Global development interventions are generally more effective than climate change interventions) starts as follows: I haven't read the full thing, but based on this, it seems like there's still a lot of uncertainty about the overall conclusion reached, even when the model is focused on direct quantifiable effects, rather than broader effects like movement-building, etc. Meanwhile the 80k article says that "when political campaigns are the best use of someone's charitable giving is beyond the scope of this article".

I appreciate that there's more work on these questions which might make the case much more strongly. But given that Givewell is moving over $100M a year from a wide range of people, and that one of the most common criticisms EA receives is that it doesn't account enough for systemic change, my overall expectation is still that EA's case against donating to mainstream systemic-change interventions is not strong enough to justify the set of claims that people understand us to be making.

I suspect that our disagreement might be less about what research exists, and more about what standard to apply for justification. Some reasons I think that we should have a pretty high threshold for thinki
9
AGB 🔸
I'm not quite sure what you're trying to get at here. In some trivial sense we can see that many people were compelled, hence I didn't bother to distinguish between 'case' and 'compelling case'. I wonder whether by 'compelling case' you really mean 'case I would find convincing'? In which case, I don't know whether that case was ever made. I'd be happy to chat more offline and try to compel you :)

I don't think this intuition is similarly powerful at all, but more importantly I don't think it 'goes the other way', or perhaps I don't understand what you mean by that phrase. Concretely, if we treat GDP-per-capita as a proxy for influentialness-per-person (not perfect, but seems like the right ballpark), and how much we can influence people with $x also scales linearly with GDP-per-capita (i.e. it takes Y months' wages to influence people Z amount), that would suggest that interventions aimed at influencing worldwide events have comparable impact anywhere, rather than actively favouring developed countries by anything like the 100x margin.

I agree. I think the appropriate standard is basically the 'do you buy your own bullshit' standard. I.e. if I am donating to Givewell charities over climate change (CC) charities, that very likely reveals that I truly think those opportunities are better all things considered, not just better according to some narrow criteria. At that point, I could be just plain wrong in expressing that opinion to others, but I'm not being dishonest. By contrast, if I give to CC charities over Givewell charities, I largely don't think I should evangelise on behalf of Givewell charities, regardless of whether they score better on some specific criteria, unless I am very confident that the person I am talking to cares about those specific criteria (even then I'd want to add 'I don't support this personally' caveats). My impression is that EA broadly meets this standard, and I would be disappointed to hear of a case where an individual or group had

After chatting with Alex Gordon-Brown, I updated significantly towards his position, which I've attempted to summarise below. Many thanks to him for taking the time to talk; I've done my best to accurately represent the conversation, but there may be mistakes. All of the following are conditional on focusing on near-term, human-centric charities.

Three key things I changed my mind on:

  1. I had mentally characterised EA as starting with Givewell-style reasoning, and then moving on to less quantifiable things; whereas Alex (who was around at the time) pointed out that there were originally significant disagreements between EAs and Givewell, in particular with EAs arguing for less quantifiable approaches. EA and Givewell then ended up converging more over time, both as EAs found that it was surprisingly hard to beat Givewell charities even allowing for less rigorous analysis, and also as people at Givewell (e.g. the ones now running OpenPhil) became more convinced by less-quantifiable EA methodologies.
    1. Insofar as the wider world has the impression of EA as synonymous with Givewell-style reasoning, a lot of that comes from media reports focusing on it in ways we weren't responsible for.
    2. Alex
... (read more)
9
AGB 🔸
Thanks for the write-up. A few quick additional thoughts on my end:

* You note that OpenPhil still expect their hits-based portfolio to moderately outperform Givewell in expectation. This is my understanding also, but one slight difference of interpretation is that it leaves me very baseline skeptical that most 'systemic change' charities people suggest would also outperform, given the amount of time Open Phil has put into this question relative to the average donor.
* I think it's possible-to-likely I'm mirroring your 'overestimating how representative my bubble was' mistake, despite having explicitly flagged this type of error before because it's so common. In particular, many (most?) EAs first encounter the community at university, whereas my first encounter was after university, and it wouldn't shock me if student groups were making more strident/overconfident claims than I remember in my own circles. On reflection I now have anecdotal evidence of this from 3 different groups.
* Abstaining on the 'what is the best near-term human-centric charity' question, and focusing on talking about the things that actually appear to you to be among the best options, is a response I strongly support. I really wish more longtermists took this approach, and I also wish EAs in general would use 'we' less and 'I' more when talking about what they think about optimal opportunities to do good.
2
richard_ngo
I have now read OpenPhil's sample of the back-of-the-envelope calculations on which their conclusion that it's hard to beat GiveWell was based. They were much rougher than I expected. Most of them are literally just an estimate of the direct benefits and costs, with no accounting for second-order benefits or harms, movement-building effects, political effects, etc. For example, the harm of a year of jail time is calculated as 0.5 QALYs plus the financial cost to the government - nothing about long-term effects of spending time in jail, or effects on subsequent crime rates, or community effects. I'm not saying that OpenPhil should have included these effects - they are clear that these are only intended as very rough estimates - but it means that I now don't think it's justified to treat this blog post as strong evidence in favour of GiveWell.

Here's just a basic (low-confidence) case for the cost-efficacy of political advocacy: governmental policies can have enormous effects, even when they attract little mainstream attention (e.g. PEPFAR). But actually campaigning for a specific policy is often only the last step in a long chain of getting the cause into the Overton Window, building a movement, nurturing relationships with politicians, identifying tractable targets, and so on, all of which are very hard to measure, and which wouldn't show up at all in these calculations by OpenPhil. Given this, what evidence is there that funding these steps wouldn't outperform GiveWell for many policies? (See also Scott Alexander's rough calculations on the effects of FDA regulations, which I'm not very confident in, but which have always stuck in my head as an argument that dull-sounding policies might have wildly large impacts.)

Your other points make sense, although I'm now worried that abstaining about near-term human-centric charities will count as implicit endorsement. I don't know very much about quantitatively analysing interventions though, so it's plausible that
2
AGB 🔸
I think we’re still talking past each other here. You seem to be implicitly focusing on the question ‘how certain are we these will turn out to be best’. I’m focusing on the question ‘Denise and I are likely to make a donation to near-term human-centric causes in the next few months; is there something I should be donating to above Givewell charities?’. Listing unaccounted-for second-order effects is relevant for the first, but not decision-relevant until the effects are predictable-in-direction and large; it needs to actually impact my EV meaningfully. Currently, I’m not seeing a clear argument for that. ‘Might have wildly large impacts’, ‘very rough estimates’, ‘policy can have enormous effects’... these are all phrases that increase uncertainty rather than concretely change EVs, and so are decision-irrelevant. (That’s not quite true; we should penalise rough things’ calculated EV more in high-uncertainty environments due to winners’ curse effects, but that’s secondary to my main point here.)

Another way of putting it is that this is the difference between one’s confidence level that what you currently think is best will still be what you think is best 20 years from now, versus trying to identify the best all-things-considered donation opportunity right now with one’s limited information.

So concretely, I think it’s very likely that in 20 years I’ll think one of the >20 alternatives I’ve briefly considered will look like it was a better use of my money than Givewell charities, due to the uncertainty you’re highlighting. But I don’t know which one, and I don’t expect it to outperform 20x, so picking one essentially at random still looks pretty bad. A non-random way to pick would be if Open Phil, or someone else I respect, shifted their equivalent donation bucket to some alternative. AFAIK, this hasn’t happened. That’s the relevance of those decisions to me, rather than any belief that they’ve done a secret Uber-Analysis.
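The winners'-curse point in the parenthetical above can be illustrated with a small simulation (a rough sketch with made-up numbers, not anything computed by anyone in this thread): among many options whose true values are estimated with noise, the option with the best estimate tends to have an estimate that overstates its true value, and the overstatement grows as the estimates get noisier.

```python
import random

random.seed(0)

def winners_curse_gap(n_options=20, noise_sd=1.0, trials=2000):
    """Average (estimated value - true value) of the option with the best estimate.

    True values are drawn from a standard normal; each estimate adds
    independent normal noise with standard deviation `noise_sd`.
    """
    gap = 0.0
    for _ in range(trials):
        true_vals = [random.gauss(0, 1) for _ in range(n_options)]
        estimates = [v + random.gauss(0, noise_sd) for v in true_vals]
        # Pick the option that *looks* best according to its noisy estimate.
        best = max(range(n_options), key=lambda i: estimates[i])
        gap += estimates[best] - true_vals[best]
    return gap / trials

low_noise = winners_curse_gap(noise_sd=0.5)   # rigorous, low-noise estimates
high_noise = winners_curse_gap(noise_sd=2.0)  # rough, high-noise estimates
print(f"selection bias at sd=0.5: {low_noise:.2f}; at sd=2.0: {high_noise:.2f}")
```

On these assumptions the bias is positive in both cases and larger in the noisy case, which is the sense in which a rough estimate that comes out on top deserves a bigger EV penalty than a rigorous one with the same face value.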
2
richard_ngo
Hmm, I agree that we're talking past each other. I don't intend to focus on ex post evaluations over ex ante evaluations. What I intend to focus on is the question: "when an EA makes the claim that GiveWell charities are the charities with the strongest case for impact in near-term human-centric terms, how justified are they?" Or, relatedly, "How likely is it that somebody who is motivated to find the best near-term human-centric charities possible, but takes a very different approach than EA does (in particular by focusing much more on hard-to-measure political effects), will do better than EA?"

In my previous comment, I used a lot of phrases which you took to indicate the high uncertainty of political interventions. My main point was that it's plausible that a bunch of them exist which will wildly outperform GiveWell charities. I agree I don't know which one, and you don't know which one, and GiveWell doesn't know which one. But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments publicly, in a way that we could learn from if we were more open to less quantitative analysis? (Alternatively, could someone know if they tried? But let's go with the former for now.)

In other words, consider two possible worlds. In one world GiveWell charities are in fact the most cost-effective, and all the people doing political advocacy are less cost-effective than GiveWell ex ante (given publicly available information). In the other world there's a bunch of people doing political advocacy work which EA hasn't supported even though they have strong, well-justified arguments that their work is very impactful (more impactful than GiveWell's top charities), because that impact is hard to quantitatively estimate. What evidence do we have that we're not in the second world? In both worlds GiveWell would be saying roughly the same thing (because they have a high bar for rigour). Would OpenPhil
4
AGB 🔸
I agree with this. I think the best way to settle this question is to link to actual examples of someone making such arguments. Personally, my observation from engaging with non-EA advocates of political advocacy is that they don't actually make a case; when I cash out people's claims it usually turns out they are asserting 10x - 100x multipliers, not 100x - 1000x multipliers, let alone higher than that. It appears the divergence in our bottom lines is coming from my cosmopolitan values and low tolerance for act/omission distinctions, and hopefully we at least agree that if even the entrenched advocate doesn't actually think their cause is best under my values, I should just move on.

As an aside, I know you wrote recently that you think more work is being done by EA's empirical claims than moral claims. I think this is credible for longtermism but mostly false for Global Health/Poverty. People appear to agree they can save lives in the developing world incredibly cheaply, in fact usually giving lower numbers than I think are possible. We aren't actually that far apart on the empirical state of affairs. They just don't want to. They aren't refusing to because they have even better things to do, because most people do very little. Or as Rob put it:

I think that last observation would also be my answer to 'what evidence do we have that we aren't in the second world?' Empirically, most people don't care, and most people who do care are not trying to optimise for the thing I am optimising for (in many cases it's debatable whether they are trying to optimise at all). So it would be surprising if they hit the target anyway, in much the same way it would be surprising if AMF were the best way to improve animal welfare.
1
Neel Nanda
Thanks for writing this up! I've found this thread super interesting to follow, and it's shifted my view on a few important points. One lingering thing that seems super important is longtermism vs prioritising currently existing people. It still seems to me that GiveWell charities aren't great from a longtermist perspective, but that the vast majority of people are not longtermists. Which creates a weird tension when doing outreach, since I rarely want to begin by trying to pitch longtermism, but it seems disingenuous to pitch GiveWell charities. Given that many EAs are not longtermist, though, this seems overall fine for the "is the movement massively misleading people" question.
4
richard_ngo
I don't think that the moral differences between longtermists and most people in similar circles (e.g. WEIRD) are that relevant, actually. You don't need to be a longtermist to care about massive technological change happening over the next century. So I think it's straightforward to say things like "We should try to have a large-scale moral impact. One very relevant large-scale harm is humans going extinct; so we should work on things which prevent it". This is what I plan to use as a default pitch for EA from now on.
6
abergal
Thank you for writing this post-- I have the same intuition as you about this being very misleading and found this thread really helpful.
5
richard_ngo
Here's Rob Wiblin: From my perspective at least, this seems like political spin. If advocacy for anti-malarial bednets was mainly intended as a way to "cut our teeth", rather than a set of literal claims about how to do the most good, then EA has been systematically misleading people for years. Nor does it seem to me that we're actually in a significantly better position to evaluate approaches to systemic change now, except insofar as we've attracted more people. But if those people were attracted because of our misleading claims, then this is not a defence.

Hi Richard, I just wanted to say that I appreciate you asking these questions! Based on the number of upvotes you have received, other people might be wondering the same, and it's always useful to further propagate the kind of knowledge Alex has written up.

I would have appreciated it even more if you had not directly jumped to accusing EA of being misleading (without any references) before waiting for any answers to your question.

4
richard_ngo
This seems reasonable. On the other hand, it's hard to give references to a broad pattern of discourse. Maybe the key contention I'm making here is that "doing the most good per dollar" and "doing the most good that can be verified using a certain class of methodologies" are very different claims. And the more different that class of methodologies is from most people's intuitive conception of how to evaluate things, the more important it is to clarify that point.
4
richard_ngo
Or, to be more concrete, I believe (with relatively low confidence, though) that:

* Most of the people whose donations have been influenced by EA would, if they were trying to donate to do as much good as possible without any knowledge of EA, give money to mainstream systemic change (e.g. political activism, climate change charities).
* Most of those people believe that there's a consensus within EA that donations to Givewell's top charities do more good than these systemic change donations, to a greater degree than there actually is.
* Most of those people would then be surprised to learn how little analysis EA has done on this question, e.g. they'd be surprised at how limited the scope of charities Givewell considers actually is.
* A significant part of these confusions is due to EA simplifying its message in order to attract more people - for example, by claiming to have identified the charities that "do the most good per dollar", or by comparing our top charities to typical mainstream charities instead of the mainstream charities that people in EA's target audience previously believed did the most good per dollar (before hearing about EA).
4
AGB 🔸
Related to my other comment, but what would you guess is the split of donations from EAs to Givewell's top charities versus 'these systemic change donations'? I ask because if it's highly skewed, I would be strongly against pretending that we're highly conflicted on this question while the reality of where we give says something very different; this question of how to represent ourselves accurately cuts both ways, and it is very tempting to try and be 'all things to all people'. All things considered, the limited data I have combined with anecdata from a large number of EAs suggests to me that it is in fact highly skewed.

I think this is backwards. The 'systemic change' objection, broadly defined, is by far the most common criticism of EA. Correspondingly, I think the movement would be much larger were it better-disposed to such interventions, largely neutralising this complaint and so appealing to a (much?) wider group of people.
4
AGB 🔸
You may also be interested in this piece from Open Phil: https://www.openphilanthropy.org/blog/givewells-top-charities-are-increasingly-hard-beat

Disproportionately many of the most agentic and entrepreneurial young EAs I know are community-builders. I think this is because a) EA community-building currently seems neglected compared to other cause areas, but b) there's currently no standard community-building career pathway, so to work on it they had to invent their own jobs.

Hopefully the people I'm talking about changing the latter will lead to the resolution of the former.

There's an old EA forum post called Effective Altruism is a question (not an ideology) by Helen Toner, which I think has been pretty influential.*

But I was recently thinking about how the post rings false for me personally. I know that many people in EA are strongly motivated by the idea of doing the most good. But I was personally first attracted to an underlying worldview composed of stories about humanity's origins, the rapid progress we've made, the potential for the world to be much better, and the power of individuals to contribute to that; from there, given potentially astronomical stakes, altruism is a natural corollary.

I think that leaders in EA organisations are more likely to belong to the former category, of people inspired by EA as a question. But as I discussed in this post, there can be a tradeoff between interest in EA itself versus interest in the things EA deems important. Personally I prioritise making others care about the worldview more than making them care about the question: caring about the question pushes you to do the right thing in the abstract, but caring about the worldview seems better at pushing you towards its most productive frontiers. This seems a... (read more)

4
JP Addison🔸
See also: Effective Altruism is an Ideology not (just) a Question. Not endorsed by me, personally. I wouldn't call someone "not EA-aligned" if they disagreed about all of the worldview claims you made, but really cared about understanding whether someone is genuinely trying to answer the Question.

In the same way that covid was a huge opportunity to highlight biorisk, the current Ukraine situation may be a huge opportunity to highlight nuclear risks and possible solutions to them. What would it look like for this to work really well?

The concept of cluelessness seems like it's pointing at something interesting (radical uncertainty about the future) but has largely been derailed by being interpreted in the context of formal epistemology. Whether or not we can technically "take the expected value" even under radical uncertainty is both a confused question (human cognition doesn't fit any of these formalisms!) and much less interesting than the question of how to escape from radical uncertainty. In order to address the latter, I'd love to see more work that starts from Bostrom's framing in terms of crucial considerations.

One use case of the EA forum which we may not be focusing on enough:

There are some very influential people who are aware of and somewhat interested in EA. Suppose one of those people checks in on the EA forum every couple of months. Would they be able to find content which is interesting, relevant, and causes them to have a higher opinion of EA? Or if not, what other mechanisms might promote the best EA content to their attention?

The "Forum Favourites" partly plays this role, I guess. Although because it's forum regulars who are most likely to highly upvote posts, I wonder whether there's some divergence between what's most valuable for them and what's most valuable for infrequent browsers.

2
aogara
“...whether there's some divergence between what's most valuable for them and what's most valuable for infrequent browsers.” I’d strongly guess that this is the case. Maybe Community posts should be removed from Forum favorites?
7
Aaron Gertler 🔸
By default, Community posts don't show up in Forum Favorites, or on the Frontpage at all. You have to check a box to show them. My recommendation for people interested in EA is to read the EA Newsletter, which filters more heavily than the Forum. effectivealtruism.org ranks first in Google for EA, and has a bunch of different newsletter signup boxes. As for the Forum, this is part of why the Motivation Series exists (and will soon be linked to from the homepage). As for more up-to-date content, I'd think that the average high-karma Frontpage post probably does a reasonable job of representing what people in EA are working on. But I'd be interested to hear others' thoughts on what the Forum could change to better meet this use case!

There was a lot of discussion in the early days of EA about replacement effects in jobs, and also about giving now vs giving later (for a taste of how much, see my list here, and Julia Wise's disjoint list here).

The latter debate is still fairly prominent now. But I think that arguments about replacement effects became largely redundant when we started considering the value of becoming excellent in high-leverage domains like altruistically-focused research (for which the number of jobs isn't fixed like it is in, say, medicine).

One claim that I haven't seen... (read more)

7
Benjamin_Todd
I think that's a good point, though I've heard it discussed a fair amount. One way of thinking about it is that 'direct work' also has movement building benefits. This makes the ideal fraction of direct work in the portfolio higher than it first seems.
2
richard_ngo
Cool, good to know. Any pointers to places where people have made this argument at more length?
2
Benjamin_Todd
I'm not sure. Unfortunately there's a lot of things like this that aren't yet written up. There might be some discussion of the movement building value of direct work in our podcast with Phil Trammell.
2
richard_ngo
I see. Yeah, Phil and Rob do discuss it, but focused on movement-building via fundraising/recruitment/advocacy/etc, rather than via publicly doing amazing direct work. Perhaps they were implicitly thinking about the latter as well, though. But I suspect the choice of examples shapes people's impression of the argument pretty significantly. E.g. when it comes to your individual career, you'll think of "investing in yourself" very differently if the central examples are attending training programs and going to university, versus if the central example is trying to do more excellent and eye-catching work.
2
Benjamin_Todd
Agree. I've definitely heard the other point though - it's a common concern with 80k among donors (e.g. maybe 'concrete problems in AI safety' does far more to get people into the field than an explicit movement building org ever would). Not sure where to find a write up!
1
Hawk.Yang 🔸
.
2
richard_ngo
I think most of the 80,000 Hours priority career paths qualify, as well as work on the other problem areas which seem important to them.