Buck

CEO @ Redwood Research
6220 karma · Joined Sep 2014 · Working (6–15 years) · Berkeley, CA, USA
307 comments

I think that one reason this isn’t done is that the people who have the best access to such metrics might not think it’s actually that important to disseminate them to the broader EA community, rather than just sharing them as necessary with the people for whom these facts are most obviously action-relevant.

I think you're right that my original comment was rude; I apologize. I edited my comment a bit.

I didn't mean to say that the global poverty EAs aren't interested in detailed thinking about how to do good; they definitely are, as demonstrated e.g. by GiveWell's meticulous reasoning. I've edited my comment to make it sound less like I'm saying that the global poverty EAs are dumb or uninterested in thinking.

But I do stand by the claim that you'll understand EA better if you think of "promote AMF" and "try to reduce AI x-risk" as results of two fairly different reasoning processes, rather than as results of the same reasoning process. Like, if you ask someone why they're promoting AMF rather than e.g. insect suffering prevention, the answer usually isn't "I thought really hard about insect suffering and decided that the math doesn't work out", it's "I decided to (at least substantially) reject the reasoning process which leads to seriously considering prioritizing insect suffering over bednets".

(Another example of this is the "curse of cryonics".)

Buck · 4mo
I don't think it makes sense to think of EA as a monolith which both promoted bednets and is enthusiastic about engaging with the kind of reasoning you're advocating here. My oversimplified model of the situation is more like:

  • Some EAs don't feel very persuaded by this kind of reasoning, and end up donating to global development stuff like bednets.
  • Some EAs are moved by this kind of reasoning, and decide not to engage with global development because this kind of reasoning suggests higher impact alternatives. They don't really spend much time thinking about how to best address global development, because they're doing things they think are more important.

(I think the EAs in the latter category have their own failure modes and wouldn't obviously have gotten the malaria thing right (assuming you're right that a mistake was made) if they had really tried to get it right, tbc.)

Buck · 4mo
Note that L was the only example in your list which was specifically related to EA. I believe that that accusation was false. See here for previous discussion.

I believe that these accusations are false. See here for previous discussion.

Can you give some examples of other strategies you think seem better?

Buck · 6mo
I think it was unhelpful to refer to “Harry Potter fanfiction” here instead of perhaps “a piece of fiction”—I don’t think it’s actually more implausible that a fanfic would be valuable to read than some other kind of fiction, and your comment ended up seeming to me like it was trying to use the dishonest rhetorical strategy of implying without argument that the work is less likely to be valuable to read because it’s a fanfic.

I found Ezra's grumpy complaints about EA amusing and useful. Maybe 80K should arrange to have more of their guests' children get sick the day before they tape the interviews.

I agree that we should tolerate people who are less well read than GPT-4 :P

For what it’s worth, gpt4 knows what rat means in this context: https://chat.openai.com/share/bc612fec-eeb8-455e-8893-aa91cc317f7d
