(Poster's note: Given the subject matter, I am posting an additional copy here on the EA Forum. The theoretically canonical copy of this post is on my Substack, and I also post to Wordpress and LessWrong.)
Recently on Twitter, in response to seeing a contest announcement asking for criticism of EA, I offered some criticism of that contest’s announcement.

That sparked a bunch of discussion about central concepts in Effective Altruism. Those discussions ended up including Dustin Moskovitz, who showed an excellent willingness to engage and make clear how his models worked. The whole thing seems valuable enough to preserve in a form that one can navigate, hence this post.
This compiles what I consider the most important and interesting parts of that discussion into post form, so it can be more easily seen and referenced, including in the medium-to-long term.
There are a lot of offshoots and threads involved, so I’m using some editorial discretion to organize and filter.
To create as even-handed and useful a resource as possible, I am intentionally not going to interject commentary into the conversation here beyond the bare minimum.
As usual, I use screenshots for most tweets to guard against potential future deletions or suspensions, with links to key points in the threads.



(As Kevin says, I did indeed mean "should" there.)




At this point there are two important threads that follow, and one additional reply of note.

Thread one, which got a bit tangled at the beginning but makes sense as one thread:




Thread two, which took place the next day and went in a different direction:



Link here to Ben’s post, GiveWell and the problem of partial funding.

Link to GiveWell blog post on giving now versus later.



Dustin’s “NO WE ARE FAILING” point seemed important, so I highlighted it.

There was also a reply from Eliezer.


And this on pandemics in particular.

Sarah asked about the general failure to convince Dustin’s friends.




These two notes branch off Ben’s comment that “covers all of EA” didn’t make sense.


Ben also disagreed with the math that there was lots of opportunity, linking to his post A Drowning Child is Hard to Find.
This thread responds to Dustin’s claim, further up the main thread, that you need to know details about the laptop upgrade. I found it worthwhile but did not include it directly for reasons of length.
This came in response to Dustin’s challenge on whether info was 10x better.


After the main part of thread two, there was a different discussion about pressures perhaps being placed on students to be performative, which I found interesting but am not including for length.
This response to the original Tweet is worth noting as well.

Again, thanks to everyone involved, and sorry if I missed your contribution.
I've also had this thought (though I wouldn't necessarily have thought of it as an outside-view argument). I'm not convinced by the counterarguments in the thread so far.
Quoting from a reply below that argues for deferring to grantmakers (thereby increasing their overhead, since they would receive more applications):
>You may also be unaware of ways it would backfire, and the reason something doesn't get funded is because others judge it to be net negative.
I mean, that's true in theory, but giving some extra resources to people you know well (whose character and competence you have a comparative advantage at evaluating) isn't usually a high-variance decision. Sure, if one of your friends had a grand plan for having impact in the category of "tread carefully," then you'd probably want to consult experts to make sure it doesn't backfire. But in that case you'd also want to tell your friend/acquaintance to slow down in general, so the concern isn't specific to whether to give them resources. And for many or even most people who work on EA topics, their work and activities don't come with high backfire risks. (At least I tentatively think so, even though I might agree with the statement "probably >10% of people in EA have predictably negative impact"; most people who have negative impact have low negative impact.)
>This would be like the opposite of the donor lottery, which exists to incentivize fewer deeper independent investigations over more shallow investigations.
I think both things are valuable. You can focus on comparative advantages and reducing overhead, or you can focus on benefits from scale and deep immersion.
One more thought on this: If someone is inexperienced with EA and feels unsuited for any grantmaking decisions, even in areas where they have local information that grantmakers lack, it makes more sense for them to defer. However, it gets tricky: they'll also tend to be bad at deciding who to defer to. So, yes, they can reduce variance and go with something broadly accepted within the community. But that still covers a lot of ground – it applies to longtermism as well as neartermism. Many funds in the community rely on quite specific normative views (and empirical ones, though deference makes sense more straightforwardly for the latter), and the person we're talking about will be poorly positioned to decide on those. So they're generally in a tricky situation and would probably benefit from gaining a better understanding of several things. To summarize, I think that if someone knows where and when to defer, they're probably also in a good enough position to decide that a particular person in their social environment would do good things with more money. (And the idea/proposal here is to give money to people locally only if you actually feel convinced by it, rather than doing it as a general policy. The original comment could maybe be interpreted as supporting a general policy of giving out money to less affluent acquaintances, whereas my stance is more like "Do it if it seems compellingly impactful to you!")