(Poster's note: Given the subject matter, I am posting an additional copy here on the EA Forum. The theoretically canonical copy of this post is on my Substack, and I also post to Wordpress and LessWrong.)
Recently on Twitter, in response to seeing a contest announcement asking for criticism of EA, I offered some criticism of that contest’s announcement.

That sparked a bunch of discussion about central concepts in Effective Altruism. Those discussions ended up including Dustin Moskovitz, who showed an excellent willingness to engage and make clear how his models worked. The whole thing seems valuable enough to preserve in a form that one can navigate, hence this post.
This compiles what I consider the most important and interesting parts of that discussion into post form, so it can be more easily seen and referenced, including in the medium-to-long term.
There are a lot of offshoots and threads involved, so I’m using some editorial discretion to organize and filter.
To create as even-handed and useful a resource as possible, I am intentionally not going to interject commentary into the conversation here beyond the bare minimum.
As usual, I use screenshots for most tweets to guard against potential future deletions or suspensions, with links to key points in the threads.



(As Kevin says, I did indeed mean "should" there.)




At this point there are two important threads that follow, and one additional reply of note.

Thread one, which got a bit tangled at the beginning but makes sense as one thread:




Thread two, which took place the next day and went in a different direction:



Link here to Ben’s post, GiveWell and the problem of partial funding.

Link to GiveWell blog post on giving now versus later.



Dustin’s “NO WE ARE FAILING” point seemed important, so I highlighted it.

There was also a reply from Eliezer.


And this on pandemics in particular.

Sarah asked about the general failure to convince Dustin’s friends.




These two notes branch off of Ben’s comment that “covers all of EA” didn’t make sense.


Ben also disagreed with the math behind the claim that there was lots of opportunity, linking to his post A Drowning Child is Hard to Find.
This thread responds to Dustin’s claim, further up the main thread, that you need to know details about the upgrade to the laptop. I found it worthwhile but did not include it directly for reasons of length.
This came in response to Dustin’s challenge on whether info was 10x better.


After the main part of thread two, there was a different discussion about pressures perhaps being placed on students to be performative, which I found interesting but am not including for length.
This response to the original Tweet is worth noting as well.

Again, thanks to everyone involved, and sorry if I missed your contribution.
Since we are talking about funding people within your network whom you personally know, not randos, the idea is that you already know this stuff about some set of people. Like, explicitly, the case for self-funding norms is the case for utilizing informational capital that already exists rather than discarding it.
I think it is not that hard to keep up with what last year's best opportunities looked like and get a good sense of where the bar will be this year. Compiling the top 5 opportunities or whatever is a lot more labor intensive than reviewing the top 5, and you already state that you are informed enough to know about and agree with the decisions of funders. So I disagree about the level at which we should think we are flying blind.
Yes, I think this will be the most common source of disagreement, at least in your case, my case, and sapphire's case. With respect to the things I know about that were rejected, this was the case.
All of that said, I think I have updated from your posts to be more encouraging of applying for EA funding and/or making forum posts. I will not do this in a deferential manner, and to me it seems harmful to do so -- I think people should feel discouraged if you explicitly discard what you personally know about their competence, etc.