(Poster's note: Given the subject matter, I am posting an additional copy here on the EA Forum. The theoretically canonical copy of this post is on my Substack, and I also post to Wordpress and LessWrong.)
Recently on Twitter, in response to seeing a contest announcement asking for criticism of EA, I offered some criticism of that contest’s announcement.

That sparked a bunch of discussion about central concepts in Effective Altruism. Those discussions ended up including Dustin Moskovitz, who showed an excellent willingness to engage and make clear how his models worked. The whole thing seems valuable enough to preserve in a form that one can navigate, hence this post.
This compiles what I consider the most important and interesting parts of that discussion into post form, so it can be more easily seen and referenced, including in the medium-to-long term.
There are a lot of offshoots and threads involved, so I’m using some editorial discretion to organize and filter.
To create as even-handed and useful a resource as possible, I am intentionally not going to interject commentary into the conversation here beyond the bare minimum.
As usual, I use screenshots for most tweets to guard against potential future deletions or suspensions, with links to key points in the threads.



(As Kevin says, I did indeed mean should there.)




At this point there are two important threads that follow, and one additional reply of note.

Thread one, which got a bit tangled at the beginning but makes sense as one thread:




Thread two, which took place the next day and went in a different direction.



Link here to Ben’s post, GiveWell and the problem of partial funding.

Link to GiveWell blog post on giving now versus later.



Dustin’s “NO WE ARE FAILING” point seemed important so I highlighted it.

There was also a reply from Eliezer.


And this on pandemics in particular.

Sarah asked about the general failure to convince Dustin’s friends.




These two notes branch off of Ben's comment that the covers-all-of-EA framing didn't make sense.


Ben also disagreed with the math that there was lots of opportunity, linking to his post A Drowning Child is Hard to Find.
This thread responds to Dustin's claim, further up the main thread, that you need to know details about the upgrade to the laptop. I found it worthwhile but did not include it directly, for reasons of length.
This came in response to Dustin’s challenge on whether info was 10x better.


After the main part of thread two, there was a different discussion about pressures perhaps being placed on students to be performative, which I found interesting but am not including for length.
This response to the original Tweet is worth noting as well.

Again, thanks to everyone involved and sorry if I missed your contribution.
(This is a long comment, but hopefully pretty informative.)
I didn't fund anything that wasn't getting funding from major funders, I just didn't defer totally to the funders and so overweighted some things and struck out others.
I think I had little trust in funders/evaluators in EAA early on, and part of the reason I donated to RP was because I thought there wasn't enough good research in EAA, even supporting existing EAA priorities, and I was impressed with RP's. I trust Open Phil and the EA Funds much more now, though, since
My main remaining disagreement is that I think we should be researching the wild animal effects of farmed animal interventions and thinking about how to incorporate them in our decisions, since they can plausibly shift priorities substantially. This largely comes down to a normative/decision-theoretic disagreement about what to do under deep uncertainty/with complex cluelessness, though, not a disagreement about what would actually happen.
Yes, but I expect funders/evaluators to be more informed about which undercover investigators would be best to fund, since I won't personally have the time or interest to look into their particular average cost-effectiveness, room for more funding, track record, etc., on my own, although I can read others' research if it's published and come to my own conclusions on that basis. Knowing that one opportunity is really good doesn't mean there aren't far better ones doing similar work. I might give directly to charities doing undercover investigations (or whatever other intervention) rather than through a fund, but I'd prefer to top up orgs that are already getting funded or recommended, since they seem more marginally cost-effective in expectation.
Generally, if there's a bar for cost-effectiveness, some things aren't meeting the bar, and there are things above the bar (e.g. being funded by major funders or recommended by evaluators) with room for more funding, I think you should just top up things above the bar with room for more funding, but you can select among them. If there's an opportunity you're excited about, but not being funded by major funders, I think you should recommend they apply to EA Funds or others (and maybe post about them on the EA Forum), because
There may be exceptions to this, but I think this is the guide most EAs (or at least most EAAs) should follow unless they're grantmakers themselves, are confident they'd be selected as grantmakers if they applied, or have major normative disagreements with the grantmakers.