Principal — Good Structures
I previously co-founded and served as Executive Director at Wild Animal Initiative, and was the COO of Rethink Priorities from 2020 to 2024.
Thanks! That's a great question and something I should figure out how to handle. I'll think about the ideal implementation of this and include something for November, but I think if it comes up for October participants:
Nice! This is great pushback! I think that most of my would-be responses are covered by other people, so I'll add one thing just on this:
Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world.
My experience isn't this. I think I have probably engaged with something like ~15 donors at the >$1M level in EA or adjacent fields. Doing a brief exercise in my head of thinking through everyone I could, I got to something like:
So I'd guess that even trying this approach, only about 50% of major donors would pass the filter. Though it seems possible luck also played a major role for many of that 50% and I just don't know about it. I'm surprised you find the overall claim bizarre, though, because to me it often feels somewhat self-evident from interacting with people at different wealth levels within EA, where the best-calibrated people often seem to be mid-level non-executives at organizations: people who don't have the information distortions that come with power, but who do have deep networks, expertise, and a sense of the entire space. To be clear, I don't think ultra-wealthy people have worse views; I just think wealth and having well-calibrated, thoughtful views about the world seem unrelated (or, to the extent they are correlated, the differences stop being meaningful below the wealth of the average EA donor), and that a default of "cause prioritization is directly downstream of the views of the wealthiest people" is certainly worse than many alternatives.
I strongly agree about the clunkiness of this approach, though, and with many of the downsides you highlight. In my ideal EA, lots of experiments like this would be tried, the good ones would survive and iterate, and EAs would generally experiment with different models for distributing funding. This is my humble submission to that project.
Yeah, I agree that this seems tricky. I thought about sub-causes, but also worried they'd just make it really burdensome to participate every month.
I ended up making a Discord for participants and added a channel where people can explain their allocations, so my hope is that this lets people with strong sub-cause prioritization make the case for it to other donors. Definitely interested in thoughts on how to improve this, though; it seems worth exploring further.
After some clarifying offline discussions with someone, I want to explain the decrease in my confidence in the statement, "Farmed vertebrate welfare should be an EA focus".
I think my view is slightly more complicated than this implies. Given that OpenPhil and non-EA donors are basically able to fund what seems like the entirety of the good opportunities in this space, I don't think these groups are that talent constrained; and since the best bets (e.g. corporate campaigns) will likely continue to have decreasing cost-effectiveness, I think new animal-focused talent should probably mostly go into earning-to-give for invertebrates/WAW, and donations should mostly go to groups there or to the EA AWF (which should in turn mostly fund invertebrates and WAW). I don't think farmed vertebrate welfare should be the default way EAs recommend helping animals.
I mean something like directly implementing an intervention vs finance/HR/legal/back office roles, so ops just in the nonprofit sense.
Yeah, I think there are probably parts of EA that will look robustly good in the long run. Part of the reason I think EA as a whole is less likely to be positive (and more likely to be neutral or negative) is that actions in other areas of EA could impact those parts negatively, though this could cut either in favor of or against GHD work. I think just having a positive impact is quite hard, even more so when doing a bunch of uncorrelated things, some of which have major downside risks.
I think it is pretty unlikely that the harm from FTX outweighs the good done by EA on its own. But it seems easy enough to imagine that, conditional on EA's net benefit being barely above neutral (which seems pretty possible to me for the other reasons mentioned above, along with EA increasingly working on GCRs, which directly increases the likelihood that EA work ends up net-negative or neutral even if that shift is positive in expectation), the scale of the stress and financial harm caused by EA via FTX outweighs that remaining benefit. And then there is the brand damage to effective giving, etc.
But yeah, I agree that my original statement above seems a lot less likely than FTX just contributing to an overall EA portfolio of harm, or of work that doesn't matter in the long run.
I don't think it's all net-negative; I think there are lots of worlds where EA's good and bad roughly wash out, or where the overall sign is pretty ambiguous in the long run.
Here are some of the ways I think it's possible EA could end up causing a lot of harm. I don't really think any of these are that likely on their own; I just think it's generally easier to cause harm than to produce good, so there are lots of ways EA could accidentally fail to be positive overall, and I think it generally has an uphill climb to avoid ending up as a neutral or ambiguous quirk in the ash heap of history.
Thanks! This is a great point. I'll work on getting some German tax-deductible options on the list for all categories in future months, but I can also confirm that the pool has up to $1,500 (and potentially more) in donation-swappable dollars to help navigate this right now.