abrahamrowe

4436 karma · Joined · Working (6-15 years)

Bio

Principal — Good Structures

I previously co-founded and served as Executive Director at Wild Animal Initiative, and was the COO of Rethink Priorities from 2020 to 2024.

Comments (197)

Topic contributions (1)

Thanks! This is a great point. I'll work on getting some German-deductible options on the list in all categories for future months, but I can also confirm that the pool has up to $1,500 (and potentially more) in donation-swappable dollars to help navigate this right now.

Thanks! That's a great question and something I should figure out how to handle. I'll think about the ideal implementation of this and include something for November, but for October participants, if it comes up, I think the process would be:

  • Pledge in USD, noting in the comments the amount they plan to give in their preferred currency (spot-converted on the day of the pledge).
  • I'll give them amounts to donate in their preferred currency using that rate.
  • Once they've donated and submitted receipts, I'll spot-convert at the time of donation, and if the dollar has weakened substantially relative to their original pledge, I'll backstop the difference (rough sketch of the arithmetic below).
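To make the conversion step concrete, here's a rough sketch of the arithmetic I have in mind. The rates, the 2% threshold, and the function names are all just illustrative assumptions rather than the pool's actual rules, and it assumes the backstop covers the case where exchange-rate movement leaves the USD value of the donation short of the pledge:

```python
# Hypothetical sketch only: the 0.92 / 0.95 rates, the 2% threshold, and these
# function names are illustrative assumptions, not the pool's actual mechanics.

def pledge_in_preferred_currency(pledge_usd: float, rate_at_pledge: float) -> float:
    """Convert a USD pledge to the donor's currency at the pledge-day spot rate.

    rate_at_pledge is units of the donor's currency per 1 USD (e.g. 0.92 EUR/USD).
    """
    return pledge_usd * rate_at_pledge


def backstop_amount(pledge_usd: float,
                    donated_local: float,
                    rate_at_donation: float,
                    threshold: float = 0.02) -> float:
    """USD amount the pool would top up, if any.

    The local-currency donation is converted back to USD at the donation-day
    rate; if it falls short of the original USD pledge by more than the
    threshold (an assumed 2% here), the pool covers the gap.
    """
    donated_usd = donated_local / rate_at_donation
    shortfall = pledge_usd - donated_usd
    return shortfall if shortfall > pledge_usd * threshold else 0.0


# Example: a $1,000 pledge spot-converted at 0.92 EUR/USD on pledge day.
local_amount = pledge_in_preferred_currency(1_000, 0.92)      # 920.0 EUR
# If the rate is 0.95 EUR/USD on donation day, 920 EUR is only worth ~968 USD,
# so the pool would backstop the ~32 USD gap.
print(round(backstop_amount(1_000, local_amount, 0.95), 2))   # ~31.58
```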

Nice! This is great pushback! I think that most of my would-be responses are covered by other people, so I'll add just one thing on this:

Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world.

My experience doesn't match this. I think I have probably engaged with something like ~15 donors at the >$1M level in EA or adjacent fields. Doing a brief exercise in my head of thinking through everyone I could, I got to something like:

  • ~33% inherited wealth / family business
  • ~40% seem like they mostly "earned it" in the sense that it seems like they started a business or did a job well, climbed the ranks in a company due to their skills, etc. To be generous, I'm also including people here who were early investors in crypto, say, where they made a good but highly speculative bet at the right time.
  • ~20% seem like they did a lot of very difficult work, but also seem to have gotten really, really lucky - e.g. grew a pre-existing major family business a lot, were roommates with Mark Zuckerberg, etc.
    • Obviously we don't have the counterfactuals on these people's lucky breaks, so it's hard for me to guess what the world looks like where they didn't have them, but I'd guess it's at least one where their giving potential is much lower.
  • ~7% I'm not really sure about.
     

So I'd guess that even trying to take this approach, only around 50% of major donors would pass this filter, and it seems possible luck also played a major role for many of those 50% and I just don't know about it. I'm surprised you find the overall claim bizarre, though, because to me it often feels somewhat self-evident from interacting with people at different wealth levels within EA: the best-calibrated people often seem to be mid-level non-executives at organizations, who don't have the information distortions that come with power, but do have deep networks, expertise, and a sense of the entire space. I don't think ultra-wealthy people have worse views, to be clear — just that wealth and having well-calibrated, thoughtful views about the world seem unrelated (or, to the extent they are correlated, those differences stop being meaningful below the wealth of the average EA donor), and certainly a default of "cause prioritization is directly downstream of the views of the wealthiest people" is worse than many alternatives.


I strongly agree about the clunkiness of this approach, though, and with many of the downsides you highlight. In my ideal EA, lots and lots of things like this would be tried, the good ones would survive and iterate, and EAs would generally experiment with different models for distributing funding, so this is my humble submission to that project.

I agree! I think these donors are probably the least incentivized to do this, but also where the most value would come from. Though I'll note that as of me writing this comment, the average is well above 10x the minimum donation.

Yeah, I agree that this seems tricky. I thought about sub-causes, but also worried they'd just make it really burdensome to participate every month.

I ended up making a Discord for participants and added a channel where people can explain their allocation, so my hope is that this lets people with strong sub-cause prioritization make the case for it to other donors. Definitely interested in thoughts on how to improve this though, and it seems worth exploring further.

After some clarifying offline discussions with someone, I want to elaborate on my decreased confidence in the statement, "Farmed vertebrate welfare should be an EA focus".

I think my view is slightly more complicated than this implies. Given that OpenPhil and non-EA donors are basically able to fund what seems like the entirety of the good opportunities in this space, I don't think these groups are that talent-constrained, and the best bets (e.g. corporate campaigns) seem likely to continue declining in cost-effectiveness. So I think new animal-focused talent should probably mostly go into earning-to-give for invertebrates/WAW, and donations should mostly go to groups there or to the EA AWF (which should in turn mostly fund invertebrates and WAW). I don't think farmed vertebrate welfare should be the default way that EAs recommend helping animals.

I mean something like directly implementing an intervention vs. finance/HR/legal/back-office roles, so "ops" just in the nonprofit sense.

Yeah, I think there are probably parts of EA that will look robustly good in the long run, and part of the reason I think EA as a whole is less likely to be positive (and more likely to be neutral or negative) is that actions in other areas of EA could impact those areas negatively. Though this could cut either in favor of or against GHD work. I think just having a positive impact is quite hard, and even more so when doing a bunch of uncorrelated things, some of which have major downside risks.

I think it is pretty unlikely that the harm from FTX outweighs the good done by EA on its own. But it seems easy enough to imagine that, conditional on EA's net benefit being barely above neutral (which seems pretty possible to me for the other reasons mentioned above, along with EA increasingly working on GCRs, which directly increases the likelihood that EA work ends up being net-negative or neutral even if that shift is positive in expectation), the scale of the stress and financial harm caused by EA via FTX outweighs that remaining benefit. And then there is brand damage to effective giving, etc.

But yeah, I agree that my original statement above seems a lot less likely than FTX just contributing to an overall EA portfolio of harm, or of work that doesn't matter in the long run.

I don't think it's all net-negative — I think there are lots of worlds where EA does lots of good and bad that kind of wash out, or where the overall sign is pretty ambiguous in the long run.

Here are some of the ways I think it's possible EA could end up causing a lot of harm. I don't really think any of these are that likely on their own — I just think it's generally easier to cause harm than to produce good, so there are lots of ways EA could accidentally fail to be overall positive, and I generally think it has an uphill road to climb to avoid ending up as a neutral or ambiguous quirk in the ash heap of history.

  • The various charities don't produce enough value to offset the harms of FTX (it seems likely to me that they already have produced more, but I haven't thought about it much).
  • Things around accidentally accelerating AI capabilities in ways that end up being harmful
  • Things around accidentally accelerating various bio capabilities in ways that end up being harmful.
  • Enabling some specific person to enter a position of power where they end up doing a lot of harm.
  • X-risk from AI is overblown, the e/accs are right about the potential of AI, and lots of harm is caused by trying to slow or regulate AI development.
  • An even stronger reactionary response to some future EA effort makes things worse in some way.
  • Most of the risk from AI is algorithmic bias and related issues, and AI folks' conflict with people in that field ends up being harmful to efforts to reduce it.
  • Using only EV to make decisions accidentally leads to a really bad world, even when every individual decision was positive-EV.
  • EA crowds out other, better effective giving efforts that could have arisen.