I'm a theoretical CS grad student at Columbia specializing in mechanism design. I write a blog called Unexpected Values which you can find here: https://ericneyman.wordpress.com/. My academic website can be found here: https://sites.google.com/view/ericneyman/.
In my vocabulary, "abolitionist" means "person who is opposed to slavery" (e.g. "would vote to abolish slavery"). My sense is that this is the common meaning of the term, but let me know if you disagree.
It seems, then, that the analogy would be "person who is opposed to factory farming" (e.g. "would vote to outlaw factory farms"), rather than "vegetarian" or "animal welfare donor". The latter two are much higher standards, since they require personal sacrifice (in the same way that "not consuming products made with slave labor" was a much higher standard -- one that very few people held themselves to).
The vast majority of EAs oppose factory farming, and I think they would have also supported the abolition of slavery.
In October, I wrote a post encouraging AI safety donors to donate to the Alex Bores campaign. Since then, I've spent a bunch of time thinking about the best donations for making the long-term future go well, and I still think that the Alex Bores campaign is the best donation opportunity for U.S. citizens/permanent residents. By my estimates, donations to his campaign made this month are about 25x better than donations to standard AI safety 501(c)(3) organizations like LTFF.[1] I also think that donations made after December 31st are substantially less effective -- roughly half as effective as donations made this month -- because a lot of the value of donations to Bores comes from signaling campaign strength and consolidating support, rather than from spending money on ads, and donations made in January won't become public until April. (See more discussion in my post.)
Some things have happened since then. The RAISE Act, Bores' AI safety legislation, was signed by the governor![2] Also, the big tech super PAC announced that Alex Bores would be its first target. I've been really impressed with how Bores has handled the situation -- see here for an interview with him about it. Bores also just went on Bloomberg's Odd Lots podcast; I haven't listened to it myself, but I've heard it was a good episode. I have been consistently impressed with Bores since the launch of his campaign.
If you're thinking about end-of-year donations, I strongly encourage you to consider donating to Bores. Here's a link to donate, though I recommend thinking through the career considerations of political donations before deciding to donate. The maximum legal donation is $7,000.
(I think the second best donation opportunity is the Scott Wiener campaign -- here's a link to donate. Make sure to use this link rather than going to his website, because that'll let his team know that you're donating for AI safety reasons.)
In part, this is because I'm bullish on making the future go well conditional on no AI takeover. I think Bores is particularly good from this perspective because he comes across as unusually competent and high-integrity, in a way that I expect to matter beyond AI takeover risk. For donors who only care about mitigating AI takeover, my guess is that donating to Bores is only around 10x better than e.g. LTFF.
Admittedly, in a weakened form, but I'm excited nonetheless.
[Not tax/financial advice]
I agree, especially for donors who want to give to 501(c)(3)'s, since a lot of Anthropic equity is pledged to c3's.
Another consideration for high-income donors that points in the same direction: if I'm not mistaken, 2025 is the last tax year where donors in the top tax bracket (AGI > $600k) can deduct up to 60% of their AGI; the One Big Beautiful Bill Act lowers this number to 35%. (Someone should check this, though, because it's possible that I'm misinterpreting the rule.)
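To make the difference concrete, here's a toy sketch under my reading of the rule above. The $1M-AGI donor is a made-up example, and this ignores carryforwards and all other limits:
```python
# Illustrative only, under my (possibly wrong) reading of the AGI-cap change.
def max_deductible_gift(agi: float, agi_cap: float) -> float:
    """Largest cash donation deductible in a single year under a given AGI cap."""
    return agi * agi_cap

agi = 1_000_000  # hypothetical donor
print(max_deductible_gift(agi, 0.60))  # 600000.0 -- 2025 cap, as I understand it
print(max_deductible_gift(agi, 0.35))  # 350000.0 -- post-2025 cap, as I understand it
```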
As one of Zach's collaborators, I endorse these recommendations. If I had to choose among the 501c3s listed above, I'd choose Forethought first and the Midas Project second, but these are quite weakly held opinions.
I do recommend reaching out about nonpublic recommendations if you're likely to give over $20k!
[Link to donate; or consider a bank transfer to avoid fees (see below).]
Nancy Pelosi has just announced that she is retiring. Previously I wrote up a case for donating to Scott Wiener, who is running for her seat, in which I estimated a 60% chance that she would retire. While I recommended donating on the day that he announced his campaign launch, I noted that donations would look much better ex post in worlds where Pelosi retires, and that my recommendation to donate on launch day was sensitive to my assessment of the probability that she would retire.
I know some people who read my post and decided (quite reasonably) to wait to see whether Pelosi retired. If that was you, consider donating today!
You can donate through ActBlue here (please use this link rather than going directly to his website, because the URL lets his team know that these are donations from people who care about AI safety).
Note that ActBlue charges a 4% fee. I think that's not a huge deal; however, if you want to make a large contribution and are already comfortable making bank transfers, shoot me a DM and I'll give you instructions for making the bank transfer!
Yup! Copying over from a LessWrong comment I made:
Roughly speaking, I'm interested in interventions that cause the people making the most important decisions about how advanced AI is used once it's built to be smart, sane, and selfless. (Huh, that was some convenient alliteration.)
And so I'm pretty keen on interventions that make it more likely that smart, sane, and selfless people are in a position to make the most important decisions. This includes things like:
No, just that, contra your title, most EAs would have been abolitionists, the way that I understand the word "abolitionist" to be used.