I'm a theoretical CS grad student at Columbia specializing in mechanism design. I write a blog called Unexpected Values which you can find here: https://ericneyman.wordpress.com/. My academic website can be found here: https://sites.google.com/view/ericneyman/.
[Not tax/financial advice]
I agree, especially for donors who want to give to 501(c)(3)'s, since a lot of Anthropic equity is pledged to c3's.
Another consideration for high-income donors that points in the same direction: if I'm not mistaken, 2025 is the last tax year in which donors in the top tax bracket (AGI above $600k) can deduct up to 60% of their AGI; starting in 2026, the One Big Beautiful Bill Act lowers this limit to 35%. (Someone should check this, though, because it's possible that I'm misinterpreting the rule.)
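To illustrate what that would mean if I'm reading the rule correctly (a big if), here's a quick sketch; the $1M AGI figure is hypothetical, and again, none of this is tax advice:

```python
# Illustrative only: assumes the deduction limit really does drop from
# 60% of AGI (2025) to 35% of AGI (2026+), which I'm not certain of.
agi = 1_000_000  # hypothetical adjusted gross income

max_deductible_2025 = 0.60 * agi  # $600,000
max_deductible_2026 = 0.35 * agi  # $350,000

print(max_deductible_2025 - max_deductible_2026)  # $250,000 more deductible in 2025
```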
As one of Zach's collaborators, I endorse these recommendations. If I had to choose among the 501c3s listed above, I'd choose Forethought first and the Midas Project second, but these are quite weakly held opinions.
I do recommend reaching out about nonpublic recommendations if you're likely to give over $20k!
[Link to donate; or consider a bank transfer option to avoid fees, see below.]
Nancy Pelosi has just announced that she is retiring. Previously I wrote up a case for donating to Scott Wiener, who is running for her seat, in which I estimated a 60% chance that she would retire. While I recommended donating on the day that he announced his campaign launch, I noted that donations would look much better ex post in worlds where Pelosi retires, and that my recommendation to donate on launch day was sensitive to my assessment of the probability that she would retire.
I know some people who read my post and decided (quite reasonably) to wait to see whether Pelosi retired. If that was you, consider donating today!
You can donate through ActBlue here (please use this link rather than going directly to his website, because the URL lets his team know that these are donations from people who care about AI safety).
Note that ActBlue charges a 4% fee. I think that's not a huge deal; however, if you want to make a large contribution and are already comfortable making bank transfers, shoot me a DM and I'll give you instructions for making the bank transfer!
Yup! Copying over from a LessWrong comment I made:
Roughly speaking, I'm interested in interventions that cause the people who will make the most important decisions about how advanced AI is used, once it's built, to be smart, sane, and selfless. (Huh, that was some convenient alliteration.)
And so I'm pretty keen on interventions that make it more likely that smart, sane, and selfless people are in a position to make the most important decisions. This includes things like:
This deserves a full post, but for now a quick take: in my opinion, P(no AI takeover) = 75%, P(future goes extremely well | no AI takeover) = 20%, and most of the value of the future is in worlds where it goes extremely well (and comparatively little value comes from locking in a world that's good-but-not-great).
Under this view, an intervention is good insofar as it affects P(no AI takeover) * P(things go really well | no AI takeover). Suppose that a given intervention can change P(no AI takeover) and/or P(things go really well | no AI takeover). Then, to first order, the overall effect of the intervention is proportional to ΔP(no AI takeover) * P(things go really well | no AI takeover) + P(no AI takeover) * ΔP(things go really well | no AI takeover).
Plugging in my numbers, this gives us 0.2 * ΔP(no AI takeover) + 0.75 * ΔP(things go really well | no AI takeover).
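Here's a minimal sketch of that calculation in Python. The point estimates (0.75 and 0.2) are the ones above; the Δ values passed in are made-up placeholders, just to show how the weighting shakes out:

```python
# First-order effect of an intervention on P(no takeover AND future goes extremely well).
P_NO_TAKEOVER = 0.75       # P(no AI takeover)
P_GREAT_GIVEN_NO = 0.20    # P(future goes extremely well | no AI takeover)

def marginal_value(d_no_takeover: float, d_great_given_no: float) -> float:
    """ΔP(no takeover) * P(great | no takeover) + P(no takeover) * ΔP(great | no takeover)."""
    return d_no_takeover * P_GREAT_GIVEN_NO + P_NO_TAKEOVER * d_great_given_no

# Hypothetical interventions, each shifting one probability by one percentage point:
print(marginal_value(0.01, 0.00))  # 0.002  -- takeover-prevention work gets the 0.2 weight
print(marginal_value(0.00, 0.01))  # 0.0075 -- "make the future go well" work gets the 0.75 weight
```

So under my numbers, a one-percentage-point improvement to P(things go really well | no AI takeover) is worth nearly 4x as much as a one-percentage-point improvement to P(no AI takeover).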
And yet, I think that very little AI safety work focuses on affecting P(things go really well | no AI takeover). Probably Forethought is doing the best work in this space.
(And I don't think it's a tractability issue: I think affecting P(things go really well | no AI takeover) is pretty tractable!)
(Of course, if you think P(AI takeover) is 90%, that would probably be a crux.)
California state senator Scott Wiener, author of AI safety bills SB 1047 and SB 53, just announced that he is running for Congress! I'm very excited about this.
It's an uncanny coincidence that the two biggest legislative champions for AI safety in the entire country announced their bids for Congress just two days apart. But here we are.*
In my opinion, Scott Wiener has done really amazing work on AI safety. SB 1047 is my absolute favorite AI safety bill, and SB 53 is the best AI safety bill that has passed anywhere in the country. He's been a dedicated AI safety champion who has spent a huge amount of political capital in his efforts to make us safer from advanced AI.
On Monday, I made the case for donating to Alex Bores -- author of the New York RAISE Act -- calling it a "once in every couple of years" opportunity, but flagging that I was also really excited about Scott Wiener.
I plan to post a more detailed analysis soon, but my bottom line is that donating to Wiener today is about 75% as good as donating to Bores was on Monday, and that this is also an excellent opportunity of a kind that comes up very rarely. (The main reason it looks less good than donating to Bores is that Wiener is running for Nancy Pelosi's seat, and Pelosi hasn't decided whether she'll retire. If not for that, the two donation opportunities would look almost exactly equally good, by my estimates.)
(I think that donating now looks better than waiting for Pelosi to decide whether to retire; if you feel skeptical of this claim, I'll have more soon.)
I have donated $7,000 (the legal maximum) and encourage others to do so as well. If you're interested in donating, here's a link.
Caveats:
*So, just to be clear, I think it's unlikely (20%?) that there will be a political donation opportunity at least this good in the next few months.
In the past, I've had:
My suggestion: set a reminder for, like, September 2026 (I'm guessing that the primary will be in June 2026). Reach out to the campaign if you haven't gotten anything by then.
I just did a BOTEC, and if I'm not mistaken, 0.0000099999999999999999999999999999999999999999988% is incorrect, and instead should be 0.0000099999999999999999999999999999999999999999998%. This is a crux, as it would mean that the SWWM pledge is actually 2x less effective than the GWWC pledge.
I tried to write out the calculations in this comment; in the process of doing so, I discovered that there's a length limit to EA Forum comments, so unfortunately I'm not able to share my calculations. Maybe you could share yours and we could double-crux?
Thanks for the correction!