Unable to work. I was community director of EA Netherlands but had to quit due to long COVID.
I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research.
Here's one idea as a reference:
In 2024, the campaign for ranked-choice-voting-style reforms in seven states cost nearly $100 million and failed in six of them (RCV was narrowly protected in Alaska). The linked article is a decent description.
Ranked Choice Voting is a good way to reduce polarization in politics, elect more popular (and less extreme) candidates, and increase competition. It would also reduce Trump's power over the Republican Party, largely by blunting the threat of partisan primaries, which could lead to more Congressional pushback.
Despite the disappointing 2024 results, I believe there is a significant opportunity in 2026: the midterms have a more politically engaged turnout, and given the current situation, voters might be more open to reform.
There are alternatives to Ranked Choice Voting (like approval voting), though I'm no expert on them; the toy tally sketched below illustrates how the two differ mechanically. It does seem like, if a campaign were to run in 2026, it would need to start soon.
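For readers unfamiliar with the mechanics, here is a minimal sketch of how instant-runoff counting (the most common RCV implementation) compares to an approval tally. This is my own illustration with made-up ballots, not something from the campaign or the linked article.

```python
# Toy comparison of instant-runoff (RCV) and approval voting tallies.
# Ballots are invented purely for illustration.
from collections import Counter

# Hypothetical ranked ballots, most-preferred candidate first.
ranked_ballots = [
    ["A", "B", "C"], ["A", "B", "C"],                    # 2 voters prefer A
    ["B", "A", "C"], ["B", "C", "A"],                    # 2 voters prefer B
    ["C", "B", "A"], ["C", "B", "A"], ["C", "B", "A"],   # 3 voters prefer C
]

def instant_runoff(ballots):
    """Repeatedly eliminate the last-place candidate until one has a majority."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader
        loser = min(tally, key=tally.get)  # ties broken arbitrarily in this toy
        ballots = [[c for c in b if c != loser] for b in ballots]

# Plurality would elect C (3 of 7 first-place votes); instant runoff instead
# elects the broadly acceptable B once A is eliminated and A's ballots transfer.
print(instant_runoff(ranked_ballots))  # -> B

# Approval voting, the alternative mentioned above: each voter approves a set
# of candidates, and the count is a single pass.
approval_ballots = [{"A", "B"}, {"A", "B"}, {"B"}, {"B", "C"},
                    {"B", "C"}, {"C"}, {"C"}]
print(Counter(c for b in approval_ballots for c in b).most_common(1))  # B leads
```

The point of the toy example: the same electorate elects the polarizing plurality winner C under first-past-the-post, while both reforms pick the broadly acceptable candidate, which is the "less extreme candidates" effect described above.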
There's also a decent chance that such a campaign would be perceived as very hostile by the current administration, and retaliation could do significant damage to the community or to the specific funders behind it.
For forecasts, here are Manifold's US Democracy questions, which I suggest sorting by total traders (unfortunately, anything with fewer than ~30 traders becomes quite unreliable). I've also compiled a Manifold dashboard where the questions are grouped a bit more by theme here.
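If you want to apply that trader-count filter programmatically, here is a rough sketch against Manifold's public v0 API. The endpoint and field names (`search-markets`, `uniqueBettorCount`, `probability`) reflect my reading of their docs and should be treated as assumptions to verify.

```python
# Hedged sketch: list Manifold markets with enough traders to be meaningful,
# applying the n >= 30 cutoff suggested above. Endpoint and field names are
# assumptions about Manifold's public v0 API; check the docs before relying
# on them.
import requests

resp = requests.get(
    "https://api.manifold.markets/v0/search-markets",
    params={"term": "US democracy", "limit": 50},
    timeout=10,
)
resp.raise_for_status()

for market in resp.json():
    # Skip thin markets: under ~30 unique traders, prices are quite unreliable.
    if market.get("uniqueBettorCount", 0) >= 30:
        prob = market.get("probability")  # present on binary markets
        print(f'{market["question"]}  traders={market["uniqueBettorCount"]}  p={prob}')
```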
The main questions are:
I really like that you've chosen this topic and think it's an important one! I wrote my MA Philosophy thesis on this (in 2019, now outdated).
> On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive
I want to flag that I disagree with this framing, as it's very anthropocentric. There are futures in which we go extinct but that are nevertheless highly valuable (e.g. happy sentient AI spreading via von Neumann probes). Perhaps more empirically relevant: I expect almost all effects to go via making the transition to superintelligence go well, and the most distinct action is focusing on digital sentience (which has little effect on extinction risk and much effect on the value of the future).
That sounds like [Cooperative AI](https://www.cooperativeai.com/post/new-report-multi-agent-risks-from-advanced-ai).
One perspective one could have is that this is a positive-sum approach to influence-/power-seeking: supporting neglected policies that would benefit large parts of the US public buys goodwill, helps develop connections with other funders, and might put people who are highly sympathetic to EA ideas into positions of power. With the current state of the EA brand, this might not be such a bad idea.
There are other ways of seeking influence, but they tend to have fewer positive side effects (donating to politicians, running one's own candidates for office), and solely relying on the strategy of "become experts and try to convince those in power of the necessary policies" isn't really bearing fruit. It also seems increasingly untenable to ignore politics, with the US and UK (and the Netherlands) already drastically slashing international aid, and with the AGI trajectory depending heavily on those in power.
It is of course different from the default EA strategy of "do the actual thing you believe is directly most cost-effective and communicate very explicitly about your theory of change". But I don't think that explicitly communicating this would be well-received by third parties. Even explicitly thinking of it this way internally is risky PR-wise.
It does seem important to clearly delineate within EA which communication is meant to be representative of one's actual thinking and which isn't. Muddying this could be quite detrimental to EA in the long term. I'm not sure how OpenPhil should have acted here; perhaps it would have been better not to post it to the EA Forum, so as not to signal "we believe this is good on EA grounds."
All in all, though, I'm positively inclined towards this fund.