Epistemic status: very tired.
As others mentioned, this feels like too much of an update based on one data point.
One of the largest advantages EAs running for office will have is their ability to fundraise from other EAs. I worry that skepticism of EAs in politics and/or slowness to act on time-sensitive donation opportunities will kneecap the success of future candidates.
Big picture, I think the impact case was pretty solid. The US govt is enormously influential. It moves a lot of money, regulates important industries, has the largest military, and can uniquely affect x-risk. Members of Congress exert significant control over the govt. Senators more, the president most.
Having an extremely committed EA in govt seems worth A LOT to me.
Raising some amount of money is essential to winning, no matter how much outside money is committed to a race. Campaigns need to hire staff, get on the ballot, and do other things that super PACs can't do. They also get much more favorable rates on TV ad buys, can make better ads, etc. "Hard money", i.e. money raised by campaigns from retail donors and governed by donor caps, is way more valuable than "soft money", i.e. independent expenditures made by super PACs.
It seems clear to me that marginal hard dollars increase the odds of success, and it doesn't have to be that big of an increase for it to be a good bet in expected value terms.
I would guess that almost no EAs donating to GiveWell charities really understand the evidence base and models going into the recommendation, but we outsource our thinking to people/orgs we trust. Obviously, there's way less of a track record with running EAs for office and a lot of uncertainty baked into politics. But the most experienced, aligned people in the political data science world were supportive of this particular race happening, and A LOT of thinking went into this decision.
I've definitely noticed this as a part of the EA NYC community (and I wouldn't be surprised if this were true elsewhere). I think it might come from a place of trying to pre-empt common criticisms/characterizations of EA, but it comes off as weird, especially when the person has no preconceptions about EA. EA has a strong culture that's pretty different from every other community I've ever been a part of, but it doesn't exert control over my life. Obviously, ideas and people from EA influence me in big ways, but that's because I believe those ideas and respect those people.
A few thoughts on how we could mitigate some of these risks:
This is a great post, and I'm glad these points are being raised. I share a lot of the same concerns (basically, what happens to EA long term when it's just a good deal to join it?).
A big and small personal win from these changes in funding:
But it's easy to get into self-serving territory where you value your time so highly that you can justify almost any expense (or don't think of cheaper ways to meet the same goals). This can also move us into territory where, to do ostensibly altruistic work, we don't give anything up, and, in fact, argue that others should give things to us.
This feels fundamentally different from the movement that attracted me 5 years ago (though the reasoning is very consistent, and may well be right).
Unilateral disarmament by the US seems bad, but if the US and USSR had eliminated all nukes, as they almost did in 1986, that seems good to me. No other countries had anywhere close to that number, and we could have been much more persuasive in getting other countries to follow suit.
Great, thank you! This is definitely out of date, at least for GiveDirectly, where I used to work. GD has moved over $500M to people in poverty, though a substantial fraction of that (>$200M, if memory serves) went to people in the US. The Impact site says $100M.
Pre-ordered a hardcover copy!
Curious for more specifics on the hardcover vs. Kindle thing. Are Kindle pre-orders counted as some fraction of a hardcover order? If so, what is that fraction?
I'm excited for this series! I'm a big believer in EAs doing more things out in the world, both for the direct impacts but probably even more for the information value.
For example, I'm thrilled that Longview is getting into nuclear security grantmaking. I think this is:
(Disclosure: I contract for Longview on something totally different and learned about this when everyone else did.)
I think the sociology of EA will make us overly biased towards research and away from action, even when action would be more effective in both the near and long term. For example, I think there are major limitations to developing AI governance strategies in the absence of working with and talking to governments.
TBC, research is extremely important, and I'm glad the community is so focused on asking and answering important questions, but I'd be really happy to see more people "get after it" the way you have.
Thanks for this writeup!
Josh Clark also did a podcast series on x-risk called The End of the World. It's very good! Almost everyone he quotes is from FHI, and it's very aligned with EA thinking on x-risk.
Another thing to consider is the enormous amount of info value we got out of this campaign. It looks like large amounts of money are not a sufficient condition for victory, but if Carrick hadn't been able to raise the amount of hard money needed to make the campaign happen, we would've learned a lot less.