I don't think it's that much of a sacrifice.
I don't understand how this is an argument applicable to anyone other than yourself; other people clearly feel differently.
I also think that for many, the only difference in practice would be slightly lower savings for retirement.
If that is something they care about or worry about, it's a difference - adding the word "only" doesn't change that!
I've run very successful group brainstorming sessions with experts just to get them to actually think about a topic enough to realize what seems obvious to me. Getting people to talk through what the next decade of AI progress will look like didn't make them experts, or even get them to the basic level I could have presented in a 15-minute talk - but it gave me a chance to push them beyond their cached thoughts, without them rejecting views they see as extreme, since they are the ones thinking them!
But EA should scale, because its ideas are good, and this leaves it in a much trickier situation.
I'll just note that when the original conversation started, I addressed this in a few parts.
To summarize: yes, I think EA should be enormous, but it should not be a global community; it needs to grapple with how the current community works and figure out how to avoid ideological conformity.
There's also an important question about which EA causes are differentially more or less likely to be funded. If you think Pause AI is good, Anthropic's IPO probably won't help. If you think mechanistic interpretability is valuable, it might help to fund more training in relevant areas, but you should expect an influx of funding soon. And if you think animal welfare is important, funding new high-risk startups that can take advantage of a wave of funding in a year may be an especially promising bet.
This could either be a new resource or an extension of an existing one. I expect that improving an existing resource would be faster and require less maintenance.
My suggestion would be to improve the AI Governance section of aisafety.info.
cc: @melissasamworth / @Søren Elverlin / @plex
To possibly strengthen the argument made, I'll point out that moving already-effective money to a more effective cause or donation has a smaller counterfactual impact, because those donors are already looking at the question and could easily come to the same conclusion on their own. Moving money at a "Normie" foundation, on the other hand, can have knock-on effects: getting them to think more about impact at all, and changing their trajectory.
I'd find a breakdown informative, since the distribution both between different frontier firms and between safety and non-safety roles seems really critical, at least in my view of the net impacts of a program. (Of course, none of this tells us counterfactual impact, which might be moving people on net in either direction.)