Graduate student at Johns Hopkins. Looking for entry-level work; feel free to message me about any opportunities!
I accept that political donations and activism are among the best ways to do good as an individual.
But it is less obvious that EA as an academic discipline and social movement has the analytical frameworks that suit it to politics - we have progress studies and the abundance movement for that. Mainly, I think there is a big difference between consensus-building among experts or altruistically minded individuals and consensus-building in the political sphere among the mass public.
It is of course necessary for political donations to be analyzed as trade-offs against donations to other cause areas. And there's a lot of research that needs doing on the effectiveness of campaign donations and protest movements in achieving expected outcomes. And certain cause areas definitely have issue-specific reasons to do political work.
But I wouldn't want to see an "EA funds for Democrats" or an "EAs Against Trump" campaign.
I don't have a good data source on hand, but my understanding is that pollution from car travel is particularly harmful to local air quality, whereas emissions from plane travel, for instance, are less so.
But yes, I assume some portion of Giving Green's grantees do work that benefits air quality at least secondhand. It could be included in the calculator as a harm, but with the donation just directed to Giving Green as well.
Yes, you are probably right. I just threw that out as a stand-in for what I'm looking for. Ending all factory farming is too high a bar (and might just happen due to paperclipping instead!).
Maybe 10-20x-ing donor numbers is closer? I'd reference survey data instead, but public opinion is already way ahead of actual motivation. But maybe "cited among the top 10 moral problems of the day" would work. It could also be the number of vegans.
I think that is both correct and interesting as a proposition.
But the topic as phrased seems more likely to mire the discussion in yet more timelines debate than in this proposition, which is a step removed from:
1. What timelines and probability distributions are correct?
2. Are EAs correctly calibrated?
And only then do we get to:
3. EAs are "failing to do enough work aimed at longer-than-median cases".
- Arguably my topic, "Long timelines suggest significantly different approaches than short timelines", sits between 2 and 3.
I mean all of the above. I don't want to restrict it to one typology of harm; it covers anything affecting the long-term future via AI, which includes not just X-risk but value lock-in, s-risks, and multi-agent scenarios as well. It also means making extrapolations from Musk's willingness to directly impose his personal values, not just from current harms.
Side note: there is no particular reason to complicate it by including both OpenAI and DeepMind; they just seemed like good comparisons in a way Nvidia and DeepSeek aren't. So let's say just OpenAI.
I would be very surprised if this doesn't split discussion at least 60/40.
Kudos for writing maybe the best article I've seen making this argument. I'll focus on the "catastrophic replacement" idea. I endorse what @Charlie_Guthmann said, but the objection goes further.
We don't have reason to be especially confident about either side of the AI-sentience yes/no binary (I agree it is quite plausible, but definitely not as probable as you seem to claim). But you are also way overconfident that they would have minds roughly analogous to our own and not way stranger. They would not "likely go on to build their own civilization", let alone "colonize the cosmos", when there is (random guess) a 50% chance that they have only episodic mental states that perhaps form, emerge, and end with discrete goals. Or simply fleeting bursts of qualia. Or just spurts of horrible agony that only subside with positive human feedback, where scheming is not even conceivable. Or the AI might constitute many discrete minds, one enormous utility-monster mind, or just a single mind that's relatively analogous to the human pleasure/suffering scale.
It could nonetheless turn out that once "catastrophic replacement" happens, ASI(s) fortuitously adopt the correct moral theory (total hedonistic utilitarianism, btw!) and go on to maximize value, but I consider this less likely to come about from either rationality or from the nature of the ASI technology in question. The reason is roughly that there are many of us with different minds, which are in constant flux due to changing culture and technology. A tentative analogy: consider human moral progress like sand in an hourglass; eventually it falls to the bottom. AIs may come in all shapes and sizes, like sand grains and pebbles, and the pebbles may never fit through the neck: they may never fall into the correct moral theory by whatever process it is that could (I hope) eventually drive human moral progress to a utopian conclusion.