Graduate student at Johns Hopkins. Looking for entry-level work; feel free to message me about any opportunities!
I don't have a good data source on hand, but my understanding is that pollution from car travel is particularly harmful to local air quality, whereas emissions from plane travel are less so.
But yes, I assume some portion of Giving Green's grantees do work that benefits air quality at least secondhand. It could be included in the calculator as a harm but just directed to Giving Green as well.
Yes, you are probably right. I just threw that out as a stand-in for what I'm looking for. Ending all factory farming is too high a bar (and might just happen due to paper clipping instead!).
Maybe 10-20x-ing donor numbers is closer? I'd reference survey data instead, but public opinion is already way ahead of actual motivation. But maybe "cited among the top 10 moral problems of the day" would work. Could also be the number of vegans.
I think that is both correct and interesting as a proposition.
But the topic as phrased seems more likely to get mired in yet more timelines debate than to reach this proposition, which is a step removed from:
1. What timelines and probability distributions are correct
2. Are EAs correctly calibrated
And only then do we get to
3. EAs are "failing to do enough work aimed at longer-than-median cases".
- arguably my topic "Long timelines suggest significantly different approaches than short timelines" is between 2 & 3
I mean all of the above. I don't want to restrict it to one typology of harm, just anything affecting the long-term future via AI: not only x-risk, but value lock-in, s-risks, and multi-agent scenarios as well. And I'm extrapolating from Musk's willingness to directly impose his personal values, not just current harms.
Side note: there is no particular reason to complicate it by including both OpenAI and DeepMind; they just seemed like good comparisons in a way Nvidia and DeepSeek aren't. So let's say just OpenAI.
I would be very surprised if this doesn't split discussion at least 60/40.
"Grok/xAI is a greater threat to AI Safety than either OpenAI or Google DeepMind"
- (Controversial because the latter presumably have a better chance of reaching AGI first. I take the question to mean "which one, everything else being equal and investment/human capital not being redistributed, would you prefer to not exist?"
Mostly I just want a way to provoke more discussion of the relative harms of Grok as a model, which has fallen into the "so obvious we don't mention it" category. I would welcome better framings.)
Really cool! Easy to use and looks great. Some feedback:
The word "offsetting" seems to have bad PR. But I quite like "Leave no harm" and "a clean slate". I think the general idea could be really compelling to certain audiences. There is at least some subsection of the population that thinks about charity in a "guilty conscience" sense. Maybe guilt is a good framing, especially since it is more generalizable here than most charities are capable of eliciting.
I'm certainly not an expert on this, but I wonder if this could have particular appeal to religious groups? The concept of "Ahimsa" in Hinduism, Buddhism, and Jainism seems relevant.
Last suggestion: Air pollution may be a good additional category of harms. I'm not sure what the best charity target would be, though, given that it is hyper-regional. Medical research? Could also add second-hand cigarette smoke to that.
Seems like the best bet is to make it as comprehensive as possible, without overly diluting the most important and evidence-backed stuff like farmed animal welfare.
I accept that political donations and activism are among the best ways to do good as an individual.
But it is less obvious that EA as an academic discipline and social movement has the analytical frameworks that suit it to politics - we have progress studies and the abundance movement for that.
It is of course necessary for political donations to be analysed as trade-offs against donations to other cause areas. And there's a lot of research that needs doing on the effectiveness of campaign donations and protest movements in achieving expected outcomes. And certain cause areas definitely have issue-specific reasons to do political work.
But I wouldn't want to see an "EA Funds for Democrats" or an "EAs Against Trump" campaign.