Ann Garth

interested in climate change and systems thinking

Comments

You have more than one goal, and that's fine

Hi Teo! I know your comment was from a few years ago, but I was so excited to see someone else in EA talk about self-compassion. Self-compassion is one of the main things that lets me be passionate about EA and have a maximalist moral mindset without spiraling into guilt, and I think it should be much better known in the community. I don't know if you ever ended up writing more about this, but if you did, I hope you'd consider publishing it -- I think that could help a lot of people!

An argument that EA should focus more on climate change

Hi Rocket, thanks for sharing these thoughts (and I'm sorry it's taken me so long to get back to you)!

To respond to your specific points:

  1. Improving the magnitude of impact while holding tractability and neglectedness constant would increase impact on the margin, ie, if we revise our impact estimates upwards at every possible level of funding, then climate change efforts become more cost-effective.
  2. It seems like considering co-benefits does affect tractability, but the tractability of these co-benefit issue areas, rather than of climate change per se. Eg, addressing energy poverty becomes more tractable as we discover effective interventions to address it.

I certainly agree with this -- I was only trying to communicate that increases in importance might not be enough to make climate change more cost-effective on the margin, especially if tractability and neglectedness are low. Certainly that should be evaluated on a case-by-case basis.

To be fair, other x-risks are also time-limited. Eg if nuclear war is currently going to happen in t years, then by next year we will only have t−1 years left to solve it. The same holds for a catastrophic AI event. It seems like ~the nuance~ is that in the climate change case, tractability diminishes the longer we wait, as well as the timeframe.

This is true (and very well-phrased!). I think there's some additional ~ nuance ~ which is that the harms of climate change are scalar, whereas the risks of nuclear war or catastrophic AI seem to be more binary. I'll have to think more about how to talk about that distinction, but it was definitely part of what I was thinking about when I wrote this section of the post.

My mistakes on the path to impact

One data point: I recently got a job that, at the time I first applied, I didn't really want. As I went through the interview process, and especially now that I've started, I've found I like it more than I expected based on the job posting alone.