An argument that EA should focus more on climate change

Hi Ann! Congratulations on this excellent piece :)

I want to bring up a portion I disagreed with and then address another section that really struck me. The former is:

> Of course, co-benefits only affect the importance of an issue and don’t affect tractability or neglectedness. Therefore, they may not affect marginal cost-effectiveness.

I think I disagree with this for two reasons:

  1. Improving the magnitude of impact while holding tractability and neglectedness constant would increase impact on the margin; i.e., if we revise our impact estimates upward at every possible level of funding, then climate change efforts become more cost-effective.
  2. Considering co-benefits does seem to affect tractability, but the tractability of the co-benefit issue areas rather than of climate change per se. E.g., addressing energy poverty becomes more tractable as we discover effective interventions for it.
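To make point 1 concrete, here is a toy sketch under the standard ITN decomposition (my framing, not the post's), where marginal cost-effectiveness is treated as the product of importance, tractability, and neglectedness; the specific numbers are made up for illustration:

```python
def marginal_cost_effectiveness(importance, tractability, neglectedness):
    """Toy ITN product: good done per marginal dollar (illustrative only)."""
    return importance * tractability * neglectedness

# Hypothetical numbers: co-benefits revise importance upward while
# tractability and neglectedness are held constant.
base = marginal_cost_effectiveness(importance=1.0, tractability=0.5, neglectedness=0.2)
with_cobenefits = marginal_cost_effectiveness(importance=1.5, tractability=0.5, neglectedness=0.2)

# Scaling importance scales the whole product, so the intervention
# becomes more cost-effective on the margin.
assert with_cobenefits > base
```

The point is just that in a multiplicative model, raising one factor raises the product even when the others are fixed.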

The section that struck me was:

> climate change is somewhat unique in that its harms are horrible and have time-limited solutions; the growth rate of the harms is larger, and the longer we wait to solve them the less we will be able to do.

To be fair, other x-risks are also time-limited: e.g., if nuclear war is going to happen in x years, then by next year we will only have x − 1 years left to solve it. The same holds for a catastrophic AI event. It seems like the nuance is that in the climate change case, both tractability and the timeframe diminish the longer we wait. Compared to the AI case, for example, where the risk itself is unclear, I think this weighing makes climate change mitigation much more attractive.

Thanks for a great read!

Venn diagrams of existential, global, and suffering catastrophes

Why is "people decide to lock in vast nonhuman suffering" an example of failed continuation in the last diagram?