Hi Ann! Congratulations on this excellent piece :)
I want to bring up a portion I disagreed with and then address another section that really struck me. The former is:
"Of course, co-benefits only affect the importance of an issue and don't affect tractability or neglectedness. Therefore, they may not affect marginal cost-effectiveness."
I think I disagree with this for two reasons:
The section that struck me was:
"climate change is somewhat unique in that its harms are horrible and have time-limited solutions; the growth rate of the harms is larger, and the longer we wait to solve them the less we will be able to do."
To be fair, other x-risks are also time-limited: e.g., if nuclear war is currently going to happen in t years, then by next year we will have only t−1 years left to solve it. The same holds for a catastrophic AI event. It seems like the nuance is that in the climate change case, both tractability and the remaining timeframe diminish the longer we wait. Compared to the AI case, for example, where the risk itself is unclear, I think this weighing makes climate change mitigation much more attractive.
Thanks for a great read!
Why is "people decide to lock in vast nonhuman suffering" an example of failed continuation in the last diagram?