I think the value of work on climate change isn't much affected by this analysis, since climate change seems almost certainly solved post-ASI, so climate-driven catastrophic setbacks will generally occur pre-ASI and so won't increase the total number of times we need to try to align ASI. Whereas nukes are less certainly solved post-ASI, given we may still be in a multipolar, war-like world.
In Appendix A.1, it's not clear to me that an absolute reduction is the best way of thinking about this. Perhaps it is more natural to think in relative reductions? I suppose some interventions are probably best modelled as absolute reductions (e.g. asteroid or supervolcano interventions) and others as relative reductions (doubling the amount of resources spent on alignment research?).
Yes, I think the '100 years' criterion isn't quite what we want. E.g. if there is a catastrophic setback more than 100 years after we build an aligned ASI, then we don't need to rerun the alignment problem. (In practice, 100 years should perhaps be ample time to build good global governance and reduce catastrophic setback risk to near 0, but conceptually we want to clarify this.)
And I agree with Owen that shorter setbacks also seem important. In fact, in a simple binary model we could just define a catastrophic setback to be one that takes you from a society that has built aligned ASI to one where all aligned ASIs are destroyed. I.e. the key thing is not how many years back you go, but whether you regress back below the critical 'crunch time' period.
When I read the first italicised line of the post, I assumed that one of the unusual aspects was that the post was AI-written. So I was unusually on the lookout for that while reading it. I didn't notice clear slop. The few places that seemed not quite in your voice / a bit more AI-coded were (I am probably forgetting some):
So overall, I would say the AIs acquitted themselves quite well!
Nice re LEEP's honesty (and being well-funded)!
My understanding is that at the end of the CE program, founders are given the opportunity to pitch a set of known regular CE donors for funding, and that most incubated charities get enough money for their first ~year of operations from that. See https://www.seednetworkfunders.com/ for more; it looks like the minimum to join this group of donors is $10k/year.
Longview does advising for large donors like this. Some other orgs I know of are also planning for an influx of money from Anthropic employees, or thinking about how best to advise such donors on comparing cause areas and charities and so forth. This is also relevant: https://forum.effectivealtruism.org/posts/qdJju3ntwNNtrtuXj/my-working-group-on-the-best-donation-opportunities But I agree more work on this seems good!
Perhaps the main downside is that people may overuse the feature, and that it encourages people to spend time making small comments, whereas the current system nudges people towards leaving fewer, more substantive comments and fewer nit-picky ones? Not sure if this has been an issue on LW; I don't read it as much.