Thanks for the post, Jan! I follow AI alignment debates only superficially, and I had heard of the continuity assumption as a big source of disagreement, but I didn't have a clear concept of where it stemmed from or what its practical implications were. I think your post does a very good job of grounding the concept and filling those gaps.
These are just the first questions that came to mind, and they may not necessarily overlap with Andreas' interests or knowledge:
Thank you, Shen, this is wonderful! My local group in Colombia is getting ready to run a fellowship for the second time, and hearing about your experience gave me many ideas for things we might try to improve.
If I wanted to be charitable to their answer on the cost of saving a life, I'd point out that $5,000 is roughly the cost of saving a life reliably and at scale. If you relax either of those conditions, saving a life might be cheaper (e.g. GiveWell sometimes finances opportunities more cost-effective than AMF, or perhaps you're optimistic about some highly leveraged interventions like political advocacy). However, I wouldn't bet that this phenomenon accounts for a significant fraction of the divergence in their answers.