Robin Hanson argues that working on AI alignment today is justifiable only in proportion to the risk of a FOOM scenario (a.k.a. hard takeoff, a.k.a. a lumpy AI timeline). I agree, even though the discussion may have moved on a bit since then.
But "lumpy" timelines don't seem restricted to AI. Runaway growth of genetically engineered organisms (BLOOM?) seems equally plausible, and people have been thinking about climate tipping points for ages.
Can someone point me to any relevant writing on this? I haven't been able to find anything discussing the utility of studying FOOM-like scenarios (i.e., catastrophically rapid changes driven by new technology) in general, rather than in AI specifically. I'm sure it's out there - I'm just not sure what to Google.