Robin Hanson says working on AI alignment today is justifiable only in proportion to the risk of a FOOM scenario (a.k.a. hard takeoff, a.k.a. a lumpy AI timeline). I agree, even though the discussion may have moved on a bit.

But "lumpy" timelines don't seem restricted to AI. Runaway growth of genetically engineered organisms (BLOOM?) seems equally plausible. People have been thinking about climate tipping points for ages.

Can someone point me to relevant writing on this? I haven't been able to find anything discussing the utility of studying FOOM-like scenarios (i.e. catastrophically rapid changes due to new technology) in general, rather than just in AI. I'm sure it's out there; I'm just not sure what to Google.

1 comment

I suppose that Drexler's work on nanotechnology (e.g. Engines of Creation) may qualify as "writing on a FOOM-like scenario". I haven't read it, but my impression is that he theorized about massive economic growth caused by new technology, to the point of human life being fundamentally transformed. The book also gets into risk; Drexler coined the term "gray goo".