If we're being precise, I would just avoid thinking in terms of X-risk at all, since "X-risk" vs. "not-an-X-risk" imposes a binary distinction where what we really care about is losing expected value.
If we want a definition to help gesture at the kinds of things we mean when we talk about X-risk, several possibilities would be fine. I like something like "destruction of lots of expected value."
If we wanted to make this precise (which I don't think we should), we would need to go beyond "fraction of expected value" or "fraction of potential." Something could reduce our expectations to zero or negative without being an X-catastrophe; in particular, this happens if our expectations had already been reduced to an insignificant positive value by a previous X-catastrophe. (Note that the definitions MichaelA and Mauricio suggest are undesirable for this reason.) Conversely, some things that should definitely be called X-catastrophes can destroy expectation without decreasing potential. A precise definition would need to look more like "expectations decreasing by at least a standard deviation." Again, I don't think this is useful, but any simpler alternative won't precisely describe what we mean.
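To illustrate (purely a sketch, not a definition I'm endorsing): let $V$ denote the value of the long-run future, with expectations and variances taken relative to our (idealized) beliefs at time $t$. Then the "standard deviation" criterion would count an event between $t$ and $t+1$ as an X-catastrophe iff

$$\mathbb{E}_t[V] - \mathbb{E}_{t+1}[V] \;\geq\; \sqrt{\mathrm{Var}_t[V]}.$$

Unlike fraction-of-expectation definitions, this remains meaningful even after a previous catastrophe has already driven $\mathbb{E}_t[V]$ close to zero, since the loss is measured against the remaining spread of outcomes rather than against the remaining expectation.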
We might also need to appeal to some idealization of our expectations, such as expectations from the point of view of an imaginary smart, knowledgeable observer of human civilization, so that changes in our knowledge affecting our expectations don't constitute X-catastrophes, but not one so idealized that our future is predictable and nothing affects our expectations...
Best to just speak in terms of what we actually care about: not X-risks, but expected value.
Per Linch's point that defining existential risk entirely empirically is more or less impossible, I think we should perhaps embrace defining existential risk in terms of value: pick an arbitrary threshold of value, and say that an existential catastrophe has not occurred so long as the world is still capable of reaching that level of value.
But rather than use 1%, 50%, or 90% of optimal as that threshold, we should use a much lower bar, set approximately at the extremely fuzzy boundary of what seems like an "astronomically good future," in order to avoid...
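A rough formalization of this proposal (again just a sketch; $v^*$ is the hypothetical threshold): let $V_{\max}(t)$ be the value of the best future still attainable at time $t$. Then

$$\text{an existential catastrophe has occurred by } t \iff V_{\max}(t) < v^*,$$

where $v^*$ is set near the fuzzy lower boundary of "astronomically good futures" rather than at some fraction of the optimal future's value.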