If we're being precise, I would just avoid thinking in terms of X-risk, since "X-risk" vs "not-an-X-risk" imposes a binary where really we should just care about losing expected value.
If we want a definition to help gesture at the kinds of things we mean when we talk about X-risk, several possibilities would be fine. I like something like "destruction of lots of expected value."
If we wanted to make this precise, which I don't think we should, we would need to go beyond "fraction of expected value" or "fraction of potential." Something could reduce our expectations to zero or negative without being an X-catastrophe: in particular, a previous X-catastrophe might already have reduced our expectations to an insignificant positive value (note that the definitions MichaelA and Mauricio suggest are undesirable for this reason). Conversely, some things that should definitely be called X-catastrophes can destroy expectation without decreasing potential. A precise definition would need to look more like "expectations decreasing by at least a standard deviation." Again, I don't think this is useful, but any simpler alternative won't precisely describe what we mean.
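To make the first failure mode concrete, here's a toy numeric sketch (all numbers hypothetical, chosen only for illustration) of why a "fraction of current expected value lost" criterion misfires, while an absolute-threshold criterion in the spirit of "losing at least a standard deviation" does not:

```python
def fraction_of_ev_lost(ev_before: float, ev_after: float) -> float:
    """Fraction of current expected value destroyed by an event."""
    return (ev_before - ev_after) / ev_before

# Suppose expected future value starts at 100 (arbitrary units).
ev_start = 100.0

# Event A: a genuine X-catastrophe wipes out 99% of expectations.
ev_after_a = 1.0
print(fraction_of_ev_lost(ev_start, ev_after_a))      # 0.99 -> flagged, correctly

# Event B: a minor setback erases the insignificant remainder.
ev_after_b = 0.0
print(fraction_of_ev_lost(ev_after_a, ev_after_b))    # 1.0 -> flagged, wrongly

# An absolute threshold on the original scale (a hypothetical
# "standard deviation"-sized loss) classifies B as minor:
threshold = 25.0
print((ev_start - ev_after_a) >= threshold)           # True: A counts
print((ev_after_a - ev_after_b) >= threshold)         # False: B doesn't
```

Event B destroys 100% of the remaining expected value, so a fractional criterion must call it an X-catastrophe, even though almost nothing was left to lose; the absolute criterion correctly treats it as trivial.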
We might also need to appeal to some idealization of our expectations, such as the expectations of an imaginary smart, knowledgeable observer of human civilization, so that changes in our own knowledge affecting our expectations don't constitute X-catastrophes, but not so idealized that our future is predictable and nothing affects our expectations...
Best to just speak in terms of what we actually care about, not X-risks but expected value.
I wrote a post last year basically trying to counter misconceptions about Ord's definition and also somewhat operationalise it. Here's the "Conclusion" section:
That leaves ambiguity as to precisely what fraction is sufficient to count as "the vast majority", but I don't think that's a very important ambiguity - e.g., I doubt people's estimates would change a lot if we set the bar at 75% of potential lost vs 99%.
I think the more important ambiguities are what our "potential" is and what it means to "lose" it. As Ord defines x-risk, that's partly a question of moral philosophy - i.e. it's as if his definition contains a "pointer" to whatever moral theories we have credence in, our credence in them, and our way of aggregating that, rather than baking a moral conclusion in. E.g., his definition deliberately avoids taking a stance on whether a future where we stay on Earth forever, or a future with only strange but in some sense "happy" digital minds, or failing to reach such futures, would be an existential catastrophe.
This footnote from my post is also relevant: