Why is gender separated into "men" and "non-men"? I find it very weird but I guess there is a reason. Is something like "men", "women", "other" not optimal for any reason? If so, is there a reason to keep "men" instead of "women"?
Here’s a video that gives you a decent overview of the author’s approach
Very interesting! (Although, at least for people living in Europe, I think not too surprising.) But I deeply hated his Florida-car-salesman-from-the-80s style of speaking. The worst part is that I will listen to it again, because I was not too focused on it!
BTW, you made me buy a Roomba... It never crossed my mind that I could buy one second-hand :-)
Do you have any quote from someone who says we shouldn't care about catastrophic risks at all?
I'm not saying this. And I really don't see how you came to think I do.
The only thing I am saying is that I don't see how anyone would argue that humanity should devote less effort to mitigating a given risk just because it turns out not to be actually existential, even though it may still be more than catastrophic. Therefore, finding out whether a risk is actually existential or not is not really valuable.
I'm not saying anything new here; I made this point several times above. Maybe I did not state it very clearly, but I don't really know how to put it differently.
Let's speak about humanity in general and not about EAs, because where EAs focus does not depend only on the degree of the risk.
Yes, I don't think humanity should currently devote less effort to preventing such risks than to x-risks. Probably the point is that we are doing far too little to tackle dangerous non-immediate risks in general, so it makes no practical difference whether a risk is existential or only almost existential. And this point of view does not seem controversial at all; it is just not explicitly stated. It is not just non-EAs who are devoting a lot of effort to preventing climate change; an increasing fraction of EAs are as well.
Exactly. Even if the ant path may not be permanent, i.e. if we could climb out of it.
My point is that, in terms of the effort I would like humanity to devote to minimising this risk, I don't think it makes any difference whether the ant state is strictly permanent or whether we could eventually get out of it. Maybe if it were guaranteed, or even "only" very likely, that we could get out of this ant state, I could understand devoting less effort to mitigating this risk than if we thought the AGI would eliminate us (or that the ant state would be inescapable).
If we agree on this, then whether a risk is actually existential or not is, in practice, close to irrelevant.
Even if we think we eventually could climb out of our 'ant state' to a state with more potential for humanity...
;-)
[I only quickly listened to the post, and I'm not a philosopher, nor do I know much about Ergodicity Economics]
Maybe Ergodicity Economics (EE) [site, wikipedia] could be relevant here? It is relevant to the St. Petersburg paradox. It has to do with the expected values of stochastic processes not being equal to their time averages (ensemble average ≠ time average).
I am sure there is much more to EE than this, but the one insight I took from it when I first learned about it is that when one of the possible outcomes of a game is losing everything, the expected value does not do a good job of describing it. And at least for x-risks this is exactly the case (and I consider catastrophic risks to be in this domain as well).
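To make the ensemble-average vs time-average point concrete, here is a minimal sketch (my own, not from the post) of the multiplicative coin-flip game Ole Peters often uses to motivate EE: your wealth is multiplied by 1.5 on heads and by 0.6 on tails each round, so the expected value grows ~5% per round while the typical individual trajectory shrinks ~5% per round.

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, n_rounds = 100_000, 100

# Each player starts with wealth 1 and multiplies it by 1.5 (heads)
# or 0.6 (tails) every round.
final_wealth = rng.choice([1.5, 0.6], size=(n_players, n_rounds)).prod(axis=1)

# Ensemble average: E[wealth] = 1.05**t, i.e. it grows ~5% per round...
print("expected wealth:", 1.05 ** n_rounds)                    # ~131.5
# ...but the time-average growth factor is sqrt(1.5 * 0.6) ~ 0.95,
# so the typical player loses ~5% per round.
print("median wealth:", np.median(final_wealth))                # ~0.005
print("players below 1% of start:", (final_wealth < 0.01).mean())  # ~0.54
```

The expected value is dominated by a tiny fraction of extremely lucky trajectories, which is exactly why it is a poor guide when one of the outcomes is effectively losing everything.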
It seems that EE is not very well known in the EA community; the term is hardly ever mentioned on the forum, so I thought I would mention it here in case anyone wants to dig into it more. I'm certainly not the best trained to go deeper into it, nor do I have the time to try.
One post that seems to address EE within EA is this one.
I hope this is a good lead!
Thank you very much!
Thanks for the answer. I was referring to 2. I thought it was something well established, but I think I was so convinced of it because I did not think much about it, and I probably conflated it with 1 as well.
Thanks!