# All of ekka's Comments + Replies

My Most Likely Reason to Die Young is AI X-Risk

Great post! Thanks for writing it! I'm not great at probability, so I'm just trying to understand the methodology.

1. The cause-specific probabilities of death should sum to 100% of the overall probability of death, i.e. P(Death|AGI) + P(Death|OtherCauses) = 100% (where 100% means all of P(Death), so that every cause of death is accounted for, rather than 100% meaning P(Death) + P(Life) — sorry for the ambiguous wording). So to correct for this, would you scale natural death down as P(Death|AGI) increases, i.e. P(Death|OtherCauses) = (100% − P(Death|AGI)) × unscaled P(Death|OtherCauses)?
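One way to read the rescaling asked about above is as renormalising the non-AGI causes so that all causes together still partition the total probability of death. A minimal sketch of that idea — every number below is an illustrative assumption, not a figure from the post:

```python
# Rescale non-AGI causes of death so that all causes sum to the total P(Death).
# All probabilities here are made-up shares of P(Death), purely for illustration.

p_death_agi = 0.30            # assumed share of P(Death) attributed to AGI
unscaled_other = {            # assumed unscaled shares for the remaining causes
    "heart disease": 0.50,
    "cancer": 0.35,
    "accident": 0.15,
}

# Scale each non-AGI cause by (1 - P(Death|AGI)), as the comment proposes,
# so that AGI plus the rescaled causes partition P(Death).
scaled_other = {cause: (1 - p_death_agi) * p for cause, p in unscaled_other.items()}

total = p_death_agi + sum(scaled_other.values())
print(scaled_other)
print(round(total, 10))  # the causes sum back to the whole of P(Death)
```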
Questions to ask Will MacAskill about 'What We Owe The Future' for 80,000 Hours Podcast (possible new audio intro to longtermism)

Can Longtermism succeed without creating a benevolent stable authoritarianism given that it is unlikely that all humans will converge to the same values? Without such a hegemony or convergence of values, doesn't it seem like conflicting interests among different humans will eventually lead to a catastrophic outcome?

On Deference and Yudkowsky's AI Risk Estimates

For what it's worth, I found this post and the ensuing comments very illuminating. As a person relatively new to both EA and the arguments about AI risk, I was a little confused as to why there was not much pushback on the very high-confidence beliefs about AI doom within the next 10 years. My assumption had been that there was a lot of deference to EY because of reverence and fealty stemming from his role in getting the AI alignment field started, not to mention the other ways he has shaped people's thinking. I also assumed that his track record on pr...

On Deference and Yudkowsky's AI Risk Estimates

Strong +1 on this. In fact, it seems like the more someone thinks about something and takes a public position on it with strong confidence, the more incentive they have to stick to that position. That's why making explicit forecasts and building a forecasting track record is so important in countering this tendency. If arguments cannot be resolved by events happening in the real world, then there is not much incentive for one to change their mind, especially about something speculative and abstract that one can generate arguments for ad infinit...

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Thanks for writing this, I found it very insightful! I just watched 'The Day After Trinity' over the weekend, and one thing that stood out to me was that once the machinery of the Manhattan Project was in motion, it seemed like there was no stopping it. Here's the relevant section of Robert Wilson and Frank Oppenheimer talking about it.

HaydnBelfield · 2mo
Thanks! And thanks for this link. Very moving on their sense of powerlessness.
Could economic growth substantially slow down in the next decade?

Thanks for the answer and also the link to the paper, very interesting! I did find it strange that they didn't include a graph, but I haven't read enough economics papers to be confident.

My experience with imposter syndrome — and how to (partly) overcome it

Thanks for sharing this! I felt like I related to it a lot. Instead of thinking that I'm fooling people, I often just distrust the positive feedback I get and only trust feedback that is negative. If I get positive feedback from others, I almost always disregard it and chalk it up to people being nice, sarcastic, or too afraid to express their true opinions to a person of color. From my perspective, anything I'm able to do anyone can do if they really want to, and I'm not exceptional at all.

On a meta level I filled out the imposter syndrome questionnaire and...

How I failed to form views on AI safety

Thanks for writing this! It really resonated with me despite the fact that I only have a software engineering background and not much ML experience. I'm still struggling to form my views as well for a lot of the reasons you mentioned, and one of my biggest sources of uncertainty has been trying to figure out what people with AI/ML expertise think about AI safety. This post has been very helpful in that regard (in addition to other information that I've been ingesting to help resolve this uncertainty). The issue of AGI timelines has come to be a major crux f...