Can longtermism succeed without creating a benevolent, stable authoritarian regime, given that it is unlikely that all humans will converge on the same values? Without such a hegemony or convergence of values, doesn't it seem as though conflicting interests among different humans will eventually lead to a catastrophic outcome?
For what it's worth, I found this post and the ensuing comments very illuminating. As someone relatively new to both EA and the arguments about AI risk, I was a little confused as to why there was not much pushback on the very high-confidence beliefs about AI doom within the next 10 years. My assumption had been that there was a lot of deference to EY out of reverence and fealty stemming from his role in getting the AI alignment field started, not to mention the other ways he has shaped people's thinking. I also assumed that his track record on pr…
Strong +1 on this. In fact, it seems that the more someone thinks about something and takes a strongly confident public position on it, the more incentive they have to stick to that position. That is why making explicit forecasts and building a forecasting track record is so important for countering this tendency. If arguments cannot be resolved by events happening in the real world, then there is little incentive for someone to change their mind, especially about something speculative and abstract that one can generate arguments for ad infinit…
Thanks for writing this, I found it very insightful! I just watched 'The Day After Trinity' over the weekend, and one thing that stood out to me was that once the machinery of the Manhattan Project was in motion, it seemed like there was no stopping it. Relevant section of Robert Wilson and Frank Oppenheimer talking about it.
Thanks for the answer, and also for the link to the paper, very interesting! I did find it strange that they didn't include a graph, but I haven't read enough economics papers to be confident about that.
Thank you for the detailed answer!
Thanks for sharing this! I felt like I related to it a lot. Instead of thinking that I'm fooling people, I often just distrust the positive feedback I get and only trust feedback that is negative. If I get positive feedback from others, I almost always disregard it and chalk it up to people being nice, sarcastic, or too afraid to express their true opinions to a person of color. From my perspective, anything I'm able to do anyone else could do if they really wanted to, and I'm not exceptional at all.
On a meta level, I filled out the imposter syndrome questionnaire and…
Thanks for writing this! It really resonated with me, despite the fact that I only have a software engineering background and not much ML experience. I'm still struggling to form my own views for a lot of the reasons you mentioned, and one of my biggest sources of uncertainty has been trying to figure out what people with AI/ML expertise think about AI safety. This post has been very helpful in that regard (in addition to other information I've been taking in to help resolve this uncertainty). The issue of AGI timelines has come to be a major crux f…
Hello, my name is Eddie. I was born and raised in Kenya but have lived in Canada (currently in Vancouver) for over 10 years now, where I work as a software engineer. I learnt about Effective Altruism through the conversations between Will MacAskill and Sam Harris on the Waking Up app. The huge disparities between my country of birth and my current country of residence have drawn me to think about how things can be improved for those less fortunate than I am. I would like to move from mostly thinking about it to doing more about it, and it seems the ideas of EA could help me move in that direction. I'm still engaging with EA and wrapping my head around what it's all about, and I hope that joining the forum will help me in my learning process.
Great post! Thanks for writing it! I'm not great at probability, so I'm just trying to understand the methodology.